• Echo Dot@feddit.uk · 26 days ago

    I managed to get an AI to build Pong in assembly. These tools are pretty cool, but they’re not sci-fi level just yet. And I didn’t just say “build Pong in assembly”; I had to hand-hold it a little. You need to be a programmer to know how to guide the AI through the task.

    That was something very simple; I doubt you could get it to do anything more complex without a lot more back and forth.

    To give you an example, I had a hard time getting it to understand that the ball needed to bounce off at an angle if intercepted at an angle; it just kept snapping the ball to 90° increments. I couldn’t fix it myself because I don’t know assembly well enough to get into the weeds, so I was stuck until I finally got the AI to do what I wanted. I roughly understood the problem: there was a number somewhere in the system that needed to be made negative, but the AI just kept setting it to a fixed value. A non-programmer wouldn’t recognize that as the problem, so they couldn’t explain to the AI how to fix it.
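
    A hypothetical sketch (in Python here, not the actual assembly involved) of the bug described above: a bounce should negate the relevant velocity component, while the broken version overwrote it with a constant, forcing 90°-aligned movement.

```python
# Buggy version: overwrites the horizontal velocity with a constant,
# so every bounce leaves in a fixed, axis-aligned direction.
def bounce_wrong(vx: int, vy: int) -> tuple[int, int]:
    return (-1, vy)

# Fixed version: negate the component instead, so the ball leaves
# at the same angle it arrived.
def bounce_right(vx: int, vy: int) -> tuple[int, int]:
    return (-vx, vy)

print(bounce_wrong(3, 2))   # (-1, 2): angle information lost
print(bounce_right(3, 2))   # (-3, 2): angle preserved
```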

    I believe AI is going to become an unimaginably useful tool in the future and we probably don’t really yet understand how useful it’s going to be. But unless they actually make AGI it isn’t going to replace programmers.

    If they do make AGI, all bets are off. It will probably go build a Dyson sphere or something at that point, and we will have no way of understanding what it’s doing.

    • VantaBrandon@lemmy.world · 26 days ago

      I tried to get it to build a game of checkers and spent an entire day on it; in the end, I could have built the thing myself. Each iteration got slightly worse, and each fix broke more than it corrected.

      AI can generate an “almost-checkers” game nearly perfectly every time, but once you start getting into more complex rules like double jumping it just shits the bed.
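
      To illustrate why double jumps are the hard part (a minimal hypothetical sketch, not the commenter’s actual code): legal multi-jumps have to be searched recursively, with captured pieces removed as each chain is explored, which is exactly the kind of stateful rule that trips these models up.

```python
# Minimal sketch of multi-jump detection for plain (non-king) pieces
# that only move toward row 0. `board` maps (row, col) to 'mine' or
# 'theirs'. Returns every maximal capture chain starting at `pos`.
def jump_chains(board, pos, path=None):
    path = path or [pos]
    chains = []
    r, c = pos
    for dc in (-1, 1):
        over = (r - 1, c + dc)       # square being jumped over
        land = (r - 2, c + 2 * dc)   # square landed on
        if (board.get(over) == 'theirs' and land not in board
                and land[0] >= 0 and land[1] >= 0):
            nxt = dict(board)
            del nxt[over]            # capture is removed mid-chain...
            nxt[land] = nxt.pop(pos) # ...before searching further jumps
            chains += jump_chains(nxt, land, path + [land])
    # No further jumps: the path so far is a maximal chain, if any jump happened.
    return chains or ([path] if len(path) > 1 else [])

board = {(4, 2): 'mine', (3, 3): 'theirs', (1, 3): 'theirs'}
print(jump_chains(board, (4, 2)))  # [[(4, 2), (2, 4), (0, 2)]]
```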

      What these headlines fail to capture is that AI is exceptionally good at bite-sized, pre-defined tasks at scale, and that is the game changer. It’s still very far from being capable of building an entire app on its own; that feels more like 5-10 years out.

    • Hroderic@lemmy.world · 26 days ago

      Yeah, I don’t see AI replacing any developers working on an existing, moderately complex codebase. It can help speed up some tasks, but it’s far from being able to take a requirement and turn it into code that edits the right places and doesn’t break everything.

  • SuperiorOne@lemmy.ml · 25 days ago

    ‘Soon’ is a questionable claim from a CEO who sells AI services and GPU instances. A single faulty update caused worldwide downtime recently. Now imagine all infrastructure written with today’s LLMs, which sometimes hallucinate so badly that they claim the ‘C’ in CRC-32C stands for ‘Cool’.
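
    For the record, the ‘C’ actually stands for Castagnoli, after the author of the polynomial. A minimal bit-at-a-time CRC-32C sketch using the reflected Castagnoli polynomial 0x82F63B78 reproduces the published check value:

```python
# CRC-32C ("C" for Castagnoli): bit-at-a-time, reflected form.
def crc32c(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # XOR in the reflected Castagnoli polynomial when the low bit is set.
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

print(hex(crc32c(b"123456789")))  # 0xe3069283, the standard check value
```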

    I wish we could also add a “Do not hallucinate” prompt to some CEOs.

  • werefreeatlast@lemmy.world · 26 days ago

    AI is terrible at solving real problems through programming. As soon as the problem is not technical in nature and needs a decision made based on experience, it falls flat on its face.

    • ipkpjersi@lemmy.ml · 26 days ago

      It will never understand context and business rules and things of that nature to the same extent that actual devs do.

  • BigBenis@lemmy.world · 25 days ago

    Lol, as a programmer who uses generative AI myself, I would genuinely love to see them try.

  • SparrowRanjitScaur@lemmy.world · 26 days ago

    Extremely misleading title. He didn’t say programmers would be a thing of the past, he said they’ll be doing higher level design and not writing code.

    • Tyfud@lemmy.world · 26 days ago

      Even so, he’s wrong. This is the kind of stupid thing someone without any first-hand programming experience would say.

      • SparrowRanjitScaur@lemmy.world · 26 days ago

        Not really. It’s doable with ChatGPT right now for programs that have a relatively small scope. If you set very clear requirements and decompose the problem well, it can generate fairly high-quality solutions.

        • OmnislashIsACloudApp@lemmy.world · 26 days ago

          right now, not a chance. it’s okay-ish at simple scripts, and alright as an assistant for producing a buggy draft of anything even vaguely complex.

          ai doing any actual programming is a long ways off.

        • Tyfud@lemmy.world · 26 days ago

          This is incorrect, and I’m in the industry, in this specific field. Nobody in my industry, in my field, at my level seriously considers this effective enough to replace their day-to-day coding beyond generating some boilerplate ELT/ETL-type scripts, which it is only semi-effective at. The output still contains multiple errors 9 times out of 10.

          I cannot be more clear: the people claiming this is possible are not tenured or effective coders, much less 10x devs in any capacity.

          People who think it generates quality enough code to be effective are hobbyists, people who dabble with coding, who understand some rudimentary coding patterns/practices, but are not career devs, or not serious career devs.

          If you don’t know what you’re doing, LLMs can get you close, some of the time. But there’s no way it generates anything close to quality enough code for me to use without the effort of rewriting, simplifying, and verifying.

          Why would I want to voluntarily spend my day trying to decipher someone else’s code? I don’t need ChatGPT to solve a coding problem; I can do it, and I will. My code will always be more readable to me than someone else’s, and that is true by orders of magnitude for AI code gen today.

          So I don’t consider anyone who treats LLM code gen as a viable path forward to be a serious person in the engineering field.

          • SparrowRanjitScaur@lemmy.world · 26 days ago

            It’s just a tool like any other. An experienced developer knows that you can’t apply every tool to every situation. Just like you should know the difference between threads and coroutines and know when to apply them. Or know which design pattern is relevant to a given situation. It’s a tool, and a useful one if you know how to use it.
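
            The threads-vs-coroutines point can be made concrete with a toy Python sketch (my own illustration, not anything from this thread): coroutines let many waits share one thread cooperatively, while blocking calls each need their own OS thread.

```python
import asyncio
import threading
import time

async_results, thread_results = [], []

async def wait_async(i):
    await asyncio.sleep(0.01)   # yields control; other coroutines run meanwhile
    async_results.append(i)

def wait_blocking(i):
    time.sleep(0.01)            # blocks; needs a dedicated OS thread
    thread_results.append(i)

async def main():
    # Many cheap concurrent waits in a single thread.
    await asyncio.gather(*(wait_async(i) for i in range(3)))

asyncio.run(main())

# The blocking equivalent: one OS thread per waiting task.
threads = [threading.Thread(target=wait_blocking, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(async_results), sorted(thread_results))  # [0, 1, 2] [0, 1, 2]
```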

            • rottingleaf@lemmy.world · 26 days ago

              This is like using a tambourine made of optical discs as a storage solution. Well, a bit better, because punctured discs are no good anyway.

              A full description of what a program does is the program itself, have you heard that? (Except for UB, libraries, and so on, but an LLM is no better than a human at those anyway.)

        • Tyfud@lemmy.world · 26 days ago

          They’re falling for a hype train then.

          I work in the industry. With several thousand of my peers every day that also code. I lead a team of extremely talented, tenured engineers across the company to take on some of the most difficult challenges it can offer us. I’ve been coding and working in tech for over 25 years.

          The people who say this are people who either do not understand how AI (LLMs in this case) work, or do not understand programming, or are easily plied by the hype train.

          We’re so far off from this existing with the current tech, that it’s not worth seriously discussing.

          There are scripts and snippets of code that VS Code’s or VS2022’s LLM plugins can help with or suggest. But 9 times out of 10 there are multiple bugs in them.

          If you’re doing anything semi-complex it’s a crapshoot if it gets close at all.

          It’s not bad for generating pseudo-code or templates, but it’s designed to generate code that looks right, not code that is right, and there’s a huge difference.

          AI-generated code is exceedingly buggy, and if you don’t understand what it’s trying to do, it’s impossible to debug, because what it generates is trash-tier code quality.

          The tech may get there eventually, but there’s no way I trust it, or anyone I work with trusts it, or considers it a serious threat or even resource beyond the novelty.

          It’s useful for non-engineers to get an idea of what they’re trying to do, but it can just as easily send them down a bad path.

          • rottingleaf@lemmy.world · 26 days ago

            People use visual environments to draw systems and then generate code for specific controllers, that’s in control systems design and such.

            In that sense there are already situations where they don’t write code directly.

            But this has nothing to do with LLMs.

            Just for designing systems in one place visual environments with blocks might be more optimal.

            • Miaou@jlai.lu · 25 days ago

              And often you still have actual developers reimplementing this shit, because EE majors don’t understand that dereferencing null pointers is bad.

      • rottingleaf@lemmy.world · 26 days ago

        Yeah, there are people who can imagine “in general” how this will happen, but programming is 99% not about “in general”; it’s about specific “dumb” conflicts with objective reality.

        People think that what they generally imagine as the task is the most important part, and since they don’t actually do programming, or anything that requires dealing with those small details, they simply ignore them, because those conversations and opinions exist in a subjective, bendable reality.

        But objective reality doesn’t bend. Their general ideas without every little bloody detail simply won’t work.

    • cheddar@programming.dev · 26 days ago

      So they would be doing engineering and not programming? To me that sounds like programmers would be a thing of the past.

    • realharo@lemm.ee · 26 days ago

      Sounds like he’s just repeating a common meme. I don’t see anything about higher level design that would make it more difficult for an AI (hypothetical future AI, not the stuff that’s available now) compared to lower level tasks.

  • Weirdfish@lemmy.world · 25 days ago

    20 years ago at a trade show, a new module based visual coding tool was introduced in my field which claimed “You’ll never need another programmer”.

    Oddly enough, I still have a job.

    The tools have gotten better, but I still write code every day because procedural programming is still the best way to do things.

    It is just now reaching the point that we can do some small to medium scale projects with plug and play systems, but only with very specific equipment and configurations.

    • ZephyrXero@lemmy.world · 25 days ago

      20 years ago, while I was learning web development, Dreamweaver was supposedly going to eliminate the need to write code for websites too. lol

      But sadly, the dream of eliminating us seems like it will never die

    • yarr@feddit.nl · 24 days ago

      20 years ago at a trade show, a new module based visual coding tool was introduced in my field which claimed “You’ll never need another programmer”.

      It’s because people trying to sell silver bullets is nothing new.

  • auzy@lemmy.world · 25 days ago

    Yeah nah. We already have copilot and it introduces so many subtle bugs.

  • rsuri@lemmy.world · 25 days ago

    To predict what jobs AI will replace, you need to know both of the following:

    1. What’s special about the human mind that makes people necessary for completing certain tasks
    2. What AI can do to replicate or replace those special features

    This guy has an MA in industrial engineering and an MBA, and has been in business his whole career. He has no background in psychology, and only whatever knowledge of AI he’s picked up on the side as part of his work.

    He’s not the guy to ask. And yet, I feel like this is the only kind of guy anyone asks.

  • Vilian@lemmy.ca · 25 days ago

    The first thing AI is gonna replace is CEOs. Dumbass job; a McDonald’s employee requires more expertise.

  • Kualk@lemm.ee · 26 days ago

    Current AI is good at compressing knowledge.

    Best job role: information assistant or virtual secretary.

  • Feyd@programming.dev · 25 days ago

    Meanwhile, LLMs are less useful for helping me write code than IntelliJ was a decade ago.

    • tzrlk@lemmy.world · 25 days ago

      I’m actually really impressed with the autocomplete IntelliJ is packaged with now. It’s really good with golang (probably because golang code has a ton of duplication).

  • irotsoma@lemmy.world · 25 days ago

    And anyone who believes that should be fired, because they don’t understand the technology at all or what is involved in programming for that matter. At the very least it should make everyone question the company if its leadership doesn’t understand their own product.

  • letsgo@lemm.ee · 25 days ago

    Don’t worry guys. As long as project managers think “do the thing … like the thing … (waves hands around) … you know … (waves hands around some more) … like the other thing … but, um, …, different” constitutes a detailed spec, we’re safe.