Don’t learn to code: Nvidia’s founder Jensen Huang advises a different career path

Don’t learn to code, advises Jensen Huang of Nvidia. Thanks to AI, everybody will soon become a capable programmer simply by using human language.

  • kescusay@lemmy.world

    Well. That’s stupid.

    Large language models are amazingly useful coding tools. They help developers write code more quickly.

    They are nowhere near being able to actually replace developers. They can’t know when their code doesn’t make sense (which is frequently). They can’t know where to integrate new code into an existing application. They can’t debug themselves.

    Try to replace developers with an MBA using a large language model AI, and once the MBA fails, you’ll be hiring developers again - if your business still exists.

    Every few years, something comes along that makes bean counters who are desperate to cut costs, and scammers who are desperate for a few bucks, declare that programming is over. Code will self-write! No-code editors will replace developers! LLMs can do it all!

    No. No, they can’t. They’re just another tool in the developer toolbox.

    • paf0@lemmy.world

      I’ve been a developer for over 20 years, and when I see AutoGen generate code, decide to execute that code, and then fix errors by deciding to install missing dependencies, I can tell you I’m concerned. LLMs are a tool, but a tool that might evolve to replace us. I expect a lot of software roles in ten years to look more like an MBA orchestrating AI agents to complete a task. Coding skills will still matter, but not as much as soft skills will.
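
      For anyone who hasn’t watched it do that, here’s roughly the loop I mean. A minimal sketch assuming the 0.2-era AutoGen Python API; the model name, key, and task are placeholders:

      ```python
      # Minimal sketch of an AutoGen two-agent loop (assumes the 0.2-era `pyautogen` API).
      # The assistant writes code; the user proxy runs it locally and feeds errors back,
      # so the assistant can revise the code or decide to install missing dependencies.
      from autogen import AssistantAgent, UserProxyAgent

      assistant = AssistantAgent(
          "coder",
          llm_config={"config_list": [{"model": "gpt-4", "api_key": "YOUR_KEY"}]},  # placeholders
      )
      runner = UserProxyAgent(
          "runner",
          human_input_mode="NEVER",  # fully autonomous: no human in the loop
          code_execution_config={"work_dir": "scratch", "use_docker": False},
      )

      # The agents iterate: generate code -> execute it -> read the traceback -> revise.
      runner.initiate_chat(
          assistant,
          message="Download the front page of example.com and report the five most common words.",
      )
      ```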

      • kescusay@lemmy.world

        I really don’t see it.

        Think about a modern application. Think about the file structure, how the individual sources interrelate, how non-code assets are stored, how applications are deployed, and all the other bits and pieces that go into an application. An AI can’t know any of that without being trained - by a human - on the specifics of that application’s needs.

        I use Copilot for my job. It’s very nice, and makes my job easier. And if my boss fired me and the rest of the team and tried to do it himself, the application would be down in a day, then irrevocably destroyed in a week. Then he’d be fired, we’d be rehired, and we - unlike my now-former boss - would know things like how to revert the changes he made when he broke everything while trying to make Copilot create a whole new feature for the application.

        AI code generation is pretty cool, but without the capacity to know what code actually should be generated, it’s useless.

        • paf0@lemmy.world

          It’s just going to create a summary story about the code base and reference that story as it implements features, not that different from what a human does. It’s not necessarily something it can do now, but it will come. Developers are not special, and I was never talking about Copilot.

          • kescusay@lemmy.world

            I don’t think most people grok just how hard implementing that kind of joined-up thinking and metacognition is.

            You’re right, developers aren’t special, except in those ways all humans are, but we’re a very long way indeed from being able to simulate them in AI - especially in large language models. Humans automatically engage in joined-up thinking, second-order logic, and so on, without having to consciously try. Those are all things a large language model literally can’t do.

            It doesn’t know anything. It can’t conceptualize a “summary story,” or understand parts that it might get wrong in such a story. It’s glorified autocomplete.

            And that can be extraordinarily useful, but only if we’re honest with ourselves about what it is and is not capable of.

            Companies that decide to replace their developers with one guy using ChatGPT or Gemini or something will fail, and that’s going to be true for the foreseeable future.

            • paf0@lemmy.world

              Try for a second to think beyond what they’re able to do now and think about the future. Also, educate yourself on AutoGen and CrewAI; you haven’t actually addressed anything I said because you’re too busy pontificating.

      • rottingleaf@lemmy.zip

        Well, I sometimes see a few tools at my job which are supposed to be kind of usable by people like that. In reality, 90% of the time they can’t use them.

        That’d be because many people think engineers only deal in intermediate technical details and that the general idea is already clear to the MBA. In fact, it’s not.

  • filister@lemmy.world

    Remember when everyone was predicting that we were a couple of years away from fully self-driving cars? I think we’re now a full decade past those couple of years, and I don’t see any fully self-driving cars on the road taking over from human drivers.

    We are now in the honeymoon phase of AI, and I can only assume there will be a huge downward correction in AI stocks that are overvalued and overhyped, like NVIDIA. They’re like crypto stocks: on the moon now, back to Earth tomorrow.

    • Optional@lemmy.world

      Quantum computing is going to make all encryption useless!! Muwahahahahaaa!

      … Any day now… Maybe… ah! No, no. Thought this might be the day, but no, not yet.

      Any day now.

        • Podginator@lemmy.world

          If you were able to generate near life-like images and simulacra of human speech, why would you tell anyone?

          Money. The answer is money.

          Quantum computing wouldn’t be developed just to break encryption; the exponential increase in compute power would fuel a technological revolution. The encryption breaking would be the byproduct.

    • SlopppyEngineer@lemmy.world

      Two decades. The DARPA Grand Challenge was in 2004.

      Yeah, everybody always forgets the hype cycle and the peak of inflated expectations.

    • paf0@lemmy.world

      Waymo exists and is now moving passengers around in three major cities. It’s not taking over yet, but it’s here and growing. The timeframe didn’t meet the hype, but the technology is there.

      • filister@lemmy.world

        Yes, the technology is there, but it is not Level 5; it is 3.5–4 at best.

        The point with a fully self-driving car is that complexity increases exponentially once you reach 98–99%. The last 1–2% are extremely difficult to crack, because there are so many corner cases you can’t really predict, and you need a car that drives more safely than humans if you really want to commercialize the service.

        Same with generative AI: the leap at first was huge, but the jump from GPT-3.5 to 4, or even 3 to 4, wasn’t as dramatic. I can only assume that from now on progress will get exponentially harder, that it will require different, yet-unknown algorithms and models, and that advances will be a lot more modest.

        And I don’t know about you, but ChatGPT isn’t 100% correct, especially on niche questions or more complex queries; it often hallucinates, and sometimes those hallucinations sound extremely plausible.

  • 3volver@lemmy.world

    Don’t tell me what to do. Going to spend more time learning to code from now on, thanks.

  • OleoSaccharum@lemm.ee

    Nvidia is such a stupid fucking company. It’s just slapping different designs onto TSMC chips. All our “chip companies” are like this. In the long run they are all going to get smoked. I won’t tell you by whom. You shouldn’t need a reminder.

    • ammonium@lemmy.world

      Designing a chip is something completely different from manufacturing one. Your statement is about as true as saying TSMC is such a stupid company because all they’re doing is using ASML machines.

      And please do tell me, I have no clue at all who you’re talking about.

      • OleoSaccharum@lemm.ee

        Except what NVIDIA is doing can be done by numerous other chip design firms; TSMC cannot be replaced.

      • barsoap@lemm.ee

        The Chinese? I think their claim to fame is making processes stolen from TSMC work using pre-EUV lithography. Expensive AF because it’s slow, but they’re making some incredibly small structures considering the tech they have available. The Russians are definitely out of the picture; they’re in the ’90s when it comes to semiconductors and can’t even do that at scale.

        And honestly, I have no idea where OP is even from. “All our chip companies”? Certainly not the US; it’s not at all true that all US chip companies are fabless: IBM, TI, and Intel are IDMs. In Germany IDMs predominate (Bosch and Infineon), though there’s of course also some GlobalFoundries here, which is pure-play, as will be the TSMC-Bosch-NXP-Infineon joint venture ESMC. Korea and Japan are also full of IDMs.

        Maybe Britain? ARM is fabless; OTOH, ARM is hardly British any more.

        • OleoSaccharum@lemm.ee

          Amazon’s chip design unit is fabless too; they’re all little mini design units for shit like datacenters.

          It’s hilarious you’re saying that, because Intel labelled itself an investor in US foundry projects and you think they’re exempt from this. Okay man, go work at the plants in Ohio and Arizona. Oh wait, they don’t fucking exist, bruh.

          • barsoap@lemm.ee

            Intel, TI, and IBM all made chips before pure-play and fabless were even a thing, and are still doing so. Intel has 16 fabs in the US, TI has 8, IBM… oh, they sold their shit; I thought they still had some specialised capacity for their mainframes. Well, whatever.

            Of all companies, the likes of Amazon and Google not fabbing their own chips should hardly be surprising. They’re data centre operators; they don’t even sell chips. If they set up fabs, they’d have to start doing that, or compete with TSMC so as not to have idle capacity standing around costing tons of money.

            And that’s only really looking at logic chips; e.g. Micron also has fabs at home in the US.

            • OleoSaccharum@lemm.ee

              None of those companies even make a blip in global chip production, though. Are they for research or something? Why should I give a shit about a tiny, technically-existing fraction of production that will never expand?

              Go look at where there has been actual foundry production for decades. None of the companies you mentioned is even a player in the foundry business. Who cares if they have a facility or two? That’s just part of figuring out what they’re going to order from TSMC.

              • barsoap@lemm.ee

                Your goalposts, they are moving.

                The US has the know-how to produce modern chips at scale, or at least isn’t too far behind in strategic terms. You could bring all production home if that’s what you wanted; it’d cost a lot of money, but it’s simply a policy issue. And Amazon wouldn’t suddenly start to run fabs; they’d hire capacity from Intel or whomever.

                …you’d still be reliant on European EUV machines, though. Everyone is, if you intend to produce very modern chips at scale. But if your strategic interest is making sure that the DMV has workstations and the military has guidance computers, that’s not necessary; pre-EUV processes are perfectly adequate.

                • OleoSaccharum@lemm.ee

                  You are the one moving the goalposts with your boasts about how these companies make up LITERALLY an INFINITESIMAL portion of global chip production. Even if you cut out Samsung and TSMC, they wouldn’t be global players.

                  No, we can’t just bring all production home, lol. We’ve been saying we will for years. Where is the foundry in Ohio, dude? Where is the Arizona foundry that’s supposed to bolster TSMC production?

                  Lol, yeah, sure, go ask ASML how their business is doing right now in light of the US chip-war sanctions. European manufacturing is in as dire a state as the US now, due to financialization and now skyrocketing energy costs.

                  People said this about our military production too. “Oh, Russia messed up now, we’re going to get serious and ramp up our military production.” 🦗🦗🦗🦗🦗🗓️🗓️🗓️🗓️ (time loudly passing and nothing happening)

                  How many more times will it take for people to learn that it gets transmuted directly into stock buybacks, lmao? We don’t have the electrical grid to build up our manufacturing base in the modern world yet. The US is a giant casino for the elite of our empire, full of slums.

              • ammonium@lemmy.world

                None of those companies even make a blip in global chip production, though

                Neither does TSMC; high-end chips are just a tiny part of the total number of chips produced (albeit an important and lucrative part of the market).

                The reason TSMC is alone at the top is that it’s so damn expensive and the market is not that big; there’s basically no room for a competitor. Anyone trying to dethrone them has to have very deep pockets and a good reason not to simply buy from TSMC. The Chinese might be able to pull it off; they have the money and a good reason.

    • Ghostalmedia@lemmy.world

      Humans can probably still look forward to back-breaking careers of manual labor that consist of complex, varied movements!

    • bassomitron@lemmy.world

      At best, in the near term (5–10 years), they’ll automate the generation of moderately complex classes, and it’ll be up to a human developer to piece them together into a workable application, likely having to tweak things to get it working (this is already possible now, with varying degrees of success/utter failure, but it’s steadily improving all the time). Additionally, developers do far more than just write code. Ask any mature dev team: people with no competent skills outside of coding aren’t considered good workers or teammates.

      Now, in 10+ years, if progress continues as it has without a break in pace… who knows? But I agree with you: by the time that happens with high complexity and high reliability for software development, numerous other job fields will already have been automated. This is why legislation needs to be made to plan for this inevitability. Whether that’s through UBI, some offshoot of it, or even banning automation from replacing major job fields, it needs to be seriously discussed and acted upon before it’s too late.

  • howrar@lemmy.ca

    I don’t see how it would be possible to completely replace programmers. The reason we have programming languages instead of natural language is that the latter is ambiguous (a toy example of that ambiguity follows the list below). If you start having to describe your software’s behaviour in natural language, then one of three things can happen:

    1. either this new natural programming language has to make assumptions about what you intend, and thus will only be capable of outputting a certain class of software (i.e. you can’t actually create anything new),
    2. or you need to learn a new way of describing things unambiguously, and now you’re back to programming but with a new language,
    3. or you spend forever going back and forth with the generator until it gives you the output you want, and this would take a lot longer to do than just having an experienced programmer write it.
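
    Here’s that toy example (my own sketch, not from the article): one plain-English request, two legitimate programs, and only the person asking knows which one they meant.

    ```python
    # "Remove duplicate users from the list." -- one sentence, two reasonable programs.
    users = ["Ann", "bob", "Ann", "BOB"]

    # Interpretation 1: exact matches only, keep the first occurrence, preserve order.
    dedup_exact = list(dict.fromkeys(users))      # ['Ann', 'bob', 'BOB']

    # Interpretation 2: names match case-insensitively; keep the first spelling seen.
    seen = {}
    for name in users:
        seen.setdefault(name.lower(), name)
    dedup_caseless = list(seen.values())          # ['Ann', 'bob']

    print(dedup_exact, dedup_caseless)
    ```
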
    • ReplicaFox@lemmy.world

      And if you don’t know how to code, how do you even know if it gave you the output you want until it fails in production?

    • model_tar_gz@lemmy.world

      But that’s not what the article is getting at.

      Here’s an honest take. Let me preface this with some credentials: I’m an AI engineer with many years in the field. Right now I’m directly working on multiple projects that augment and automate code generation, documentation, completion, and even system design/understanding. We’re not there yet. But the pace at which we are improving our code AI is astounding: exponential growth in capability, accuracy, and utility.

      As an anecdotal example: a few years ago I decided I would try to learn Rust (the programming language), because it seemed interesting and we had a practical use case for a performant, memory-efficient compiled language. It didn’t really work out for me, tbh. I just didn’t have the time to get fluent enough with it to be effective.

      Now I’m on a project which also uses Rust. But with ChatGPT and some other models I’ve deployed (Mixtral is really good!) I was basically writing correct, effective Rust code within a week—accepted and merged to main.

      I’m actively using AI code models to write code to train, fine-tune, and deploy AI code models. See where this is going? That’s exponential growth.

      I honestly don’t know if I’d recommend programming as a career to my young kids now, even though it has been very lucrative for me and will take me to my retirement just fine. It excites me and scares me at the same time.

      • rolaulten@startrek.website

        There is more to a program than writing logic. Good engineers are people who understand how to interpret problems and translate the inherent lack of logic in natural language into something that machines are able to understand (or vice versa).

        The models out there right now can truly accelerate the speed of that translation - but translation will still be needed.

        An anecdote for an anecdote. Part of my job is maintaining a set of EKS clusters where downtime is… undesirable (five nines…). I actively use ChatGPT and Copilot when adjusting the code that describes the clusters; however, these tools are not able to understand and explain the impact of things like upgrading the control plane. For that you need a human who can interpret the needs/hopes/desires/etc. of the stakeholders.

        • model_tar_gz@lemmy.world

          Yeah, I get it 100%. But that’s what I’m saying. I’m already working on, and with, models that have entire-codebase-level fine-tuning and understanding. The company I work at is not the first pioneer in this space. Problem understanding and interpretation: all of what you said is true, and there are causal models being developed (I am aware of one team in my company doing exactly that) to address that side of software engineering.

          So I don’t think we are really disagreeing here. Yes, clearly AI models aren’t eliminating humans from software today, but I also really don’t think that day is all that far away. There will always be a need for humans to build systems that serve humans, but the way we do it is going to change so fundamentally that “learn C, learn Rust, learn Python” will all be obsolete sentiments of a bygone era.

          • rolaulten@startrek.website

            Let’s be clear: current AI models are being used by poor leadership to remove bad developers (good ones don’t tend to stick around). This, however, does place some pressure on the greater tech job market (but I’d argue no different from any other downturn we have all lived through).

            That said, until the issues with being confidently incorrect are resolved (and I bet people a lot smarter than me are tackling the problem), it’s nothing better than a souped-up IDE. Now, if you have a public resource you can point me to that can look at a meta-repo full of dozens of tools and help me convert the Python scripts that are wrappers of wrappers (and so on) into something sane, I’m all ears.

            I highly doubt we will ever get to the point where you don’t need to understand how an algorithm works, and for that you need to understand core concepts like recursion and loops. Since human brains are built for pattern recognition rather than algorithmic thinking, that means practising, say, by writing a program to solve a sudoku puzzle.
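
            A rough sketch of the kind of exercise I mean, the classic recursive backtracking solver (my own toy example; the board is a 9x9 list of lists with 0 for blanks):

            ```python
            # Recursive backtracking sudoku solver: recursion plus plain loops, nothing fancy.
            def valid(board, r, c, v):
                if v in board[r]:                                    # row check
                    return False
                if any(board[i][c] == v for i in range(9)):          # column check
                    return False
                br, bc = 3 * (r // 3), 3 * (c // 3)                  # top-left corner of the 3x3 box
                return all(board[br + i][bc + j] != v for i in range(3) for j in range(3))

            def solve(board):
                for r in range(9):
                    for c in range(9):
                        if board[r][c] == 0:
                            for v in range(1, 10):
                                if valid(board, r, c, v):
                                    board[r][c] = v
                                    if solve(board):                 # recurse on the rest of the grid
                                        return True
                                    board[r][c] = 0                  # backtrack
                            return False                             # no digit fits: dead end
                return True                                          # no empty cells left: solved
            ```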

  • Cosmicomical@lemmy.world

    Just because you’re the CEO of a big company doesn’t mean you know what you’re talking about, and in this case it’s clear he doesn’t. You may say “but the company makes a lot of money,” and that’s not a point in his favor either, as this is a clear example of survivorship bias. Coding is going nowhere, and the companies laying people off are just proof that CEOs don’t know what they’re doing.

    For years there have been open-source solutions ready for basically any purpose, and if that hasn’t made coders useless, nothing will. Maybe their job title will change, but people who understand what’s going on at a technical level will always be necessary.

    There have been some situations in the past few years that made the situation less clear-cut, but that doesn’t make coders optional.

  • Wooki@lemmy.world

    This overglorified snake oil salesman is scared.

    Anyone who understands how these models work can see plain as day that we have reached peak LLM. It’s now enshittifying itself, and we are seeing its decline in real time in the quality of generated content. Don’t believe me? Go follow some senior engineers.

      • Wooki@lemmy.world

        You’re asking a question that has already been answered. Pick your platform and you will find a lot of public research on the topic; even more so specifically for programming.

      • thirteene@lemmy.world

        There is a reason they didn’t offer specific examples. LLMs can still scale by size, logical optimization, training optimization and, more importantly, integration. The current implementation is reaching its limits, but the pace of growth is also very fast. AI reduces workload, but it is likely going to require designers and validators for a long time.

        • Wooki@lemmy.world

          For sure, evidence is mounting that increasing model size is not returning the quality expected. It has also had the larger net effect of enshittifying itself through negative feedback loops running from training data to humans and back into training, which has been quantified as a large declining trend in quality. It can only get worse as privacy, IP laws, and other regulations start coming into place. The growth this hype master is selling is pure fiction.

          • msage@programming.dev

            But he has a lot of product to sell.

            And companies will gobble it all up.

            On an unrelated note, I will never own a new graphics card.

            • Wooki@lemmy.world

              Secondhand is better value; new prices right now are nothing short of price fixing. You only need to look at the reduction in memory size since the A100 was released to know what’s happening to GPUs.

              We need serious competition. Hopefully Intel is able to provide it, but foreign competition would be best.

              • msage@programming.dev

                I doubt that any serious competitor will bring any change to this space. Why would they? Everyone will scream “shut up and take my money.”

  • madcaesar@lemmy.world

    This seems as wise as Bill Gates claiming 4 MB of RAM is all you’ll ever need, back in ’98 🙄

  • LainTrain@lemmy.dbzer0.com

    I’m so sick of the hyper-push for specialization; it may be efficient, but it’s soul-crushing. IDK, maybe it’s ADHD, but I’d rather not do any one thing for more than two weeks.

    • wahming@monyet.cc

      What hyper-push? I can’t think of any time in history when somebody with two weeks of experience was in demand.

      • LainTrain@lemmy.dbzer0.com

        Less than 40 years ago, most people got jobs with no experience, or even no education, as long as they showed up and acted confident. Nowadays, entry-level internships want MScs and years of work experience with something that was invented yesterday.

        • wahming@monyet.cc

          You mistake ‘businesses were more willing to take a chance’ for ‘businesses didn’t want experience’. Unless it’s a bullshit retail job, anybody with no experience is a net burden their first month.

  • slappy@lemmy.blahaj.zone

    Don’t listen to electrical engineers, unless they are warning against licking a battery.

    Do listen to whatever your gut says you want to do for a career.

    Do learn to use AI in some fashion, even if for shits and giggles.

    • ___@lemm.ee

      This. The technology is here to stay and will literally change the world. In a few years, when the Sora and SD3 models are released and well understood, and desktop GPUs start offering 24 GB of VRAM on midrange cards because demand calls for it, it will be crazier than we can imagine. LLMs are already near human level given enough compute. As the tech gets faster and commoditized, everyone becomes an artist and a programmer. Information will no longer be trusted, and digital verification technology will proliferate.

      Invest now.

      That, and nuclear batteries capable of running Pi-like machines for decades; 1 W versions are on the horizon from BetaVolt.

      • 🅿🅸🆇🅴🅻@lemmy.world

        Paintings with a unique style will become even more valuable in the future. Generative AI only spews “art” based on previous styles it learned / was trained on. Everything will be even more rehashed than it is today (nod to Everything is a Remix). Having a painting made by an actual human hand on your wall will be more ego-boosting than an AI-generated one.

        Sure, for general digital art (i.e. logos, game character design, etc.), where uniqueness isn’t really mandatory, AI is a good, very cheap tool.

        As for the “everyone becomes a programmer” part… naah.

        • Rikudou_Sage@lemmings.world

          Having a painting made by an actual human hand on your wall will be more ego-boosting

          Nothing really changes, this has always been the case.

      • Nachorella@lemmy.sdf.org

        I’m not sure why you’re being downvoted. I don’t think the current technology is going to replace programmers or artists any time soon (speaking as someone who works as an artist and programmer in a field that monitors AI and its uses), but I also acknowledge that my guess is as good as yours.

        I don’t think it’s going to replace artists because, as impressive as the demos we all see are, whenever I’ve done any thorough testing, every AI model inevitably fails at coming up with something new. It’s so held back by what it’s trained on that to contemplate it replacing artists, who are very capable of imagining new things, seems absurd to me.

        Same with programming: ask for something it doesn’t know about and it’ll lie, make something up, and confidently proclaim it as truth. It can’t fact-check itself, so I can only see it as a time-saving tool for professionals and a really cool way for hobbyists to get results that were otherwise off the table.

        • Womble@lemmy.world

          I can’t speak for certain about generating art, since I’m no artist and the limit of my experience there is playing around with Stable Diffusion, but it feels like it’s in the same place as LLMs are for programming: incredibly impressive at first, but once you’ve used it for a bit the flaws become obvious. It will be a very powerful tool for artists to use, just like LLMs are for programming, and will likely significantly decrease the time needed to produce something, but it’s nowhere near replacing a human entirely.

          • Nachorella@lemmy.sdf.org

            Yeah, for art it’s similar: you can get some really compelling results, but once it’s tasked with creating something a bit too specific, it ends up wasting your time more than anything.

            There’s definitely uses for it and it’s really cool, but I don’t think it’s as close to replacing professionals as some people think.

    • TherouxSonfeir@lemm.ee

      I use “AI” when I work. It’s like having a really smart person who knows a bit about everything, available 24/7, with useful responses. Sure, it’s not always right, but it usually leads me down the path to solving my problem a lot faster than I could with “Googling.” Remember Google? What a joke.

      • CosmoNova@lemmy.world

        I think it’s less a really smart person and more a very knowledgeable person with an inflated ego, so you take everything they say with a grain of salt. Useful nonetheless.

        • Rikudou_Sage@lemmings.world

          I think a colleague of mine made a great comparison: It’s like having access to a thousand junior devs who can reply really fast.

    • JackFrostNCola@lemmy.world

      Just being a stickler here, but: Electronics Engineers, not Electrical. Similar-sounding, but it’s like the difference between a submarine captain and an airplane captain.

  • swayevenly@lemm.ee

    I think the Jensen quote loosely implies we don’t need to learn a programming language, but the logic was flimsy. Same goes for the author, as they backtrack a few times. Not a great article, in my opinion.

    • DudeDudenson@lemmings.world

      Jensen’s just trying to ride the AI bubble as far as it’ll go; next he’ll tell you to forget about driving or studying.

  • fidodo@lemmy.world

    As a developer building on top of LLMs, my advice is to learn programming architecture. There’s a shit ton of work that needs to be done to get this unpredictable, non-deterministic tech to work safely and accurately. This is like saying “get out of tech” right before the Internet boom. The hardest part of programming isn’t writing low-level functions; it’s architecting complex systems while keeping them robust, maintainable, and expandable. By the time an AI can do that, all office jobs are obsolete. AIs will be able to replace CEOs before they can replace system architects. Programmers won’t go away; they’ll just have less busywork and will instead need to work at a higher level, but the complexity of those higher-level requirements is about to explode, and we will need LLMs to do the simpler tasks, with our oversight, to make sure they get integrated correctly.
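
    To give a concrete flavour of that “make unpredictable tech work safely” part: a lot of it is just wrapping the model in validation and retries before its output touches anything important. A minimal sketch; call_llm is a hypothetical stand-in for whatever client you actually use:

    ```python
    # Sketch: force an LLM's free-form output into a schema we can trust downstream.
    import json

    def call_llm(prompt: str) -> str:
        # Hypothetical placeholder: swap in your real model client here.
        raise NotImplementedError("plug in your model client")

    def extract_order(text: str, max_retries: int = 3) -> dict:
        prompt = (
            "Return ONLY a JSON object with keys 'item' (string) and 'quantity' (integer) "
            f"describing this order: {text!r}"
        )
        for attempt in range(max_retries):
            raw = call_llm(prompt)
            try:
                data = json.loads(raw)
                # Validate structure and types before the result reaches any business logic.
                if isinstance(data, dict) and isinstance(data.get("item"), str) \
                        and isinstance(data.get("quantity"), int):
                    return data
            except json.JSONDecodeError:
                pass
            prompt += f"\nAttempt {attempt + 1} was not valid JSON matching the schema. Try again."
        raise ValueError("model never produced output matching the schema")
    ```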

    I also recommend still learning the fundamentals, just maybe not as deeply as you used to need to. Knowing how things work under the hood still helps immensely with debugging and with creating better, more efficient architectures, even at a high level.

    I will say, I do know developers who specialized in algorithms and who are feeling pretty lost right now, but they’re perfectly capable of adapting their skills to the new paradigm; their issue is more the personal one of deciding what they want to do, since algorithms were their passion.

    • gazter@aussie.zone

      In my comment elsewhere in the thread I talk about how, as a complete software noob, I like to design programs by making a flowchart first, and how I wish the flowchart itself was the code.

      It sounds like what I’m doing might be (super basic) programming architecture? Where can I go to learn more about this?