Jensen Huang says kids shouldn’t learn to code — they should leave it up to AI.

At the recent World Government Summit in Dubai, Nvidia CEO Jensen Huang made a counterintuitive break with tech-leader wisdom by saying that programming is no longer a vital skill due to the AI revolution.

    • JackGreenEarth@lemm.ee · 7 months ago

      Well, a lot of maths can be done with a calculator. They don’t need to learn to actually understand the maths unless they want to, or they’re going into something like engineering.

      • berg@lemm.ee · 7 months ago

        In many engineering professions you really need to understand the underlying math to have a chance in hell of interpreting the results correctly. Just because you get a result doesn’t mean you get an answer.

      • Skvlp@lemm.ee · 7 months ago

        I disagree. They need to understand math, but they don’t need to be able to calculate math problems in their head.

        • webghost0101@sopuli.xyz · 7 months ago

          As an autist I can’t agree more; understanding something is a requirement for me to do well.

          So much of my struggle in school was based on using formulas without knowing why or what’s behind them, and not understanding the broader practical implications and intended goals of assignments. I was just told to do them the way they were asked, with the formulas I was given (or was forced to remember). I lost motivation, even my will to live, spiraled, and crashed hard in the end.

          I got better. Now I’m sitting here scribbling all kinds of math in my little black book as a way to relax. I don’t watch “TV”, but I won’t miss a Kurzgesagt or a Veritasium video.

          I inherently love science, in major contrast to my later high school grades.

          • Skvlp@lemm.ee · 7 months ago

            Absolutely. If one just “does as told” without understanding, there is no way of knowing if one is lost or not.

            I’ve had similar experiences in school myself, and they truly are detrimental to both learning and the joy of learning.

            I’m glad you are doing better, and thanks for sharing your story :)

        • Dojan@lemmy.world · 7 months ago

          Absolutely. The calculator is a tool to help you solve a problem. If you don’t understand the problem, then at best you can’t confirm if the answer is correct or not, and at worst the entire exercise is completely lost on you.

          The same applies to LLMs. Sure you can get them to spit out code, but unless you understand the code it might be tough to verify that it does what you want. Further, if the code needs adapting (as it often does) then you are shit out of luck if you don’t understand it.

          Sure you can ask the LLM to make changes, but the moment something goes wrong in the prompt you have an error sitting there polluting all future output.
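
          For example, here’s a made-up snippet of the kind of plausible-looking but subtly wrong code an LLM can hand you (the function and the bug are hypothetical, just to illustrate the point):

          ```python
          def average_rating(ratings):
              """Mean of the non-missing ratings (None = missing)."""
              present = [r for r in ratings if r is not None]
              # Bug: divides by the total count instead of the count of
              # present values, so missing entries drag the mean down.
              return sum(present) / len(ratings)

          print(average_rating([4, 5, None, 3]))  # prints 3.0; true mean is 4.0
          ```

          It runs, it looks reasonable, and it’s wrong. Only someone who actually reads and understands it will catch that.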

          • wewbull@feddit.uk · 7 months ago

            Indeed. I’ve been watching a number of evaluations of different LLMs, where people give them a set of problems and then evaluate the results. The number of times I’ve seen “Well, it got that wrong, but if we let it re-evaluate, it gets it right”… If that’s the case, the model is useless. You have to know the right answer before you can ask the model for an answer, because the answer you’ll get can’t be trusted.

            Might as well flip a coin.

            • Dojan@lemmy.world · 7 months ago

              Yeah. I was tasked with evaluating LLMs for software dev at my company last year. Tried a few solutions and tools, and various workflows from just using it as a crutch to basically instructing the LLM to make the application. The former was rarely necessary (but sometimes helpful) and the latter was ridiculously cumbersome.

              You need to be specific and leave no room for interpretation, because the moment you leave room for interpretation, it’ll start making stuff up that doesn’t necessarily fit the spec. You can correct that, but it’s tedious in and of itself, and once it’s had an idea it’ll often have a hard time letting go of it.

              I also had several cases where it outright ignored provided context. That was even more frustrating because then it made assumptions that I’d already proven to be false.

              The best use cases I got from it were:

              • Explaining unclear code
              • Writing clear documentation (it was really good at this)
              • Rubberducking

              Essentially, it was a great helper, but a horrendous developer. Felt more like I was tutoring it than anything else.

              • Skvlp@lemm.ee · 7 months ago

                I haven’t seen anyone mention rubberducking or documentation or understanding code as use cases for AI before, but those are truly useful and meaningful advantages. Thanks for bringing that to my attention :)

                • Dojan@lemmy.world · 7 months ago

                  There are definitely ways in which LLMs and imaging models are useful. Hell, I’ve been playing around with vocal synthesis for years; SynthV’s AI models are amazing, so even for music there are use cases. The problem is big corporations just fucking it up: rampant theft, no compensation for the original creators, and then they sit on the models like dragons. OpenAI needs to rename themselves, preferably years ago, because there’s nothing open about them.

                  The way I see it, the way SynthV (and VOCALOID before it) works is great; you hire a vocalist with the express purpose of making a model out of their voice. They know what they’re getting into, and they’re compensated for it. Then there are licenses and such on these models. In some cases, like those produced by Eclipsed Sounds, anyone who uses a model to create a song gets fairly free rein. In others, like the Bushiroad models, you are fairly restricted in what you can do with them.

                  Meaning the original artist has a say. It’s why some models, like Cangqiong, will never get AI updates; the voice provider’s wishes matter.

                  Using computer generated stuff as a crutch in the creation process is perfectly fine I feel, but outright trying to replace humans with “AI” is a ridiculous notion.

      • pathief@lemmy.world · 7 months ago

        Scientific calculators can do a ton of stuff, but they’re all useless if you don’t know anything about math. If you don’t know anything about the subject, you can’t formulate the right questions.

      • Annoyed_🦀 @monyet.cc · 7 months ago

        You need to learn what addition, subtraction, multiplication, and division are, and how they work, to do anything meaningful with them on a calculator…

      • HeavyDogFeet@lemmy.world · 7 months ago

        This is objectively stupid. There are tonnes of things you learn in maths that are useful for everyday life even if you don’t do the actual calculations by hand.

      • yildolw@lemmy.world · 7 months ago

        They aren’t going to catch the typo or the order-of-operations error they made on their calculator if they don’t understand the math.

      • captainlezbian@lemmy.world · 7 months ago

        And that’s why people don’t understand that I’m not magic. Seriously, no, you should know how to do math and understand how it works. Just like how, as an engineer, I need to understand how stories work.

    • I_Has_A_Hat@lemmy.world · 7 months ago

      I mean, we aren’t exactly teaching kids how to hand-calculate trig anymore. Sin, cos, and tan are pretty much exclusively done with a calculator, and you’d be hard-pressed to find anyone who graduated in the last 25 years who knows any other way to do it.

      • 257m@sh.itjust.works · 7 months ago

        I haven’t graduated high school yet and even I know how to calculate sin and cos with the Taylor series (Maclaurin expansion). I’m still in grade 11, and I assume they’ll teach it next year when I take my calculus class? Do they not teach it anymore?
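
        Something like this, for example (a rough Python sketch of the Maclaurin series; real math libraries use argument reduction and tuned polynomial approximations instead):

        ```python
        import math

        def sin_maclaurin(x, terms=10):
            # sin(x) = x - x^3/3! + x^5/5! - x^7/7! + ...
            return sum(
                (-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
                for n in range(terms)
            )

        print(sin_maclaurin(1.0))  # ~0.8414709848
        print(math.sin(1.0))       # 0.8414709848078965
        ```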

      • silasmariner@programming.dev · 7 months ago

        For a younger age range you might be right, but in general that’s not true; the approximation via a Taylor series is definitely something we teach kids. We don’t generally expect people to be able to calculate it at the speed of a calculator, sure, but at least it’s tested whether they can derive the expansion.

  • Pat12@lemmy.world · 7 months ago

    I know some Gen Z recent grads who use ChatGPT to write their code.

    back in my day, we had to write our code ourselves…

  • Psaldorn@lemmy.world · 7 months ago

    I asked ChatGPT to show me how to do some Godot 4.2 C# stuff the other day as I transition from Unity; it was 70% incorrect.

    Good times. (It was probably right for an older version, but I told it the actual version)

      • leftzero@lemmynsfw.com · 7 months ago

        Not with LLMs it won’t. They’re a dead end. In their rush for short-term profits, so-called AI companies have poisoned the well: the only way to “improve” an LLM is to make it larger, but most of the content on the internet is now produced by these fancy autocomplete engines, so there’s not only no new and better content to train them on, but, since they can’t really generate anything they haven’t been trained on, training them on LLM-generated text will only propagate and amplify errors, like making photocopies of photocopies, or JPEGs of JPEGs.

        It’s all a silly game of telephone now; a circular LLM centipede fed on its own excrement, distilling its own garbage to the point of maximum uselessness.
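
        A toy illustration of that photocopy effect (deliberately oversimplified: a Gaussian repeatedly refitted to its own samples, not a claim about any real model):

        ```python
        import random

        random.seed(0)
        mu, sigma = 0.0, 1.0   # the "real" distribution we start from
        n = 50                 # small finite "training set" per generation

        for generation in range(1000):
            # Train on the previous generation's output...
            data = [random.gauss(mu, sigma) for _ in range(n)]
            # ...and refit; finite samples lose a little spread each round.
            mu = sum(data) / n
            sigma = (sum((x - mu) ** 2 for x in data) / n) ** 0.5

        print(mu, sigma)  # sigma has collapsed far below the original 1.0
        ```

        Each generation only ever sees the previous generation’s output, and the estimation error compounds until the variety is gone.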

        • I_Has_A_Hat@lemmy.world · 7 months ago

          Mhmm, give it another year or so. You’re like people in the ’90s saying that while the internet may be useful for email, that’s the limit of what it can accomplish.

          Forgive me if your claims of a glass ceiling ring hollow considering all the previous glass ceilings people have claimed about AI.

          “An AI will never be able to write in a human-like way.” Check.

          “An AI will never be able to generate a coherent image.” Check.

          “An AI-generated image could never be better than a real artist’s.” Check.

          “AI will never be able to generate a whole video without messing it up.” Check.

          I’m not sure how you can just flippantly say it’s not going to advance or progress in any more meaningful way. This is still a very new technology and it’s already shattered the limits of what people thought was possible.

            • leftzero@lemmynsfw.com · 7 months ago

            Oh, AI is going to progress. LLMs, which are merely applied statistics and no more AI than Markov chains, are not, at least in any significant way (sure, they might get bigger, which won’t really change them qualitatively, but as I pointed out there’s no unpoisoned content to train them on, so making them bigger is moot anyway, other than as a means to temporarily inflate the bubble).

  • r00ty@kbin.life · 7 months ago

    I think my take is: he might be right. That is, by the time kids become adults we may have AGI, and we’ll either be enslaved or have much less work to do (for better or worse).

    But AI as it is now relies on input from humans. When left to take their own output as input, they go full Alabama (sorry, Alabamites) with their output pretty quickly. Currently, they work as a tool in tandem with a human who knows what they’re doing. If we don’t make a leap from this current iteration of AI, then he’ll be very, very wrong.

    • bionicjoey@lemmy.ca · 7 months ago

      If you think AGI is anywhere close to what we have now, you haven’t been using any critical thinking skills when interacting with language models.

      • r00ty@kbin.life · 7 months ago

        I don’t. We’re talking about the next generation of people here. Do pay attention at the back.

        • bionicjoey@lemmy.ca · 7 months ago

          Okay but what I’m saying is that AGI isn’t the logical progression of anything we have currently. So there’s no reason to assume it will be here in one generation.

          • r00ty@kbin.life · 7 months ago

            I’d tend to agree. I said we may have that, and then he might have a point. But if we don’t, he’ll be wrong, because current LLMs aren’t going to (I think, at least) get past their limitations, and they cannot create anything close to original content if left to feed on their own output.

            I don’t think it’s easy to say what will be the situation in 15-20 years. The current generation of AI is moving ridiculously fast. Can we sidestep to AGI? I don’t know the answer, probably people doing more work in this area have a better idea. I just know on this subject it’s best not to write anything off.

            • bionicjoey@lemmy.ca · 7 months ago

              The current generation of AI is moving ridiculously fast.

              You’re missing my point. My point is that the current “AI” has nothing to do with AGI. It’s an extension of mathematical and computer science theory that has existed for decades. There is no logical link between the machine learning models of today and true AGI. One has nothing to do with the other. To call it AI at all is actually quite misleading.

              Why would we plan for something if we have no idea what the time horizon is? It’s like saying “we may have a Mars colony in the next generation, so we don’t need to teach kids geography”

              • r00ty@kbin.life · 7 months ago

                Why would we plan for something if we have no idea what the time horizon is? It’s like saying “we may have a Mars colony in the next generation, so we don’t need to teach kids geography”

                Well, I think this is the point being made quite a bit in this thread. It’s general business-level hyperbole, really, just to get a headline and attention (and it seems to have worked). No one really knows at which point all of our jobs will be taken over.

                My point is that, in general, the current AI models and diffusion techniques are moving forward at quite a rate. But I did specify that AGI would be a sidestep out of the current rail. I think there’s now weight (and money) behind AI, and that pushes AGI research forward. Things moving faster in one lane right now can push investment into other lanes and areas of research. AI is the buzzword every company wants a piece of.

                I’m not as confident as Mr Nvidia is, but with this kind of money behind it, AGI does have a shot at happening.

                In terms of advice about training for software development, though: what I think is certain is that the current LLMs and offshoots of these techniques will keep developing, better frameworks for businesses to train them on their own material will become commonplace, and I think one of the biggest consultancy growth areas will be producing private models for software (and other) companies.

                The net effect is that they will just want fewer (better) engineers making use of the AI to produce more with fewer people. So even without AGI, the demand for software developers and engineers is going to be lower, I think. Is it as favourable an industry to train for now as it was for previous generations? Quite possibly it’s not.

  • Imgonnatrythis@sh.itjust.works · 7 months ago

    “I have a foreboding of an America in my children’s or grandchildren’s time…when awesome technological powers are in the hands of a very few, and no one representing the public interest can even grasp the issues; when the people have lost the ability to set their own agendas or knowledgeably question those in authority; when, clutching our crystals and nervously consulting our horoscopes, our critical faculties in decline, unable to distinguish between what feels good and what’s true, we slide, almost without noticing, back into superstition and darkness…”

    Carl Sagan, Astrologist/Horoscopist from ancient times.

  • hubobes@sh.itjust.works · 7 months ago

    I use LLMs daily to code, but the more complex the issue I’m trying to solve, the more work I have to do to get them to actually produce what I need. I feel like at some point we’ll get to where UML failed… it will just be easier to write the code.

    But I don’t like writing long LINQ queries or Angular templates or whatever, and it does that quite well (70% of the time it’s 70% correct, or so). So it takes over the part of coding I dislike.

    So, no: being able to just write code might become unnecessary, but that’s like 10% of my day anyway.

      • Blemgo@lemmy.world · 7 months ago

        Linus Torvalds’ talk at Aalto University, specifically the segment where he talks about how hard it is to work with Nvidia when it comes to the Linux kernel.

  • ???@lemmy.world · 7 months ago

    And I say I don’t even know this person and he should just stfu and leave those kids alone.

      • ???@lemmy.world · 7 months ago

        Good for him. I like Nvidia and use one, but I have the rest of his company to thank for that.

        I think for me it was a combination of:

        < Name of person I don’t know > says < big unhinged sweeping generalization > for < reason that makes no sense to anyone in the field >

        My first instinct is not to click stuff like this at all. I also think that anyone trying to preach what kids should or shouldn’t do is automatically in the wrong by assuming they have any say in this without a degree in pedagogy.

        • Dojan@lemmy.world · 7 months ago

          He’s also obviously biased, since the more people use LLMs and the like, the more money he gets.

          It’s a bit like “lions think gazelles should be kept in their enclosure”.

  • Annoyed_🦀 @monyet.cc · 7 months ago

    I thought coding skill was mostly about logical thinking, problem solving, and implementing ideas, rather than merely writing code?

    Even then, who’s going to write the code that improves the AI in a meaningful way if no one learns to code? What if the AI writes its own update badly and no one corrects it, and then the badly written AI writes an even worse version of itself? I think in biology we call that cancer.

    • OleoSaccharum@lemm.ee · 7 months ago

      Coding, like writing scientific papers or novels, is only about randomly generating strings, silly human.

      • bionicjoey@lemmy.ca · 7 months ago

        Coding, like writing scientific papers, or novels, is only about randomly generating strings

        See also, litigation, medical diagnoses, creating art that evokes an emotional reaction in its audience, etc.

        It turns out that virtually all human advancement and achievement comes down to simply figuring out what the next most likely token is based on what’s already been typed.

        (/j in case it’s not obvious)
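
        Joking aside, the decoding loop really is about that mechanically simple. A sketch, with a made-up bigram table standing in for the actual model:

        ```python
        # Toy next-token generator: a hand-written bigram table stands in
        # for a real language model's predicted distribution.
        bigram_probs = {
            "the": {"cat": 0.6, "dog": 0.4},
            "cat": {"sat": 0.7, "ran": 0.3},
            "sat": {"down": 1.0},
            "dog": {"ran": 1.0},
            "ran": {"away": 1.0},
        }

        def generate(tokens, max_new=5):
            for _ in range(max_new):
                probs = bigram_probs.get(tokens[-1])
                if not probs:  # no known continuation
                    break
                # Greedy decoding: always pick the most likely next token.
                tokens.append(max(probs, key=probs.get))
            return tokens

        print(" ".join(generate(["the"])))  # the cat sat down
        ```

        All the apparent intelligence lives in the probability table; the loop itself really is just “figure out the next most likely token.”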

    • Artyom@lemm.ee · 7 months ago

      It is, but you should note that the CEO of Nvidia is a manager, and software developers haven’t been able to sufficiently convey your point to managers for about 50 years, so we’re certainly not going to get any better at it in the next few years.