• t3rmit3@beehaw.org · 6 months ago

    This is just an extension of the larger issue of people not understanding how AI works, and trusting it too much.

    AI is, and has always been, about trading accuracy for speed. It excels in cases where slow, methodical work isn’t given sufficient time anyway, because accuracy is already low(er) as a result (e.g. overworked doctors examining CT scans).

    But it should never be treated as the final word on something; it’s the first ~70%.

    • Scrubbles@poptalk.scrubbles.tech · 6 months ago

      I feel like I’ve been screaming this for so long, and you’re someone who gets it. AI stuff right now is pretty neat. I’ll use it to get jumping-off points and new ideas on how to build something.

      I would never ever push something written by it to production without scrutinizing the hell out of it.

  • Auzy@beehaw.org · 6 months ago

    This isn’t even a debate lol…

    Stuff like Copilot is awesome at producing code that looks right but contains subtly wrong variable names it has invented itself, or bad algorithms.
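
    A minimal hypothetical sketch of that failure mode (every identifier below is invented for illustration): the completion reads fine at a glance, but it references a variable the model made up rather than the one actually defined.

    ```python
    # Plausible-looking completion with a self-invented name.
    def average_price(prices: list[float]) -> float:
        num_items = len(prices)
        total = sum(prices)
        # The model suggests `item_count`, which was never defined;
        # the real variable two lines up is `num_items`.
        return total / item_count  # NameError at runtime
    ```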

    And that’s not the big issue.

    The big issue is when you get distracted for five minutes, come back, and forget that you were still working through that block of AI-generated code (which looks correct). You forget to check the rest of it, it slips into the source code before testing, and only later do you realise it’s screwed because it’s AI-generated code.

    The other big issue is that it’s only a matter of time until people get fed up and start feeding these systems dodgy data to de-train them, making them worse or planting backdoors.

  • Artyom@lemm.ee · 6 months ago

    Anyone who’s going to copy and paste code that they don’t understand is inherently a security vulnerability.
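
    To make that concrete, here is a hedged, hypothetical example (names and schema invented) of the kind of snippet that circulates: it looks harmless and works in a demo, but pasting it without understanding it ships a SQL injection.

    ```python
    import sqlite3

    def find_user_unsafe(conn: sqlite3.Connection, username: str) -> list:
        # Looks fine and runs fine, but username = "x' OR '1'='1"
        # returns every row: classic SQL injection.
        query = f"SELECT * FROM users WHERE name = '{username}'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn: sqlite3.Connection, username: str) -> list:
        # Parameterized query; the driver handles escaping.
        return conn.execute(
            "SELECT * FROM users WHERE name = ?", (username,)
        ).fetchall()
    ```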

  • TheFriendlyArtificer@beehaw.org · 6 months ago

    My argument is thus:

    LLMs are decent at boilerplate. They’re good at rephrasing things so that they’re easier to understand. I had a student who struggled for months to wrap her head around how pointers work; two hours with GPT and the ability to ask clarifying questions, and now she’s rockin’.

    I like being able to plop in a chunk of Python and say, “type-annotate this for me, and none of your sarcasm this time!”
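
    As a sketch of that kind of request (the function here is invented for illustration), the before and after might look like this:

    ```python
    # Before: the untyped chunk you plop in.
    def tally(scores, passing):
        return {name: s >= passing for name, s in scores.items()}

    # After: the same function as an assistant might annotate it.
    def tally_typed(scores: dict[str, int], passing: int) -> dict[str, bool]:
        return {name: s >= passing for name, s in scores.items()}
    ```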

    But if you’re using an LLM as a problem solver and not as an accelerator, you’re going to lack some of the deep understanding of what happens when your code runs.

    • jherazob@beehaw.org · 6 months ago

      The thing is that this is NOT what the marketers are selling. They’re not selling this as “buy access to our service so that your products will be higher quality”; they’re selling it as “this will replace many of your employees”. Which it can’t; it’s very clear by now that it just can’t.