• Cyclohexane@lemmy.ml · 9 months ago

    A worrying number of my colleagues use AI blindly. Like the kind where you just press tab without even looking. Those who do look spend only a second before moving on.

    They call me anti-AI, even though I’ve used ChatGPT since day one. LLMs are great tools, but I’m just too paranoid to use them in that manner. I’d rather have it explain to me how to do the thing instead of doing the thing for me (and it’s even better at explaining).

    EDIT: Typo

    • anti-idpol action@programming.dev · 9 months ago

      Also, one really good practice from the pre-Copilot era still holds, though many new Copilot users (my past self included) might forget it: don’t write a single line of code without knowing its purpose. Another thing: while Copilot can save a lot of time on boilerplate, whenever it uses your current buffer’s contents to generate several lines of very similar code, you need to stop and think about whether it wouldn’t be wiser to extract the repetitive code into a method (see the sketch below). While the output is usually algorithmically correct, good design still remains largely up to humans.
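
      For example (a made-up Python sketch; the field names and the sanitize helper are hypothetical, not from any real codebase), tab-completion happily continues a repetitive pattern that is often better extracted into one small helper:

          from types import SimpleNamespace

          def sanitize(value: str) -> str:
              # Stand-in for whatever cleaning the real code does.
              return value.strip()

          raw = {"name": " Ada ", "email": "ada@example.com ",
                 "phone": "555-0100", "address": " 1 Main St "}
          user = SimpleNamespace()

          # What tab-completion tends to produce: one near-identical line per field.
          user.name = sanitize(raw["name"])
          user.email = sanitize(raw["email"])
          user.phone = sanitize(raw["phone"])
          user.address = sanitize(raw["address"])

          # Often the wiser design: extract the repetition into a method.
          def apply_sanitized(target, source, fields):
              for field in fields:
                  setattr(target, field, sanitize(source[field]))

          apply_sanitized(user, raw, ["name", "email", "phone", "address"])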

  • Daxtron2@startrek.website · 9 months ago

    I think this is extremely important:

    Furthermore, we find that participants who trusted the AI less and engaged more with the language and format of their prompts (e.g. re-phrasing, adjusting temperature) provided code with fewer security vulnerabilities.

    Bad programmers + AI = bad code

    Good programmers + AI = good code
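
    On the “adjusting temperature” part of that quote: temperature just controls sampling randomness, and turning it down makes completions more conservative. A minimal sketch of what that looks like with the openai Python client (the client call, model name, and prompt are my own illustration, not from the study):

        # pip install openai; assumes OPENAI_API_KEY is set in the environment.
        from openai import OpenAI

        client = OpenAI()

        # Lower temperature means less sampling randomness and more
        # conservative completions; higher values explore more.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            temperature=0.2,
            messages=[
                {"role": "user", "content": "Write a Python function that escapes HTML."},
            ],
        )
        print(response.choices[0].message.content)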

      • Aurenkin@sh.itjust.works · 9 months ago

        What do you mean? It sounds to me like any other tool: it takes skill to use well. Same as Stack Overflow, built-in code suggestions, or IDE-generated code.

        Not to detract from its usefulness; I just mean that it requires knowledge to use well.

        • ericjmorey@programming.dev (OP) · 9 months ago

          As someone currently studying machine learning theory and how these models are built, I’m explaining that these models have functions built into their core that amplify the bias of the training data: they identify and exploit mathematical associations within the training data to generate output. Because of that design, a naive approach to using the tool results in amplified bias not only from the training data but also from the person using it.
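
          A toy sketch of what that amplification can look like (the numbers and setup are made up for illustration, not taken from any real model): a loss-minimizing classifier trained on 70/30 labels ends up predicting the majority class about 91% of the time, because the learned prior dominates wherever the feature is noisy:

              import numpy as np

              rng = np.random.default_rng(0)

              # Toy training data: 70% of labels are "A", 30% are "B", and the
              # single feature only weakly separates the classes (lots of noise).
              n = 100_000
              is_a = rng.random(n) < 0.7
              x = np.where(is_a, 1.0, 0.0) + rng.normal(0.0, 1.5, size=n)

              # The loss-minimizing rule learned from this data predicts "A"
              # whenever P(A) * N(x; 1, 1.5) > P(B) * N(x; 0, 1.5). The shared
              # normalizing constant cancels, so it is omitted below.
              def log_kernel(x, mu, sigma):
                  return -0.5 * ((x - mu) / sigma) ** 2

              p_a = is_a.mean()
              pred_a = (np.log(p_a) + log_kernel(x, 1.0, 1.5)
                        > np.log(1 - p_a) + log_kernel(x, 0.0, 1.5))

              print(f"share of 'A' in the training labels: {is_a.mean():.2f}")  # ~0.70
              print(f"share of 'A' in the predictions:     {pred_a.mean():.2f}")  # ~0.91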

      • Daxtron2@startrek.website · 9 months ago

        Eh, I’ve known lots of good programmers who are super stuck in their ways. Teaching them to use an LLM effectively can help break them out of the mindset that there’s only one way to do things.

        • Spzi@lemm.ee · 9 months ago

          I think that’s one of the best use cases for AI in programming: exploring other approaches.

          It’s very time-consuming to play out what your codebase would look like if you had decided differently at the beginning of the project, so actually comparing different implementations is very expensive. This incentivizes people to stick with what they know works well. Maybe even more so when they have more experience, which means they really know what works well, and they know what can go wrong otherwise.

          Being able to generate code instantly helps a lot in this regard, although it still has to be checked for errors.
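
          For example (both functions below are my own made-up illustration), asking for an alternative implementation of something you already trust makes the stylistic trade-offs concrete without the cost of a rewrite, and a quick check confirms the versions agree:

              # The implementation you know works: explicit loop, easy to step through.
              def total_paid_imperative(orders):
                  total = 0.0
                  for order in orders:
                      if order["status"] == "paid":
                          total += order["amount"]
                  return total

              # A generated alternative in a different style: declarative and shorter.
              def total_paid_declarative(orders):
                  return sum(o["amount"] for o in orders if o["status"] == "paid")

              orders = [{"status": "paid", "amount": 9.5},
                        {"status": "open", "amount": 4.0}]
              assert total_paid_imperative(orders) == total_paid_declarative(orders) == 9.5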

  • Irdial@lemmy.sdf.org · 9 months ago

    In a shock to literally nobody… Jokes aside, I am looking forward to reading this paper.