• THCDenton@lemmy.world · 1 month ago

    It was pretty good for a while! Then they lowered its power, like Immortan Joe. Do not become addicted to AI.

  • Boozilla@lemmy.world · edited · 1 month ago

    It’s been a tremendous help to me as I relearn how to code on some personal projects. I have written 5 little apps that are very useful to me for my hobbies.

    It’s also been helpful at work with some random database type stuff.

    But it definitely gets stuff wrong. A lot of stuff.

    The funny thing is, if you point out its mistakes, it often does better on subsequent attempts. It’s more an iterative process of refinement than a single prompt giving you the final answer.

    • mozz@mbin.grits.dev · 1 month ago

      It’s incredibly useful for learning. ChatGPT was what taught me to unlearn, essentially, writing C in every language, and how to write idiomatic Python and JavaScript.

      It is very good for boilerplate code or fleshing out a big module without you having to do the typing. My experience was just like yours; once you’re past a certain (not very high) level of complexity, you’re looking at multiple rounds of improvement or else just doing it yourself.

      • CeeBee@lemmy.world · 1 month ago

        > It is very good for boilerplate code

        Personally, I find LLMs in general not that great at writing larger blocks of code. They’re fine for smaller stuff, but the more you expect out of them, the more they get wrong.

        I find they work best with existing stuff that you provide. Like “make this block of code more efficient” or “rewrite this function to do X”.

      • Boozilla@lemmy.world · 1 month ago

        Exactly. And for me, being middle-aged, it’s a big help with recalling syntax. I generally know how to do stuff but need a little refresher on the spelling, parameters, etc.

    • Downcount@lemmy.world · 1 month ago

      > The funny thing is, if you point out its mistakes, it often does better on subsequent attempts.

      Or it gets stuck in an endless loop alternating between two different but equally wrong solutions.

      Me: This is my system, version x. I want to achieve this.

      ChatGPT: Here’s the solution.

      Me: But this only works with version y of the given system, not x.

      ChatGPT: <Apology> Try this.

      Me: This is using a method that never existed in the framework.

      ChatGPT: <Apology> <Gives first solution again>

      • mozz@mbin.grits.dev · 1 month ago

        1. “Oh, I see the problem. In order to correct (what went wrong with the last implementation), we can (complete code re-implementation which also doesn’t work)”
        2. Goto 1
      • BrianTheeBiscuiteer@lemmy.world · 1 month ago

        While explaining BTRFS, I’ve seen ChatGPT contradict itself in the middle of a paragraph. When I call it out, it apologizes, then contradicts itself again with slightly different verbiage.

      • UberMentch@lemmy.world · 1 month ago

        I used to have this issue more often as well. I’ve had good results recently by **not** pointing out mistakes in replies, but by going back to the message before GPT’s response and saying “do not include y.”
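
        In API terms, the difference looks roughly like this. A minimal sketch, assuming the OpenAI Python client; the prompts and the “y” mistake are placeholders, not a real project:

        ```python
        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        def ask(history):
            # Send the running message list to the model, return its reply text.
            resp = client.chat.completions.create(model="gpt-4o", messages=history)
            return resp.choices[0].message.content

        history = [{"role": "user", "content": "Write a parser for this log format."}]
        reply = ask(history)  # suppose the reply wrongly includes y

        # Option A: append a correction. The wrong answer stays in the context,
        # where it can keep pulling the model back toward the same mistake.
        history += [
            {"role": "assistant", "content": reply},
            {"role": "user", "content": "Do not include y."},
        ]

        # Option B: rewind and edit the original message instead, so the
        # mistaken answer never enters the context at all.
        history = [{"role": "user",
                    "content": "Write a parser for this log format. Do not include y."}]
        reply = ask(history)
        ```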

        • brbposting@sh.itjust.works · 1 month ago

          Agreed, I send my first prompt, review the output, smack my head “obviously it couldn’t read my mind on that missing requirement”, and go back and edit the first prompt as if I really was a competent and clear communicator all along.

          It’s actually not a bad strategy, because the model can make some adept assumptions about requirements that might have seemed pertinent to include. So instead of typing out every requirement you can think of, you speech-to-text* a half-assed prompt and then know exactly what to fix a few seconds later.

          *[ad] free Ecco Dictate on iOS, TypingMind’s built-in dictation… anything using OpenAI Whisper, godly accuracy. btw TypingMind is great - stick in GPT-4o & Claude 3 Opus API keys and boom

    • WalnutLum@lemmy.ml · 1 month ago

      This is because all LLMs function primarily based on the token context you feed them.

      The best way to use any LLM is to completely fill up its context with relevant material, then ask your question.
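
      Roughly, the idea looks like this. A minimal sketch assuming the OpenAI Python client; the file names and the final question (including charge_card()) are made-up placeholders:

      ```python
      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

      # Front-load the context window with everything relevant, then ask.
      code = open("billing_module.py").read()
      notes = open("payment_api_notes.md").read()

      messages = [
          {"role": "system", "content": "You are a senior Python developer."},
          {"role": "user", "content": "Here is the module I'm working on:\n" + code},
          {"role": "user", "content": "Here are my notes on the payment API:\n" + notes},
          {"role": "user", "content": "Given all of the above, why might charge_card() double-bill on retries?"},
      ]

      response = client.chat.completions.create(model="gpt-4o", messages=messages)
      print(response.choices[0].message.content)
      ```

      The question goes last so it lands right next to the model’s reply, with all the supporting material already in the window.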

      • Boozilla@lemmy.world · 1 month ago

        I worked on a creative writing project with it, and the more context I added, the better its responses got. And GPT-4 is a noticeable improvement over 3.5.

    • tristan@aussie.zone · edited · 1 month ago

      I was recently asked to make a small Android app using Flutter, which I had never touched before.

      I used ChatGPT at first and it was painful to get correct answers, but then I made an agent (or whatever it’s called) where I gave it instructions saying it was a Flutter dev, plus a bunch of specifics about what I was working on.
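
      That “agent” setup boils down to pinning a role-specific system prompt so every reply starts from the same context. A sketch assuming the OpenAI Python client; the project specifics below are invented for illustration:

      ```python
      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

      # The "agent" is just a pinned system prompt; these project details
      # are made up for illustration.
      SYSTEM_PROMPT = (
          "You are an experienced Flutter developer. "
          "The project is a small Android app built with Flutter 3, "
          "using Material widgets and sqflite for local storage."
      )

      def ask_flutter_dev(question: str) -> str:
          response = client.chat.completions.create(
              model="gpt-4o",
              messages=[
                  {"role": "system", "content": SYSTEM_PROMPT},
                  {"role": "user", "content": question},
              ],
          )
          return response.choices[0].message.content

      print(ask_flutter_dev("Why would a StatefulWidget rebuild on every frame?"))
      ```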

      Suddenly it became really useful: I could throw it chunks of code and it would straight away tell me where the error was and what I needed to change.

      I could ask it to write me an example method for something that I could then easily adapt for my use.

      One thing I would do is ask it to write a method to do X while I was writing the part that would use that method.

      This wasn’t a big project, and the whole thing took less than 40 hours, but for me to pick up a new language, set up the development environment, and make a working app for a specific task in 40 hours was a huge deal. I think without ChatGPT, just learning the basics and debugging would have taken more than 40 hours on its own.

  • AutoTL;DR@lemmings.world [bot] · 1 month ago

    This is the best summary I could come up with:


    In recent years, computer programmers have flocked to chatbots like OpenAI’s ChatGPT to help them code, dealing a blow to places like Stack Overflow, which had to lay off nearly 30 percent of its staff last year.

    That’s a staggeringly large proportion for a program that people are relying on to be accurate and precise, underlining what other end users like writers and teachers are experiencing: AI platforms like ChatGPT often hallucinate totally incorrect answers out of thin air.

    For the study, the researchers looked over 517 questions on Stack Overflow and analyzed ChatGPT’s attempts to answer them.

    The team also performed a linguistic analysis of 2,000 randomly selected ChatGPT answers and found they were “more formal and analytical” while portraying “less negative sentiment” — the sort of bland and cheery tone AI tends to produce.

    The Purdue researchers polled 12 programmers — admittedly a small sample size — and found they preferred ChatGPT at a rate of 35 percent and didn’t catch AI-generated mistakes at 39 percent.

    The study demonstrates that ChatGPT still has major flaws — but that’s cold comfort to people laid off from Stack Overflow or programmers who have to fix AI-generated mistakes in code.


    The original article contains 340 words, the summary contains 199 words. Saved 41%. I’m a bot and I’m open source!

  • paddirn@lemmy.world · 1 month ago

    I wonder if the AI is using bad code pulled from threads where people are asking questions about why their code isn’t working, but ChatGPT can’t tell the difference and just assumes all code is good code.

    • BlameThePeacock@lemmy.ca · 1 month ago

      No need to defend it.

      Either its value is sufficient that businesses can make money by implementing it, and it gets used, or it isn’t.

      I’m personally already using it to make money, so I suspect it’s going to stick around.

  • floofloof@lemmy.ca · edited · 1 month ago

    > What’s especially troubling is that many human programmers seem to prefer the ChatGPT answers. The Purdue researchers polled 12 programmers — admittedly a small sample size — and found they preferred ChatGPT at a rate of 35 percent and didn’t catch AI-generated mistakes at 39 percent.

    > Why is this happening? It might just be that ChatGPT is more polite than people online.

    It’s probably more because you can ask it your exact question (rather than just searching for something more or less similar) and it will at least give you a lead you can use to discover the answer, even if the answer it gives isn’t perfect.

    Also, who does a survey of 12 people and publishes the results? Is that normal?

    • brbposting@sh.itjust.works · 1 month ago

      I have 13 friends who are researchers and they publish surveys like that all the time.

      (You can trust this comment because I peer reviewed it.)

  • eerongal@ttrpg.network · 1 month ago

    Worth noting this study was done on GPT-3.5; GPT-4 is leagues better than 3.5. I’d be interested to see how this number has changed.

  • Melkath@kbin.social · 1 month ago

    Developing with ChatGPT feels bizarrely like when Tony Stark invented a new element with Jarvis’s assistance.

    It’s a prolonged back and forth, and you need to point out the AI’s mistakes and work through a ton of iterations to get something close enough to tweak and use, but it’s SO much faster than trawling through Stack Overflow or hoping someone who knows more than you will answer a post.

    • elgordio@kbin.social · 1 month ago

      Yeah, if you treat it as a junior engineer with the ability to instantly research a topic, and you’re prepared to engage in a conversation to work toward a working answer, then it can work extremely well.

      Some of the best outcomes I’ve had have needed 20+ prompts, but I still arrived at a solution faster than any other method.
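
      In script form that workflow is just a loop that keeps the whole exchange in context. A rough sketch, assuming the OpenAI Python client:

      ```python
      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

      messages = [{"role": "system",
                   "content": "You are a junior engineer pairing on a coding problem."}]

      # Iterative refinement: the full history is resent every turn, so the
      # model sees each correction in the context of everything said so far.
      while True:
          prompt = input("you> ")
          if prompt.lower() in ("quit", "exit"):
              break
          messages.append({"role": "user", "content": prompt})
          response = client.chat.completions.create(model="gpt-4o", messages=messages)
          answer = response.choices[0].message.content
          messages.append({"role": "assistant", "content": answer})
          print(answer)
      ```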

      • Melkath@kbin.social · 1 month ago

        In the end, there is this great fear that “the AI is going to fully replace us developers,” and the reality is that while that may be a possibility one day, it won’t be any day soon.

        You still need people with deep technical knowledge to pilot the AI and drive it to an implemented solution.

        AI isn’t the end of the industry; it has just greatly sped it up.

  • Epzillon@lemmy.ml · 1 month ago

    I worked for a year developing in Magento 2 (an open-source e-commerce suite later bought by Adobe; it is not well maintained and all around not nice to work with). I tried asking ChatGPT some Magento 2 questions to figure out solutions to my problems, but clearly the only data it was trained on was a lot of really bad solutions from forum posts.

    The solutions did kinda work some of the time, but the way it suggested implementing them was absolutely horrifying. We’re talking opening up loads of vulnerabilities, breaking many parts of the suite as a whole, or just editing database tables directly. If you don’t know enough about the tools you’re working with, implementing solutions from ChatGPT can be disastrous, even if they end up working.

    • anachronist@midwest.social · 1 month ago

      “Self driving cars will make the roads safer. They won’t be drunk or tired or make a mistake.”

      Self driving cars start killing people.

      “Yeah but how do they compare to the average human driver?”

      Goal post moving.

    • hayes_@sh.itjust.works · 1 month ago

      Why would we compare it against an average coder?

      ChatGPT wants to be a coding aid/reference. A better baseline would be the top-rated answer to the question on Stack Overflow, or whether the answer appears in the first 3 Google search results.

  • Ech@lemm.ee · 1 month ago

    For the umpteenth time: an LLM just puts words together; it isn’t a magic answer machine.

  • haui@lemmy.giftedmc.com · 1 month ago

    The interesting bit for me is that if you ask a rando some programming questions, I think they’ll be wrong 99% of the time on average.

    Stack overflow still makes more sense though.