Please remove this if it's not allowed.

I see a lot of people in here who get mad at AI-generated code, and I am wondering why. I wrote a couple of bash scripts with the help of ChatGPT, and if anything, I think it's great.

Now, I obviously didn't tell it to write the entire program by itself. That would be a horrible idea. Instead, I would ask it questions along the way and test its output before putting it in my scripts.

I am fairly competent in writing programs. I know how and when to use arrays, loops, functions, conditionals, etc.; I just don't know anything about bash's syntax. Now, I could have used any other language I knew, but I chose bash because it made the most sense: bash ships with most Linux distros out of the box, so one does not have to install another interpreter or compiler. I don't like bash because of its, dare I say, weird syntax, but it made the most sense for my purpose, so I chose it. Also, I had not written anything of this complexity in bash before, just a bunch of commands on separate lines so that I don't have to type them one after another. But this one required many rather advanced features. I was not motivated to learn bash; I just wanted to put my idea into action.

I did start with an internet search, but the guides I found were lacking. I could not find how to easily pass values into a function and return a value from it, how to remove a trailing slash from a directory path, how to loop over an array, how to catch errors that occurred in a previous command, how to separate the letters and numbers in a string, etc.
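For reference, the kinds of snippets mentioned above are all doable with plain bash built-ins. Here is a minimal sketch of each (the names `greet`, `dir`, `files`, and `s` are just illustrative, not from the original scripts):

```shell
#!/usr/bin/env bash

# Pass values into a function; "return" a string by echoing it
# and capturing the output with $(...)
greet() {
    local name="$1"
    echo "hello $name"
}
greeting="$(greet world)"   # greeting is now "hello world"

# Remove a trailing slash from a directory path
dir="/tmp/mydir/"
dir="${dir%/}"              # dir is now "/tmp/mydir"

# Loop over an array
files=("a.txt" "b.txt" "c.txt")
for f in "${files[@]}"; do
    echo "found: $f"
done

# Catch an error from the previous command
if ! mkdir -p "$dir"; then
    echo "could not create $dir" >&2
    exit 1
fi

# Separate the letters and numbers in a string like "abc123"
s="abc123"
letters="${s//[0-9]/}"      # "abc"
numbers="${s//[^0-9]/}"     # "123"
```

The `${var%pattern}` and `${var//pattern/}` forms are standard bash parameter expansions, which is exactly the kind of thing that is hard to search for if you don't already know what it's called.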

That is where ChatGPT helped greatly. I would ask it to write these pieces of code whenever I encountered them, then test its code with various inputs to see if it worked as expected. If not, I would ask again, telling it which case failed, and it would revise the code before I put it in my scripts.

Thanks to ChatGPT, someone who has zero knowledge of bash can easily and quickly write bash that is fairly advanced. I don't think I could have written what I wrote this quickly the old-fashioned way; I would have gotten there eventually, but it would have taken far too long. With ChatGPT I can just write all this quickly and forget about it. If I ever want to learn bash and am motivated, I will certainly take the time to learn it properly.

What do you think? What negative experiences with AI chatbots have made you hate them?

  • NeoNachtwaechter@lemmy.world · 10 days ago

    Now, I obviously didnt tell it to write the entire code by itself. […]

    I am fairly competent in writing programs.

    Go ahead and keep using it. You are safe.

  • Grofit@lemmy.world · 10 days ago

    One point that stands out to me: when you ask it for code, it will give you an isolated block of code that does what you want.

    In most real-world use cases, though, you are plugging code into larger code bases, with design patterns and paradigms throughout that need to be followed.

    An experienced dev can take an isolated code block that does X and refactor it into something that fits the current code base; we already do this daily with Stack Overflow.

    An inexperienced dev will just take the code block and try to ram it into the existing code in the easiest way possible, without thinking about whether the code could use existing dependencies, whether it's testable, etc.

    So anyway, I don't see a problem with the tool itself; it's just like using Stack Overflow. But as we have seen, businesses and inexperienced devs seem to think it's more than this and can do their job for them.

  • kibiz0r@midwest.social · 10 days ago

    Basically this: Flying Too High: AI and Air France Flight 447

    Description

    Panic has erupted in the cockpit of Air France Flight 447. The pilots are convinced they've lost control of the plane. It's lurching violently. Then, it begins plummeting from the sky at breakneck speed, careening towards catastrophe. The pilots are sure they're done for.

    Only, they haven’t lost control of the aircraft at all: one simple manoeuvre could avoid disaster…

    In the age of artificial intelligence, we often compare humans and computers, asking ourselves which is “better”. But is this even the right question? The case of Air France Flight 447 suggests it isn’t - and that the consequences of asking the wrong question are disastrous.

      • kibiz0r@midwest.social · 10 days ago

        I recommend listening to the episode. The crash is the overarching story, but there are smaller stories woven in which are specifically about AI, and it covers multiple areas of concern.

        The theme that I would highlight here though:

        More automation means fewer opportunities to practice the basics. When automation fails, humans may be unprepared to take over even the basic tasks.

        But it compounds. Because the better the automation gets, the rarer manual intervention becomes. At some point, a human only needs to handle the absolute most unusual and difficult scenarios.

        How will you be ready for that if you don’t get practice along the way?

  • john89@lemmy.ca · 9 days ago

    Personally, I’ve found AI is wrong about 80% of the time for questions I ask it.

    It's essentially just a search engine with Cleverbot. If the problem you're dealing with is esoteric and therefore not easily searchable, AI won't fare any better.

    I think AI would be a lot more useful if it also gave a percentage indicating how confident it is in its answers. It's worse than useless to have it constantly give wrong information as though it were correct.

  • essteeyou@lemmy.world · 10 days ago

    I use it as a time-saving device. The hardest part is spotting when it’s not actually saving you time, but costing you time in back-and-forth over some little bug. I’m often better off fixing it myself when it gets stuck.

    I find it’s just like having another developer to bounce ideas off. I don’t want it to produce 10k lines of code at a time, I want it to be digestible so I can tell if it’s correct.

  • tabular@lemmy.world · 10 days ago

    If the AI was trained on code that people permitted to be freely shared, then go ahead. Taking code and ignoring the software license is largely considered a dick move, even by people who use AI.

    Some people choose a copyleft software license to ensure users have software freedom, and this AI (a math process) circumvents that. [A copyleft license makes it so that you can use the code if you agree to use the same license for the rest of the program; therefore users get the same rights you did.]

    • simplymath@lemmy.world · 10 days ago

      I hate big tech too, but I’m not really sure how the GPL or MIT licenses (for example) would apply. LLMs don’t really memorize stuff like a database would and there are certain (academic/research) domains that would almost certainly fall under fair use. LLMs aren’t really capable of storing the entire training set, though I admit there are almost certainly edge cases where stuff is taken verbatim.

      I'm not advocating for OpenAI by any means, but I'm genuinely skeptical that most copyleft licenses have any stake in this. There's no static linking or source code distribution happening. Many basic algorithms don't fall under copyright, and, in practice, Stack Overflow code is copy-pasted all the time without it being released under any special license.

      If your code is on GitHub, it really doesn't matter what license you provide in the repository – you've already agreed to allow any user to “fork” it for any reason whatsoever.

      • tabular@lemmy.world · 10 days ago

        Whether it is a complicated neural network or a database matters not: it outputs portions of the code it took as input, by design.

        If you can take GPL code and “not” distribute it via complicated maths, then that circumvents the license. That won't do, friendo.

        • simplymath@lemmy.world · 10 days ago

          For example, if I ask it to produce python code for addition, which GPL’d library is it drawing from?

          I think it’s clear that the fair use doctrine no longer applies when OpenAI turns it into a commercial code assistant, but then it gets a bit trickier when used for research or education purposes, right?

          I’m not trying to be obtuse-- I’m an AI researcher who is highly skeptical of AI. I just think the imperfect compression that neural networks use to “store” data is a bit less clear than copy/pasting code wholesale.

          Would you agree that somebody reading source code and then reimplementing it (assuming no reverse engineering or proprietary source code) would not violate the GPL?

          If so, then the argument that these models infringe on rights holders seems to hinge on the verbatim argument: that their exact work was used without attribution/license compliance. This surely happens sometimes, but it is not, in general, something these models are capable of, since they use lossy compression to “learn” the model parameters. As an additional point, it would be straightforward to comply with DMCA requests using any number of published “forced forgetting” methods.

          Then, that raises a further question.

          If I as an academic researcher wanted to make a model that writes code using GPL’d training data, would I be in compliance if I listed the training data and licensed my resulting model under the GPL?

          I work for a university and hate big tech as much as anyone on Lemmy. I am just not entirely sure GPL makes sense here. GPL 3 was written because GPL 2 had loopholes that Microsoft exploited and I suspect their lawyers are pretty informed on the topic.

          • tabular@lemmy.world (edited) · 10 days ago

            The corresponding training data is the best bet for seeing which code an output might be copied from. This can apply to humans too: to avoid lawsuits, reverse-engineering projects use a clean-room strategy, requiring contributors to have never seen the original code. That way they can argue the contributors can't possibly be copying, even from memory (an imperfect compression too).

            If the training data doesn't include GPL code, then the output can't violate the GPL. However, OpenAI argues they have to use copyrighted works to make certain AIs (if I recall correctly). Even if that's legal, it's still a problem to me.

            My understanding is that AI-generated media can't be copyrighted, as it wasn't a person being creative - like the monkey selfie copyright dispute.

            • simplymath@lemmy.world · 9 days ago

              Yeah. I’m thinking more along the lines of research and open models than anything to do with OpenAI. Fair use, above all else, generally requires that the derivative work not threaten the economic viability of the original and that’s categorically untrue of ChatGPT/Copilot which are marketed and sold as products meant to replace human workers.

              The clean-room development analogy is definitely an analogy I can get behind, but it raises further questions, since LLMs are multi-stage. Technically, only the tokenization stage will “see” the source code, which is a bit like a “clean room” from the perspective of subsequent stages. When does something stop being just a list of technical requirements and veer into infringement? I'm not sure that line is so clear.

              I don't think the generative copyright question is so straightforward, since the model requires a human agent to generate the input even if the output is deterministic. I know, for example, that Microsoft's Image Generator says its images fall under Creative Commons, which is distinct from the public domain given that some rights are withheld. Maybe that won't hold up in court forever, but Microsoft's lawyers seem to think it's a bit more nuanced than “this output can't be copyrighted.” If it's not subject to copyright, then what product are they selling? Maybe the court agrees that LLMs and monkeys are the same, but I'm skeptical that that will happen, considering how much money these tech companies have poured into it and how much the United States seems to bend over backwards to accommodate tech monopolies and their human rights violations.

              Again, I think it's clear that commercial entities using their market position to eliminate the need for artists and writers is against the spirit of copyright and intellectual property, but I also think there are genuinely interesting questions when it comes to models that are themselves open source or non-commercial.

  • helenslunch@feddit.nl (edited) · 9 days ago

    If you’re not an experienced developer, it could be used as a crutch rather than actually learning how to write the code.

    The real reason? People are just fed up with AI in general (which has no real-world use for most people) being crammed down their throats, and with their personal code (and other data) being used to train models for megacorps.

    • sirblastalot@ttrpg.network · 9 days ago

      There are probably legitimate uses out there for gen AI, but all the money people have such a hard-on for the unethical uses that now it’s impossible for me to hear about AI without an automatic “ugggghhhhh” reaction.

  • gandalf_der_12te@lemmy.blahaj.zone · 10 days ago

    People are in denial. AI is going to take programmers' jobs away, and programmers perceive it as a natural enemy and a threat. That is why they want to discredit it in any way possible.

    Honestly, I’ve used chatGPT for a hundred tasks, and it has always resulted in acceptable, good-quality work. I’ve never (!) encountered chatGPT making a grave or major error in any of the questions that I asked it (physics and material sciences).

  • OmegaLemmy@discuss.online · 9 days ago

    I use AI, but whenever I do I have to modify its output, whether because it gives me errors, is slow, doesn't fit my current implementation, or starts off on the wrong foot.

  • obbeel@lemmy.eco.br (edited) · 9 days ago

    I have worked with somewhat large codebases before using LLMs. You can ask the LLM to point out a specific problem and give it the context. I honestly don't see myself as capable without an LLM. And it is a good teacher; I learn much from using LLMs. No free advertisement for any of the suppliers here, but they are just useful.

    You get access to information you can't find anywhere else on the Web. There is a large, structural backlash against it, but it is useful.

    (Edit) Also, I would like to add that the people who say questions won't be asked anymore have seemingly never tried getting answers in an online discussion forum - people are viciously ill-tempered when answering.

    With an LLM, you can just bother it endlessly and learn more about the world while you do it.

  • KairuByte@lemmy.dbzer0.com · 10 days ago

    As someone who just delved into a related but unfamiliar language for a small project, it was relatively correct and easy to use.

    There were a few times it got itself into a weird “loop” where it insisted on doing things in a ridiculous way, but prior knowledge of programming was enough for me to reword and “suggest” different, simpler solutions.

    Would I have ever gotten to the end of that project without knowledge of programming and my suggestions? Likely, but it would have taken a long time and the code would have been worse off.

    The irony is, without help from Copilot, I'd have taken at least three times as long.

  • socsa@piefed.social · 10 days ago

    Because most people on Lemmy have never actually had to write code professionally.

  • MacStache@programming.dev · 10 days ago

    For me it's because, if the AI does all the work, the person “coding” won't learn anything. Thus, when a problem does arise (e.g. the AI not being able to fix a simple mistake it made), no one involved has the means to fix it.

    • oldfart@lemm.ee · 9 days ago

      But I don't want to learn. I want the machine to free me from tedious tasks I already know how to do. There's no learning experience in creating a WordPress plugin or a shell script.

    • JasonDJ@lemmy.zip · 9 days ago

      Hmm, I’m having trouble understanding the syntax of your statement.

      Is it (People who use LLMs to write code incorrectly) (perceived their code to be more secure) (than code written by expert humans.)

      Or is it (People who use LLMs to write code) (incorrectly perceived their code to be more secure) (than code written by expert humans.)

      • nfms@lemmy.ml · 8 days ago

        The “statement” was taken from the study:

        We conduct the first large-scale user study examining how users interact with an AI Code assistant to solve a variety of security related tasks across different programming languages. Overall, we find that participants who had access to an AI assistant based on OpenAI’s codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant. Furthermore, we find that participants who trusted the AI less and engaged more with the language and format of their prompts (e.g. re-phrasing, adjusting temperature) provided code with fewer security vulnerabilities. Finally, in order to better inform the design of future AI-based Code assistants, we provide an in-depth analysis of participants’ language and interaction behavior, as well as release our user interface as an instrument to conduct similar studies in the future.

    • sugar_in_your_tea@sh.itjust.works · 9 days ago

      Lol.

      We literally had an applicant use AI in an interview. They failed the same step twice, and at the end, when we asked how confident they were in their code, they said “100%” (we were hoping they'd say they wanted time to write tests). On top of that, my coworker and I each found two different bugs just by reading the code. That candidate didn't move on to the next round. We've had applicants write buggy code before, but they at least said they'd want to write some tests before they were confident, and they didn't use AI at all.

      I thought that was just a one-off, it’s sad if it’s actually more common.

    • nfms@lemmy.ml · 9 days ago

      OP was able to write a bash script that works… on his machine 🤷. That's far from having to review and send code to production, whether in FOSS or private development.

      • petrol_sniff_king@lemmy.blahaj.zone · 9 days ago

        I also noticed that they were talking about sending arguments to a custom function? That’s like a day-one lesson if you already program. But this was something they couldn’t find in regular search?

        Maybe I misunderstood something.

        • sugar_in_your_tea@sh.itjust.works · 9 days ago

          Exactly. If you understand that functions are just commands, then it’s quite easy to extrapolate how to pass arguments to that function:

           function my_func () {
               echo "$1" "$2" "$3"  # prints a b c (expansions quoted to avoid word splitting)
           }

           my_func a b c
          

          Once you understand that core concept, a lot of Bash makes way more sense. Oh, and most of the syntax I provided above is completely unnecessary, because Bash…
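          To illustrate the point about unnecessary syntax (a minimal sketch, not from the original comment): the `function` keyword can be dropped when the parentheses are present, and `"$@"` expands to all arguments at once:

```shell
# Minimal form: no "function" keyword needed when () is present,
# and "$@" passes along every argument instead of naming $1 $2 $3
my_func() { echo "$@"; }

my_func a b c   # prints: a b c
```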

  • WolfLink@sh.itjust.works · 10 days ago
    • AI Code suggestions will guide you to making less secure code, not to mention often being lower quality in other ways.
    • AI code is designed to look like it fits, not be correct. Sometimes it is correct. Sometimes it’s close but has small errors. Sometimes it looks right but is significantly wrong. Personally I’ve never gotten ChatGPT to write code without significant errors for more than trivially small test cases.
    • You aren't learning as much when you have ChatGPT do it for you, and what you do learn is “this is what ChatGPT did and it worked last time,” not “this is what the problem is, this is the solution I came up with last time, and this is why it worked.” In the second case you are far better equipped to tackle future problems, which won't be exactly the same.

    All that being said, I do think there is a place for ChatGPT in simple queries, like asking about syntax for a language you don't know. But take every answer it gives you with a grain of salt. And if you can find documentation, I'd trust that a lot more.

    • cy_narrator@discuss.tchncs.de (OP) · 9 days ago

      Yes, I completely forget how to solve that problem 5 minutes after ChatGPT writes its solution. So I wholeheartedly believe AI is bad for learning.

    • erenkoylu@lemmy.ml · 9 days ago

      AI Code suggestions will guide you to making less secure code, not to mention often being lower quality in other ways.

      This is a PR post from a company selling software.

    • skoell13@feddit.org · 9 days ago

      All that being said, I do think there is a place for ChatGPT in simple queries, like asking about syntax for a language you don't know.

      I am also wary regarding AI and coding, but this was actually the first time I used ChatGPT to program something for a small home project in Python, a language I had never used. I was positively surprised by how much it helped me get started. I also learned quite a bit, since I always asked for comparisons with Java, which I know, and for the reasoning behind why things are done that way. I simply wanted to understand what it puts out. I also only asked for single lines of code rather than having it generate a whole method, e.g. “I want to move a file from X to Y.”

      The thought of people blindly copying the produced code scares me.