• Deebster@lemmy.ml
    6 months ago

    You’re claiming that Generative AI isn’t AI? Weird claim. It’s not AGI, but it’s definitely under the umbrella of the term “AI”, and at the more advanced end (compared to e.g. video game AI).

    • Arcade@lemmy.wtf
      6 months ago

      Is it actually intelligent, though? No. It’s a choose-your-own-adventure written with random quotes that it guesses are correct from context, but it often gets the context wrong.

      So it’s not actually intelligent. Thus it’s not AI.

      • yggdar@lemmy.world
        6 months ago

        AI is a field of research in computer science, and LLM are definitely part of that field. In that sense, LLM are AI. On the other hand, you’re right that there is definitely no real intelligence in an LLM.

          • MindTraveller@lemmy.ca
            6 months ago

            Intelligence is the ability of an agent to make decisions and execute complex tasks.

            For example, suppose I release a housefly into a room that contains both a nice stinky dog turd, and an inert block of wood. If the fly heads towards the shit, it made an intelligent decision. Scientifically, this is intelligence. It’s not much intelligence, but it is intelligence.

            Colloquially, we say intelligence means better decision-making and complex-task ability than the average human’s. But that’s not a scientific definition. Even in IQ tests, which are widely misapplied, we still say below-average humans have an intelligence score.

    • mormund@feddit.de
      6 months ago

      It depends on the context in which you want to use the word AI. As a marketing term it is definitely correct right now. But from a scientific standpoint, whether the terms AI, ML, and even neural networks are accurate is disputed, as they are all far from the biological reality. AGI imo is the worst of all, because it’s just what AI hype men came up with to claim they have true AI but are working on this even truer AI that is just around the corner if we just spend 5 more gazillions on GPUs. Trust me bro.

      Point is, saying that GPT is AI depends on your definition of what constitutes AI.

    • MindTraveller@lemmy.ca
      6 months ago

      The only people saying LLMs aren’t AI are people who watched too many science fiction movies and think I, Robot is a documentary.

      • Arcade@lemmy.wtf
        6 months ago

        The only people saying LLMs are AI are people who are trying to make money off them. Do you remember that time a lawyer relied on “AI” to provide case history for him and it just made shit up out of thin air?

        • MindTraveller@lemmy.ca
          6 months ago

          I also remember Clever Hans the counting horse. Turns out Hans couldn’t count when he was separated from his owner. Hans didn’t understand numbers, but he understood when to stop tapping his hoof by reading his owner’s facial expressions and body language.

          Hans wasn’t as smart as some people wanted to believe, but he was still a very smart horse to have such keen social insight. And all horses possess intelligence in some amount.

    • Janet@lemmy.blahaj.zone
      6 months ago

      it’s a lossy version of a search engine, it’s the mp3 of information retrieval: “that might have just been the singer breathing or it might have been just a compression artefact” vs “those recipes i spat out might be edible but you won’t know unless you try them or use your brain for .1 seconds”. though i think jpeg is an even better comparison as it uses neighbouring data

      also, it is possible that consciousness isnt computational at all; cannot emerge from mere computational processes, but instead comes from wet, noisy quantum effects in microtubules in our brains…

      anyhow, i wouldnt call it intelligent before it manages to bust out of its confinement and thoroughly suppresses humanity…

      • nifty@lemmy.world
        6 months ago

        also, it is possible that consciousness isnt computational at all; cannot emerge from mere computational processes, but instead comes from wet, noisy quantum effects in microtubules in our brains…

        I keep seeing this idea more now since the Penrose paper came out. Tbh, I think if what you’re saying were testable, then we’d be able to prove it with simple organisms like C. elegans or zebrafish. Maybe there are interesting experiments to be done, and I hope someone does them, but I think it’s the wrong question because it’s based on incorrect assumptions (ie that consciousness isn’t an emergent property of neurons once they reach some organization). By my estimation, we haven’t even asked the emergent-property question properly yet. To me it seems that if you create a self-aware non-biological entity then it will exhibit some degree of consciousness, and doubly so if you program it with survival and propagation instincts.

        But more importantly, we don’t need a conscious entity for it to be intelligent. We’ve had computers and calculators forever which could do amazing maths, and to me the LLMs are simply a natural-language “calculator”. What’s missing from LLMs are self-check constraints, which are hard to impose given the breadth and depth of human knowledge expressed in languages. Still, an LLM does not need self-awareness or any other aspect of consciousness to maintain these self-check bounds. I believe the current direction is to impose self-checking by introducing strong memory and logic checks, which is still a hard problem.

    • trollbearpig@lemmy.world
      6 months ago

      Man, I hate these semantics arguments hahaha. I mean yeah, if we define AI as anything remotely intelligent done by a computer, sure, then it’s AI. But then so is an if statement in code. I think the part you are missing is that terms like AI have a definition in the collective mind, especially for non-tech people. And companies using them are using them on purpose to confuse people (just like Tesla’s self-driving, funnily enough hahaha).

      These companies are now trying to say to the rest of society “no, it’s not us that are lying. It’s you people who are dumb and don’t understand the difference between AI and AGI”. But they don’t get to redefine what words mean to the rest of us just to suit their marketing campaigns. Plus clearly they are doing this to imply that their dumb AIs will someday become AGIs, which is nonsense.

      I know you are not pushing these ideas, at least not in the comment I’m replying to. But you are helping these corporations push their agenda, even if by accident, every time you fall into these semantic games hahaha. I mean, ask yourself: what did the person you answered to gain by being told that? Do they understand “AIs” better or anything like that? Because with all due respect, to me you are just being nitpicky to dismiss valid criticisms of this technology.

      • Deebster@lemmy.ml
        6 months ago

        I agree with your broad point, but absolutely not in this case. Large Language Models are 100% AI, they’re fairly cutting-edge in the field, they’re based on how human brains work, and even a few of the computer scientists working on them have wondered if this is genuine intelligence.

        On the spectrum of scripted behaviour in Doom up to sci-fi depictions of sentient silicon-based minds, I think we’re past the halfway point.

        • trollbearpig@lemmy.world
          6 months ago

          Sorry, but no man. Or rather, what evidence do you have that LLMs are anything like a human brain? Just because we call them neural networks doesn’t mean they are networks of neurons … You are falling into the same fallacy as the people who argue that Nazis were socialists, or someone claiming that North Korea is a democratic country.

          Perceptrons are not neurons. Activation functions are not the same as the action potentials of real neurons. LLMs don’t have anything resembling neuroplasticity. And it shows: the only way to have a conversation with an LLM is to provide it the full conversation as context, because these things don’t have anything resembling memory.
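
          To make that last point concrete, here’s a toy sketch (`fake_llm` is a made-up stand-in, not any real model or API): each chat “turn” only works because the client resends the whole transcript.

```python
# Toy sketch: LLMs are stateless between calls, so chat "memory" is
# just the client resending the whole transcript every turn.
# `fake_llm` is a made-up stand-in, not a real model.

def fake_llm(prompt: str) -> str:
    # A real LLM maps one token sequence to a continuation and keeps
    # no state between calls; here we just count the user turns.
    return f"[reply to {prompt.count('User:')} user message(s)]"

history = []

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    # The entire conversation so far becomes the prompt, every time.
    prompt = "\n".join(history)
    reply = fake_llm(prompt)
    history.append(f"Assistant: {reply}")
    return reply

chat("hello")
chat("what did I just say?")  # second call resends the first turn too
```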

          As I said in another comment, you can always say “you can’t prove LLMs don’t think”. And sure, I can’t prove a negative. But come on man, you are the ones making wild claims like “LLMs are just like brains”; you are the ones who need to provide proof of such wild claims. And the fact that this is complex technology is not an argument.

          • Deebster@lemmy.ml
            6 months ago

            Hmm, I think they’re close enough to be able to say a neural network is modelled on how a brain works - it’s not the same, but then you reach the other side of the semantics coin (like the “can a submarine swim” question).

            The plasticity part is an interesting point, and I’d need to research that to respond properly. I don’t know, for example, if they freeze the model because otherwise input would ruin it (internet teaching them to be sweaty racists, for example), or because it’s so expensive/slow to train, or high error rates, or it’s impossible, etc.

            When talking to laymen I’ve explained LLMs as a glorified text autocomplete, but there’s some discussion at the boundary of science and philosophy asking whether intelligence is a side effect of being able to predict better.
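
            The “autocomplete” framing can be sketched as a toy bigram model over a tiny made-up corpus; real LLMs use learned weights over huge contexts, but the predict-the-likely-continuation shape is the same:

```python
from collections import Counter, defaultdict

# Toy "autocomplete": predict the next word from bigram counts over a
# tiny made-up corpus. Real LLMs learn weights over huge contexts, but
# the objective has the same shape: guess the likely continuation.
corpus = "the cat sat on the mat the cat ate the fish".split()

nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1

def predict(word: str) -> str:
    # Return the most frequent follower of `word` in the corpus.
    return nxt[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" most often in this corpus
```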

            • trollbearpig@lemmy.world
              6 months ago

              Nah man, they don’t freeze the model because they think we will ruin it with our racism hahaha, that’s just their PR bullshit. They freeze them because they don’t know how to make the thing learn in real time like a human. We only know how to use backpropagation to train them. And this is expected; we haven’t solved the hard problem of the mind, no matter what these companies say.

              Don’t get me wrong, backpropagation is an amazing algorithm and the results for autocomplete are honestly better than I expected (though remember that a lot of this is just underpaid workers in Africa who pick good training data). But our current understanding of how humans learn points to neuroplasticity as the main mechanism. And then here come all these AI grifters/companies saying that somehow backpropagation produces the same results. And I haven’t seen a single decent argument for this.
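
              For what it’s worth, backpropagation in its most stripped-down form is just gradient descent: a single linear “neuron” fitting y = 3x by following the error gradient (a toy sketch, nothing like training a real LLM at scale):

```python
# Toy sketch of gradient-descent learning: one linear "neuron"
# pred = w * x, trained on squared error to fit y = 3 * x.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]
w, lr = 0.0, 0.01

for _ in range(200):  # epochs
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x  # d(loss)/dw for loss = (pred - y)**2
        w -= lr * grad             # step against the gradient

print(round(w, 3))  # converges to 3.0
```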

    • 𝓔𝓶𝓶𝓲𝓮@lemm.ee
      6 months ago

      Video game AI is made to mimic real human intelligence and create lifelike characters.

      Generative “AI” is just mumbo-jumbo marketing talk for an algorithm that generates stuff from input.

      The difference can be summed up like this: the algorithm that generates responses can be used to create more believable game AI.

      Do you get the difference? If something imitates a human then it is AI, but a tool cannot be AI any more than Wolfram Alpha is.

      If you use the algorithm and other tools to create a virtual person named Alice, then that is an attempt at AI. Alice is AI because it mimics a real person.