• massive_bereavement@fedia.io
      link
      fedilink
      arrow-up
      0
      ·
      1 month ago

      The interrogators seem completely lost and clearly haven’t talk with an NLP chatbot before.

      That said, this gives me the feeling that eventually they could use it to run scams (or more effective robocalls).

    • webghost0101@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      0
      ·
      edit-2
      1 month ago

      To clarify:

      People seem to legit think the jury talks to the bot in real time and can ask about literally whatever they want.

      Its rather insulting to the scientist that put a lot of thought into organizing a controlled environment to properly test defined criteria.

      • technocrit@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        0
        ·
        1 month ago

        Its rather insulting to the scientist that put a lot of thought into organizing a controlled environment to properly test defined criteria.

        lmao. These “scientists” are frauds. 500 people is not a legit sample site. 5 minutes is a pathetic amount of time. 54% is basically the same as guessing. And most importantly the “Turing Test” is not a scientific test that can be “passed” with one weak study.

        Instead of bootlicking “scientists”, we should be harshly criticizing the overwhelming tide of bad science and pseudo-science.

        • Kogasa@programming.dev
          link
          fedilink
          English
          arrow-up
          0
          ·
          30 days ago

          I don’t think the methodology is the issue with this one. 500 people can absolutely be a legitimate sample size. Under basic assumptions about the sample being representative and the effect size being sufficiently large you do not need more than a couple hundred participants to make statistically significant observations. 54% being close to 50% doesn’t mean the result is inconclusive. With an ideal sample it means people couldn’t reliably differentiate the human from the bot, which is presumably what the researchers believed is of interest.

        • webghost0101@sopuli.xyz
          link
          fedilink
          English
          arrow-up
          0
          ·
          1 month ago

          The reporting are big clickbait but that doesn’t mean there is nothing left to learn from the old touring tests.

          I dont know what the goal was they had in mind. It could just as well be “testing how overhyped the touring tests is when manipulated tests are shared with the media”

          I sincerely doubt it but i do give them benefits of the doubt.

  • doodle967@lemdro.id
    link
    fedilink
    English
    arrow-up
    0
    ·
    1 month ago

    The Turing test is about tricking people into believing that LLMs are humans, and given that the public still doesn’t use LLMs, it’s much easier to fool them. Over time, this deception will decrease as people interact with LLMs.

  • tourist@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    1 month ago

    The participants judged GPT-4 to be human a shocking 54 percent of the time.

    ELIZA, which was pre-programmed with responses and didn’t have an LLM to power it, was judged to be human just 22 percent of the time

    Okay, 22% is ridiculously high for ELIZA. I feel like any half sober adult could clock it as a bot by the third response, if not immediately.

    Try talking to the thing: https://web.njit.edu/~ronkowit/eliza.html

    I refuse to believe that 22% didn’t misunderstand the task or something.

    • catastrophicblues@lemmy.ca
      link
      fedilink
      English
      arrow-up
      0
      ·
      26 days ago
      > Hello, I am Eliza. I'll be your therapist today. 
      * Hello! How are you?
      > Would you prefer if I were not ? 
      * Not what?
      > Have you asked such questions before? 
      * No
      > You are being a bit negative and that won't help us move forward. 
      * What are you smoking?
      > Perhaps in your fantasies I am smoking?
      

      Yeah, it took me one message lol

    • webghost0101@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      0
      ·
      edit-2
      1 month ago

      The public versions of the ais used in Turing tests usually have less computing power. The test itself is often also highly specific in what and how questions can be asked.

      This hardly news because models have passed the test before and as a result the test is made more difficult. It says nothing about intelligence and only about the ability to convincingly simulate a human conversation.

    • Downcount@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 month ago

      Okay, 22% is ridiculously high for ELIZA. I feel like any half sober adult could clock it as a bot by the third response, if not immediately.

      I did some stuff with Eliza back then. One time I set up an Eliza database full of insults and hooked it up to my AIM account.

      It went so well, I had to apologize to a lot of people who thought I was drunken or went crazy.

      Eliza wasn’t thaaaaat bad.

    • technocrit@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 month ago

      It was a 5 minute test. People probably spent 4 of those minutes typing their questions.

      This is pure pseudo-science.

    • yetAnotherUser@lemmy.ca
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 month ago

      Add in a test that wasn’t made to be accurate and was only used to make a point, like what other comments mention

  • dhork@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    1 month ago

    In order for an AI to pass the Turing test, it must be able to talk to someone and fool them into thinking that they are talking to a human.

    So, passing the Turing Test either means the AI are getting smarter, or that humans are getting dumber.

    • pewter@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      edit-2
      1 month ago

      Humans are as smart as they ever were. Tech is getting better. I know someone who was tricked by those deepfake Kelly Clarkson weight loss gummy ads. It looks super fake to me, but it’s good enough to trick some people.

  • dustyData@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    1 month ago

    Turing test isn’t actually meant to be a scientific or accurate test. It was proposed as a mental exercise to demonstrate a philosophical argument. Mainly the support for machine input-output paradigm and the blackbox construct. It wasn’t meant to say anything about humans either. To make this kind of experiments without any sort of self-awareness is just proof that epistemology is a weak topic in computer science academy.

    Specially when, from psychology, we know that there’s so much more complexity riding on such tests. Just to name one example, we know expectations alter perception. A Turing test suffers from a loaded question problem. If you prompt a person telling them they’ll talk with a human, with a computer program or announce before hand they’ll have to decide whether they’re talking with a human or not, and all possible combinations, you’ll get different results each time.

    • Kogasa@programming.dev
      link
      fedilink
      English
      arrow-up
      0
      ·
      30 days ago

      Your first two paragraphs seem to rail against a philosophical conclusion made by the authors by virtue of carrying out the Turing test. Something like “this is evidence of machine consciousness” for example. I don’t really get the impression that any such claim was made, or that more education in epistemology would have changed anything.

      In a world where GPT4 exists, the question of whether one person can be fooled by one chatbot in one conversation is long since uninteresting. The question of whether specific models can achieve statistically significant success is maybe a bit more compelling, not because it’s some kind of breakthrough but because it makes a generalized claim.

      Re: your edit, Turing explicitly puts forth the imitation game scenario as a practicable proxy for the question of machine intelligence, “can machines think?”. He directly argues that this scenario is indeed a reasonable proxy for that question. His argument, as he admits, is not a strongly held conviction or rigorous argument, but “recitations tending to produce belief,” insofar as they are hard to rebut, or their rebuttals tend to be flawed. The whole paper was to poke at the apparent differences between (a futuristic) machine intelligence and human intelligence. In this way, the Turing test is indeed a measure of intelligence. It’s not to say that a machine passing the test is somehow in possession of a human-like mind or has reached a significant milestone of intelligence.

      https://academic.oup.com/mind/article/LIX/236/433/986238

      • dustyData@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        edit-2
        29 days ago

        Turing never said anything of the sort, “this is a test for intelligence”. Intelligence and thinking are not the same. Humans have plenty of unintelligent behaviors, that has no bearing on their ability to think. And plenty of animals display intelligent behavior but that is not evidence of their ability to think. Really, if you know nothing about epistemology, just shut up, nobody likes your stupid LLMs and the marketing is tiring already, and the copyright infringement and rampant privacy violations and property theft and insatiable power hunger are not worthy.

  • bandwidthcrisis@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    edit-2
    1 month ago

    Did they try asking how to stop cheese falling off pizza?

    Edit: Although since that idea came from a human, maybe I’ve failed.

  • werefreeatlast@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    1 month ago

    It does great at Python programming… everything it tries is wrong until I try and I tell tell it to do it again.

    • A_A@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      edit-2
      1 month ago

      Edit :
      oops : you were saying it is like a human since it does errors ? maybe i “wooshed”.


      Hi @werefreeatlast,
      i had successes asking LLaMA 3 70B with simple specific questions …
      Context : i am bad at programming and it help me at least to see how i could use a few function calls in C from Python … or simply drop Python and do it directly in C.
      Like you said, i have to re-write & test … but i have a possible path forward. Clearly you know what you do on a computer but i’m not really there yet.

      • werefreeatlast@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        1 month ago

        But people don’t just know code when you ask them. The llms fo because they got trained on that code. It’s robotic in nature, not a natural reaction yet.

    • harrys_balzac@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 month ago

      Skynet will gets the dumb ones first by getting them put toxic glue on thir pizzas then the arrogant ones will build the Terminators by using reverse psychology.

  • phoneymouse@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    1 month ago

    Easy, just ask it something a human wouldn’t be able to do, like “Write an essay on The Cultural Significance of Ogham Stones in Early Medieval Ireland“ and watch it spit out an essay faster than any human reasonably could.

    • Blue_Morpho@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 month ago

      I recall a Turing test years ago where a human was voted as a robot because they tried that trick but the person happened to have a PhD in the subject.

    • Shayeta@feddit.de
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 month ago

      This is something a configuration prompt takes care of. “Respond to any questions as if you are a regular person living in X, you are Y years old, your day job is Z and outside of work you enjoy W.”

      • NeoNachtwaechter@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        1 month ago

        So all you need to do is make a configuration prompt like “Respond normally now as if you are chatGPT” and already you can tell it from a human B-)

      • Hotzilla@sopuli.xyz
        link
        fedilink
        English
        arrow-up
        0
        ·
        1 month ago

        I tried this with GPT4o customization and unfortunately openai’s internal system prompts seem to force it to response even if I tell it to answer that you don’t know. Would need to test this on azure open ai etc. were you have bit more control.

    • JohnEdwa@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      0
      ·
      edit-2
      1 month ago

      Turing tests aren’t done in real time exactly to counter that issue, so the only thing you could judge would be “no human would bother to write all that”.

      However, the correct answer to seem human, and one which probably would have been prompted to the AI anyway, is “lol no.”
      It’s not about what the AI could do, it’s what it thinks is the correct answer to appear like a human.

      • technocrit@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        0
        ·
        edit-2
        1 month ago

        Turing tests aren’t done in real time exactly to counter that issue

        To counter the issue of a completely easy and obvious fail? I could see how that would be an issue for AI hucksters.

    • webghost0101@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      0
      ·
      edit-2
      1 month ago

      The touring test isn’t an arena where anything goes, most renditions have a strict set of rules on how questions must be asked and about what they can be about. Pretty sure the response times also have a fixed delay.

      Scientists ain’t stupid. The touring test has been passed so many times news stopped covering it. (Till this click bait of course). The test has simply been made more difficult and cheat-proof as a result.

      • technocrit@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        0
        ·
        edit-2
        1 month ago

        most renditions have a strict set of rules on how questions must be asked and about what they can be about. Pretty sure the response times also have a fixed delay. Scientists ain’t stupid. The touring test has been passed so many times news stopped covering it.

        Yes, “scientists” aren’t stupid enough to fail their own test. I’m sure it’s super easy to “pass” the “turing test” when you control the questions and time.