• Thorny_Insight@lemm.ee
    7 months ago

    The goal of AI research has almost always been to reach AGI. The bar for this has basically been human-level intelligence, because humans are generally intelligent. Once an AI system reaches human-level intelligence, you no longer need humans to develop it further, since it can do that by itself. That's where the threat of the singularity, i.e. an intelligence explosion, comes from: any further advancement happens so quickly that it gets away from us and almost instantly becomes a superintelligence. That's why many people think "human-level" artificial intelligence is a red herring; it doesn't stay at that level for more than a moment (a toy sketch of this runaway dynamic follows at the end of this comment).

    What's ironic about the Turing Test and LLMs like GPT-4 is that such a model fails the test by being so competent across so many fields that you can know for sure it's not a human, because no human could possess that amount of knowledge.
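
    Here is a minimal, purely illustrative Python sketch of that runaway dynamic. Everything in it is assumed for the example: the starting capability, the `gain` parameter, the thresholds, and the rule that each generation's improvement grows with the square of its current capability. It is not a model of any real system; it only shows how superlinear self-improvement crawls for a while and then takes off.

    ```python
    # Toy model of recursive self-improvement, purely illustrative.
    # Assumption: each generation's gain is proportional to the square of its
    # current capability, i.e. smarter systems are disproportionately better
    # at improving themselves. All numbers are made up.

    def generations_until(threshold, capability=1.0, gain=0.1):
        """Count self-improvement steps until capability crosses `threshold`."""
        generations = 0
        while capability < threshold:
            capability += gain * capability ** 2  # superlinear feedback loop
            generations += 1
        return generations

    for target in (10, 1_000, 1_000_000):
        print(f"{target:>9}x starting capability reached after "
              f"{generations_until(target)} generations")
    ```

    Under this assumed dynamic the first 10x takes about a dozen generations, while the jump from 10x to 1,000,000x takes only a handful more, which is the "doesn't stay human-level for long" intuition.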

    • 8ace40@programming.dev
      7 months ago

      I was thinking… what if we do manage to make an AI as intelligent as a human, but we can't make it any better than that? Then the human-level AI won't be able to make itself better either, since it only has human intelligence, and humans couldn't improve it beyond that point.

      Another thought: what if making AI better gets exponentially harder each time? Then at some point further improvement would become impossible, since there wouldn't be enough resources on a finite planet (see the sketch at the end of this comment).

      Or what if it takes super-human intelligence to build human-level AI in the first place? Then the singularity would be impossible there, too.

      I don't think we will see the singularity, at least not in our lifetime.
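
      A counter-sketch of the diminishing-returns scenario above, again purely illustrative: the assumption that every extra point of capability costs twice as much as the previous one, the budget figures, and the `max_capability` helper are all invented for the example. It only shows that with exponentially growing costs, even enormous resource budgets buy logarithmic gains.

      ```python
      # Illustrative counter-scenario: assume every additional point of
      # capability costs twice as much as the previous one, then see how far
      # a finite resource budget gets you.

      def max_capability(budget, first_cost=1.0, cost_growth=2.0):
          """Buy capability increments until the next one exceeds the budget."""
          capability, cost = 0, first_cost
          while budget >= cost:
              budget -= cost
              capability += 1
              cost *= cost_growth  # each step is twice as expensive
          return capability

      for budget in (1e3, 1e6, 1e9, 1e12):
          print(f"budget {budget:>16,.0f} -> capability {max_capability(budget)}")
      ```

      In this model each thousand-fold increase in the budget adds only about ten capability points (9, 19, 29, 39), which is the "not enough resources on a finite planet" intuition.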