• voracitude@lemmy.world

    On the one hand, generative AI doesn’t have to give deterministic answers, i.e. it won’t necessarily generate the same answer even when asked the same question in the same way.

    But on the other hand, editing the HTML of any page to say whatever you want and then taking a screenshot of it is very easy.

      • thegreatgarbo@lemmy.world

        If you read the Ars Technica article, Google is correcting these errors on the fly, so the search results can change rapidly.

    • Otter@lemmy.ca

      It could also be A/B testing, so not everyone will have the AI running in general

        • halcyoncmdr@lemmy.world

          Google runs passive A/B testing all the time.

          If you’re using a Google service, there’s a 99% chance you’re part of some sort of internal test of changes.

        • Otter@lemmy.ca

          Wouldn’t they be? They could measure how likely it is that someone clicks on the generated link/text

          • credo@lemmy.world

            Just because you click on it doesn’t make it accurate. More importantly, that text isn’t “clickable”, so they can’t be measuring raw engagement either.

            • RvTV95XBeo@sh.itjust.works

              Just because you click on it doesn’t make it accurate.

              Given the choice between clicks/engagement and accuracy, it’s pretty clear Google’s preference for the former is what got us into this hellhole.

            • IllNess@infosec.pub

              What this would measure is how long you would stay on the page without scrolling. Less scrolling means more time looking at ads.

              This is the influence of Prabhakar Raghavan.

      • lucas@fitt.au

        @RecursiveParadox @voracitude It absolutely has become a meme; there are (or were) a bunch of repeatable results.

        Google is probably whack-a-mole’ing them now, because “google’s AI search results are trying to kill people” has entered the collective consciousness.

        • vimdiesel@lemmy.world

          I have no doubt some of their AI answers have antivax and bleach-injection recommendations from all over the web as part of their training data.

    • QuadratureSurfer@lemmy.world

      Technically, generative AI will always give the same answer when given the same input. But in practice a “seed” is mixed in to help randomize things, so that it can give a different answer every time even if you ask it the same question.
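
      A minimal sketch of that idea in plain Python (a toy weighted “token” sampler, not any real model): with the seed fixed, the output is identical on every run, and changing the seed changes it.

      ```python
      import random

      # Toy "next token" sampler: same prompt, same token weights every time.
      tokens = ["glue", "cheese", "sauce", "rocks"]
      weights = [0.10, 0.60, 0.25, 0.05]

      def generate(seed: int, length: int = 5) -> list[str]:
          rng = random.Random(seed)  # the seed controls all of the "randomness"
          return rng.choices(tokens, weights=weights, k=length)

      print(generate(seed=42))  # same output every run for seed 42
      print(generate(seed=42))  # identical to the line above
      print(generate(seed=7))   # different seed, (very likely) different output
      ```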

      • jyte@lemmy.world

        What happened to my computers being reliable, predictable, idempotent? :'(

          • jyte@lemmy.world

            Technically they still are, but since you don’t have control over the seed, in practice they are not.

            • QuadratureSurfer@lemmy.world

              OK, but we’re discussing whether computers are “reliable, predictable, idempotent”. Statements like this about computers are generally made when discussing the internal workings of a computer among developers or at even lower levels among computer engineers and such.

              This isn’t something you would say at a higher level for end-users because there are any number of reasons why an application can spit out different outputs even when seemingly given the “same input”.

              And while I could point out that llama.cpp is open source (so you could just go in and test this by forcing the same seed every time… there’s a rough sketch of that check below), it doesn’t matter, because your statement effectively boils down to something like this:

              “I clicked the button (input) for the random number generator and got a different number (output) every time, thus computers are not reliable or predictable!”
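
              For the llama.cpp point above, a rough sketch of that check, assuming the llama-cpp-python bindings (the model path, prompt, and seed values here are made up, and exact parameter names can differ between versions): same model, same prompt, same sampling settings, same seed should reproduce the same completion.

              ```python
              from llama_cpp import Llama  # pip install llama-cpp-python

              PROMPT = "How many rocks should I eat per day?"

              def run_once(seed: int) -> str:
                  # Hypothetical local model path; any GGUF model illustrates the point.
                  llm = Llama(model_path="./models/example.gguf", seed=seed, verbose=False)
                  out = llm(PROMPT, max_tokens=64, temperature=0.8)
                  return out["choices"][0]["text"]

              print(run_once(1234) == run_once(1234))  # expected: True (same seed)
              print(run_once(1234) == run_once(9999))  # almost certainly False
              ```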

              If you wanted to make a better argument about computers not always being reliable/predictable, you’re better off pointing at how radiation can flip bits in our electronics (which is one reason why we have implemented checksums and other tools to verify that information hasn’t been altered over time or in transit). Take, for instance, the example of what happened to some voting machines in Belgium in 2003: https://www.businessinsider.com/cosmic-rays-harm-computers-smartphones-2019-7
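
              As a small plain-Python illustration of the checksum idea (purely illustrative values): even a single flipped bit changes the digest, which is how that kind of silent corruption gets caught.

              ```python
              import hashlib

              data = bytearray(b"vote count: 4096")
              original_digest = hashlib.sha256(data).hexdigest()

              # Simulate a cosmic-ray style single-bit flip in the stored bytes.
              data[0] ^= 0b00010000

              if hashlib.sha256(data).hexdigest() != original_digest:
                  print("Checksum mismatch: data was altered in storage or in transit.")
              ```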

              Anyway, thanks if you read this far, I enjoy discussing things like this.