• Warfarin@lemmy.world
    1 year ago

    I mean, if AI keeps getting neutered with misinformation and refuses to use science because it might hurt someone’s feelings, then that panel is certainly a reality

    • TeckFire@lemmy.world
      1 year ago

      Is AI really being handicapped that badly? I mean, I knew certain topics were blocked off in the biggest public chatbots, but the machine learning data is still there, no?

      • Warfarin@lemmy.world
        1 year ago

        Yeah it really is

        If it says anything that the current messaging doesn’t agree with, they will scrap all of that data

        Feed it a bunch of scientific journals, but then it says there are two genders? Scrap it all, no science for you

        Feed it crime data and it starts noticing patterns that we don’t like knowing about? Scrap it all, the data is misinformation

        That’s why I just laugh at people who say it’ll take over jobs

        • TeckFire@lemmy.world
          1 year ago

          Well, to be fair, it is being fed data made by humans, so realistically its conclusions will likely follow the ideas of past human inputs, no?

          And if those results are ideas that the creators would rather not progress with, shouldn’t they cull the outdated ones?

          I’m not speaking to any topics in particular, just the concept of AI creators shaping their creations. After all, if an AI told you the world was flat, wouldn’t you want to change its data to prevent it from producing results saying so?

          • Warfarin@lemmy.world
            1 year ago

            Yeah, feeding it facts and statistics

            shouldn’t they cull the outdated ones?

            No they shouldn’t at all. Do you not learn from mistakes or understand how propaganda works?

            After all, if an AI told you the world was flat,

            It won’t; it is fed scientific data, which is what the modern woke don’t like. It may mention that we used to think the world was flat and how we discovered that not to be the case, and people can ask all sorts of questions that it should be able to answer. But if we did what they are doing and what you want, because being short-sighted is popular, it won’t be able to answer why the world isn’t flat, why we thought it was, and how we came to the scientifically backed conclusion that it is round.

            That’s the problem with modern woke politics. It’s just “our way or you’re a bigotaistphobe” — no learning or backing up claims, just “this is our truth”

            • TeckFire@lemmy.world
              1 year ago

              I understand what you’re trying to say, but you have to acknowledge the flaw in your argument.

              Unless you feed the AI nothing but raw mathematical data (and therefore nothing to contextualize the data with, since humans live in a world outside of raw data), you will always have some bias in what you feed it.

              Even scientific data can have biased context attached to it in research papers. With this in mind, any data fed to a neural network will always carry some bias from what was fed to it. Even if you ask it to produce scientific results, they will likely in some way mimic the methods of the scientists who created the source material.

              That aside, from a political perspective, I gotta say I have no idea what you’re talking about; I don’t follow most modern political arguments if I can help it. With that said, every person who views the results of said data will have a political bias, so any results will be further “tainted” by whoever publishes the AI’s findings. Even so, I find that advancements in open source neural network algorithms are becoming more effective and more accessible for everyday people to use, so at some point the only thing affecting the decisions will be sheer popularity, which is something on an entirely different scale.