• ricecake@sh.itjust.works · 9 months ago

    My point is that it’s not an unnecessary bias; it’s different results for different queries.

    Yes, I am going out of my way to say that an automated system doesn’t need to treat an issue with a 1 in 3 incidence rate the same as one with a 1 in 10 incidence rate.

    Providing relevant information is literally their reason for existence, so I’m not sure I agree that it’s not the point. There isn’t some person auditing the results; the system sees the query and then sees what content people who make that query engage with.
    If the system recognizes that a threshold of people with queries similar to one of them engage with domestic abuse resources, and trips a condition that gives those resources special highlighting, while people with queries similar to the other engage more often with dysfunctional-relationship resources, I don’t see that as a difference that needs correction.
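
    Roughly the condition I’m picturing, as a toy sketch (the threshold, the function names, and the session-log shape are all invented for illustration; nothing here is published Google behavior):

    ```python
    # Hypothetical: trip a "special highlighting" condition when enough users
    # issuing similar queries end up engaging with one resource category.
    from collections import Counter

    HIGHLIGHT_THRESHOLD = 0.30  # invented cutoff, not a real Google value

    def highlighted_category(engaged_categories: list[str]) -> str | None:
        """engaged_categories: per-session labels for what users with
        similar queries engaged with, e.g. "domestic_abuse_hotline"."""
        if not engaged_categories:
            return None
        category, hits = Counter(engaged_categories).most_common(1)[0]
        if hits / len(engaged_categories) >= HIGHLIGHT_THRESHOLD:
            return category  # condition tripped: highlight this category
        return None

    # Two query clusters can legitimately trip different conditions:
    print(highlighted_category(
        ["domestic_abuse_hotline"] * 40 + ["news"] * 30 + ["forum"] * 30))
    print(highlighted_category(
        ["relationship_advice"] * 40 + ["news"] * 30 + ["forum"] * 30))
    ```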

    I’m not sure what to tell you about getting different results. I searched logged out, in incognito, and in Firefox.

    • Saik0@lemmy.saik0.com · 9 months ago

      “There isn’t some person auditing the results”

      Those top bar things… are literally audited answers from Google. They sit outside the normal search results and displace the actual results entirely in the UI. Someone at Google literally hard-coded that anything returning results relating to women’s domestic violence should present that banner.

      • ricecake@sh.itjust.works · 9 months ago

        That’s not how it works. They code a confidence threshold on how likely the relevant result is to have to do with domestic violence in general. That’s why it presents the same banner when the result is more unambiguously relevant to domestic violence.
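
        Schematically, something like this toy sketch (the scoring function and the cutoff are stand-ins I made up, not Google’s actual pipeline):

        ```python
        # Toy sketch of a confidence threshold, not Google's actual code.
        BANNER_CUTOFF = 0.8  # invented value

        def dv_confidence(query: str) -> float:
            """Stand-in for a learned model scoring how likely the query
            relates to domestic violence in general."""
            signals = ("abuse", "abusive", "hit me", "hotline", "violence")
            return min(1.0, sum(term in query.lower() for term in signals) / 2)

        def show_banner(query: str) -> bool:
            # One generic condition on the topic; no human picks which
            # specific queries get the banner.
            return dv_confidence(query) >= BANNER_CUTOFF

        print(show_banner("domestic violence hotline"))  # True: clears the cutoff
        print(show_banner("my wife yelled at me"))       # False: below the cutoff
        ```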

        None of this is the same as a person auditing the results.

        • uis@lemm.ee · 9 months ago

          Can we just agree that whatever metric they chose is biased?

    • intensely_human@lemm.ee · 9 months ago

      So you’re saying it’s the Google algorithm trying to put up whatever is most often the final destination for those search terms.

      That makes sense.
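
      Something like this, if I’m following you (a toy sketch with invented names, not the real ranking code):

      ```python
      # Sketch: promote the page where sessions for a query most often *end*.
      from collections import Counter

      def final_destination(sessions: list[list[str]]) -> str:
          """sessions: for one query, the ordered pages each user visited."""
          last_pages = Counter(s[-1] for s in sessions if s)
          return last_pages.most_common(1)[0][0]

      sessions = [
          ["forum_thread", "hotline_page"],
          ["advice_article"],
          ["hotline_page"],
      ]
      print(final_destination(sessions))  # "hotline_page" ends the most sessions
      ```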

      That aside, do you think that, if a human were to consciously decide what goes at the top for these searches, a man should receive a little lecture on empathy while a woman should be presented with a hotline to get help?