• Marxism-Fennekinism@lemmy.ml · 9 months ago

    https://time.com/6247678/openai-chatgpt-kenya-workers/

    To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest.

    OpenAI’s outsourcing partner in Kenya was Sama, a San Francisco-based firm that employs workers in Kenya, Uganda and India to label data for Silicon Valley clients like Google, Meta and Microsoft. Sama markets itself as an “ethical AI” company and claims to have helped lift more than 50,000 people out of poverty.

    The data labelers employed by Sama on behalf of OpenAI were paid a take-home wage of between around $1.32 and $2 per hour depending on seniority and performance. For this story, TIME reviewed hundreds of pages of internal Sama and OpenAI documents, including workers’ payslips, and interviewed four Sama employees who worked on the project. All the employees spoke on condition of anonymity out of concern for their livelihoods.

    […]

    Documents reviewed by TIME show that OpenAI signed three contracts worth about $200,000 in total with Sama in late 2021 to label textual descriptions of sexual abuse, hate speech, and violence. Around three dozen workers were split into three teams, one focusing on each subject. Three employees told TIME they were expected to read and label between 150 and 250 passages of text per nine-hour shift. Those snippets could range from around 100 words to well over 1,000. All of the four employees interviewed by TIME described being mentally scarred by the work. Although they were entitled to attend sessions with “wellness” counselors, all four said these sessions were unhelpful and rare due to high demands to be more productive at work. Two said they were only given the option to attend group sessions, and one said their requests to see counselors on a one-to-one basis instead were repeatedly denied by Sama management.

    […]

    One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.” The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.

    […]

    That month, Sama began pilot work for a separate project for OpenAI: collecting sexual and violent images—some of them illegal under U.S. law—to deliver to OpenAI. The work of labeling images appears to be unrelated to ChatGPT.

    Gonna leave this here.

  • Marxism-Fennekinism@lemmy.ml · 9 months ago

        The last quote danced around it, but if the implication is that they were seeking out and collecting CSAM, which is a sex crime to access, possess, and distribute, why the fuck are the boards of both companies not in prison and on the sex offender list?!

        I mean, I know why, but

        • SacrificedBeans@lemmy.world · 9 months ago

          I’m sure there’s some loophole there, maybe between different countries’ laws. And if there isn’t: hey, we’ll make one!

        • Clbull@lemmy.world · 9 months ago

          Isn’t CSAM classed as images and videos that depict child sexual abuse? Last time I checked, written descriptions alone didn’t count, unless they were being forced to look at AI-generated images of such acts?

          • Strawberry@lemmy.blahaj.zone · 9 months ago

            That month, Sama began pilot work for a separate project for OpenAI: collecting sexual and violent images—some of them illegal under U.S. law—to deliver to OpenAI. The work of labeling images appears to be unrelated to ChatGPT.

            This is the quote in question. They’re talking about images.

        • smooth_tea@lemmy.world · 9 months ago

          I find this a bit alarmist and exaggerated. Consider the motive and the alternative. Do you really think companies like that have any option other than to deal with this material?

    • GenesisJones@lemmy.world · 9 months ago

      This reminds me of an NPR podcast from 5 or 6 years ago about the people who get paid by Facebook to moderate the worst of the worst. A former employee gave an interview about the manual review of images that were CP- and rape-related, iirc. Terrible stuff.

    • Clbull@lemmy.world · 9 months ago

      So they paid Kenyan workers $2 an hour to sift through some of the darkest shit on the internet.

      Ugh.