Reddit is notorious for responding to financial incentives. In the past, they would ban communities only when overwhelming negative publicity made them toxic to advertisers. During those purges, they would often throw in some leftist subs to keep the user base's political average from shifting leftward, but the purges were never proactive.
I think we’ve entered a new era where Reddit is less concerned about which subs might scare advertisers and more concerned about which subs generate the kind of content that is valuable for LLM training. If I were training the next version of ChatGPT, I would be alarmed if it spontaneously invited me to masturbate with it, or if prompts for images of a “battle station” produced images of walls covered in nude women.
It seems like they’ve gotten worse about it since they IPO’d. Or maybe that was just in the lead-up to the IPO.
I would hope that people training AI models would be selective about which subs they include.