Artificial intelligence systems like ChatGPT could soon run out of what keeps making them smarter — the tens of trillions of words people have written and shared online.

A new study released Thursday by research group Epoch AI projects that tech companies will exhaust the supply of publicly available training data for AI language models by roughly the turn of the decade – sometime between 2026 and 2032.

Comparing it to a “literal gold rush” that depletes finite natural resources, Tamay Besiroglu, an author of the study, said the AI field might face challenges in maintaining its current pace of progress once it drains the reserves of human-generated writing.

In the short term, tech companies like ChatGPT-maker OpenAI and Google are racing to secure and sometimes pay for high-quality data sources to train their AI large language models – for instance, by signing deals to tap into the steady flow of sentences coming out of Reddit forums and news media outlets.

In the longer term, there won’t be enough new blogs, news articles and social media commentary to sustain the current trajectory of AI development, putting pressure on companies to tap into sensitive data now considered private — such as emails or text messages — or to rely on less reliable “synthetic data” spit out by the chatbots themselves.

  • olicvb@lemmy.ca · 1 month ago

    Something I don’t quite get is: do they need that much material? Once they train on all of humanity’s texts, why would they need more? If I wanted to learn syntax and everything required for proper spelling, I wouldn’t need even 1/1000th (less, even) of what they’ve fed it already. I get that it’s not just spelling, but if we don’t have any more to feed it, shouldn’t it already know all there is to answer our questions?

    Makes me wonder if it’s not the amount of data that’s the issue, but more of how it’s built.