• cmnybo@discuss.tchncs.de
    link
    fedilink
    English
    arrow-up
    0
    ·
    3 months ago

    It’s rather hard to open source the model when you trained it off a bunch of copyrighted content that you didn’t have permission to use.

    • chebra@mstdn.io

      @cmnybo @marvelous_coyote That’s… not how it works. You wouldn’t find any copyrighted works stored in the model. We’re already fairly sure even the closed models were trained on copyrighted works, based on what they sometimes produce. But the AI companies aren’t denying that; they’re just arguing it was all “fair use”. They’re relying on a legal loophole, and they might win. Basically, the only way they could be punished on copyright grounds is if the models reproduce some copyrighted content verbatim.

        • chebra@mstdn.io

          @ReakDuck Yup, and that’s a much better avenue for fighting the AI companies, because scraping is fundamentally almost impossible to avoid in ML models. We should stop complaining about how they scraped copyrighted content; that complaint won’t succeed until the legal loophole is closed. But when the models reproduce copyrighted content verbatim, that could be fatal. And this also applies to Copilot reproducing GPL code samples, for example.
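
          The “verbatim reproduction” test discussed above can be sketched as a simple shared n-gram check: does the model output contain a long run of words copied exactly from a protected text? This is a toy illustration; the 8-word threshold and function names are assumptions, not any legal or industry standard.

```python
# Toy sketch: flag verbatim reproduction of a protected text in model output
# by looking for any shared run of n consecutive words.
# The default n=8 is an illustrative assumption, not a legal standard.

def ngrams(words, n):
    """Return the set of all consecutive n-word tuples in a word list."""
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(model_output: str, protected_text: str, n: int = 8) -> bool:
    """True if the output shares at least one n-word run with the protected text."""
    out_grams = ngrams(model_output.lower().split(), n)
    src_grams = ngrams(protected_text.lower().split(), n)
    return bool(out_grams & src_grams)
```

          Real overlap detectors (e.g. for code-licensing audits) normalize whitespace, punctuation, and identifiers before comparing, but the core idea is the same: exact long-run matches, not mere stylistic similarity, are what would count as verbatim reproduction.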

    • flamingmongoose@lemmy.blahaj.zone

      BERT and early versions of GPT were trained on copyright-free datasets like Wikipedia and out-of-copyright books. Unsure whether those would be big enough for the modern ChatGPT types.