• Tar_Alcaran@sh.itjust.works

It’s because of how the generative models are created and how they’re censored.

At its most basic level, a generative model takes input data, breaks it into pieces, and assigns values to those pieces based on their neighbours. It builds up a model of which words are used together frequently, and in which contexts.
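A toy sketch of that idea (pure illustration, nowhere near how a real transformer is actually trained): just count which words show up near each other.

```python
# Toy co-occurrence model: purely illustrative, not how real LLMs are trained,
# but it captures the "score words by their neighbours" idea.
from collections import defaultdict

def cooccurrence_counts(text: str, window: int = 2) -> dict:
    """Count how often each word appears within `window` words of every other word."""
    words = text.lower().split()
    counts = defaultdict(lambda: defaultdict(int))
    for i, word in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if i != j:
                counts[word][words[j]] += 1
    return counts

model = cooccurrence_counts("the cat sat on the mat and the cat ate the fish")
print(dict(model["cat"]))  # which words tend to sit next to "cat", and how often
```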

But that kind of model isn’t human-readable; it’s a giant multi-dimensional cloud of numbers and connections, not actual code. You could change the inputs used to create the model, but that means manually filtering all the training data, which isn’t realistic either and will probably skew your model, possibly into uselessness.

So you have to censor either the input or the output. You don’t usually want to censor the input, because there are all sorts of harmless questions to ask about Tiananmen Square, and it’s very easy to dodge. So you censor the output instead; that’s where the “harm” is, after all.

You let the model generate a reply, then check whether it uses certain terms or specific bits of info, and if it does, you swap the whole reply out for a canned one.
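A minimal sketch of such a post-fact filter (the blocklist, canned reply, and function name are all made up here; this shows the mechanism, not anyone’s actual implementation):

```python
# Hypothetical output filter: scan the finished reply for flagged terms and,
# if any match, throw the whole reply away and return a canned response instead.
BLOCKED_TERMS = ["tiananmen square", "tank man"]   # made-up blocklist
CANNED_REPLY = "Sorry, I can't help with that topic."

def filter_output(reply: str) -> str:
    """Pass the model's reply through unless it mentions a blocked term."""
    lowered = reply.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return CANNED_REPLY
    return reply
```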

Which means we don’t have to trick the generative model, just the post-fact filter. And since generative models can be persuaded to change their style and form (sometimes into less readable, more roundabout, less clearly defined terms), it becomes very, very hard to censor them effectively.
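Reusing the toy filter above, you can see why that works: the same information, phrased without the flagged keywords, sails straight through the check.

```python
direct = "On June 4th 1989, protesters in Tiananmen Square were met with..."
evasive = "In early June 1989, a famous public plaza in Beijing saw protests that were met with..."

print(filter_output(direct))   # -> the canned reply, censored
print(filter_output(evasive))  # -> the evasive phrasing, untouched
```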