• Trailblazing Braille Taser@lemmy.dbzer0.com

    In this case, the models are given part of the text from the training data and asked to predict the next word. This appears to work decently well on the pre-2023 internet, since that approach is what brought us ChatGPT and friends.
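    A toy sketch of that objective (a bigram counter standing in for the neural network; the corpus here is made up):

    ```python
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # Tally which word follows which: a crude stand-in for what an
    # LLM learns at vastly larger scale with a neural network.
    next_words = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        next_words[current][nxt] += 1

    def predict(word):
        # Return the most frequent continuation seen in training.
        return next_words[word].most_common(1)[0][0]

    print(predict("sat"))  # -> "on", since "sat" was always followed by "on"
    ```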

    This paper claims that when you train LLMs on output from other LLMs, the result is garbage. The problem is that the quality of each guess is judged against the training data itself, not by some external, intelligent judge, so errors in the data are never corrected, only reinforced.
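    Here's a minimal simulation of that feedback loop (pure resampling, no real model; the words are invented). Rare words tend to die out generation after generation, because each "model" is only ever judged against the previous model's output:

    ```python
    import random
    from collections import Counter

    random.seed(0)
    # Generation 0 "training data": an imbalanced word distribution.
    data = ["red"] * 40 + ["green"] * 30 + ["blue"] * 20 + ["gray"] * 10

    for generation in range(1, 6):
        counts = Counter(data)
        # "Train" by fitting the empirical distribution, then "generate"
        # the next training set by sampling from it. Sampling noise means
        # rare words can vanish, and once gone they never come back.
        data = random.choices(list(counts), weights=counts.values(), k=len(data))
        print(generation, Counter(data))
    ```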

    • andallthat@lemmy.world

      Ah, I get what you’re saying, thanks! “Good” means that what the machine outputs should be statistically similar (as measured across billions of parameters) to the provided training data, so if the training data gradually gains more examples of, e.g., noses being attached to the wrong side of the head, the model also grows more likely to generate similar output.
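      A quick numerical version of that (the nose example reduced to a two-outcome distribution; the numbers are invented):

      ```python
      from collections import Counter

      training = ["nose-on-face"] * 95 + ["nose-on-side"] * 5

      def p(outcome, counts):
          # Empirical probability the model assigns to an outcome.
          return counts[outcome] / sum(counts.values())

      print(p("nose-on-side", Counter(training)))  # 0.05

      # If generated mistakes flow back into the training set,
      # the model's estimate of the mistake rises with them:
      training += ["nose-on-side"] * 20
      print(p("nose-on-side", Counter(training)))  # 25/120 ≈ 0.21
      ```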