• Pennomi
      4 points · 2 months ago

      That’s correct, and the paper supports this. But people don’t want to believe it’s true, so they keep propagating this myth.

      Training on AI outputs is fine as long as you filter those outputs down to only the things you want the model to see.
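
      A rough sketch of what that filtering step can look like, with hypothetical generate() and quality_score() stand-ins (swap in whatever filter you actually trust: tests, a reward model, human review):

      ```python
      import random

      def generate(prompt: str) -> str:
          """Hypothetical stand-in for a model call; sometimes produces a flawed output."""
          return prompt + (" [flawed]" if random.random() < 0.3 else " [ok]")

      def quality_score(output: str) -> float:
          """Hypothetical stand-in for a filter: tests, a reward model, human review, etc."""
          return 0.0 if "[flawed]" in output else 1.0

      def build_synthetic_dataset(prompts, threshold=0.5):
          """Keep only the generations that pass the filter, so junk never enters the training set."""
          kept = []
          for prompt in prompts:
              candidate = generate(prompt)
              if quality_score(candidate) >= threshold:
                  kept.append((prompt, candidate))
          return kept

      if __name__ == "__main__":
          data = build_synthetic_dataset([f"prompt {i}" for i in range(10)])
          print(f"kept {len(data)} of 10 generations")
      ```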

    • @eleitl@lemm.ee
      3 points · 2 months ago

      How do you verify novel content generated by AI? And how do you verify that content harvested from the Internet is “correct”?

    • Binette
      2 points · 2 months ago

      The issue is that A.I. always makes a certain number of mistakes when it outputs something. They may even be the tiniest, most insignificant mistakes. But if it internalizes one, its next output adds a new mistake on top of the one it internalized, and so on, generation after generation (see the toy sketch after this comment).

      Also, this is more with scraping in mind. So like, the A.I. goes on the internet, scrapes other A.I. images because there are a lot of them now, and gets worse.
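
      A toy simulation of that compounding, using made-up numbers rather than any real training loop: each “generation” inherits whatever error fraction was present in the previous generation’s outputs, plus a small rate of fresh mistakes, unless the bad outputs are filtered out before the next round.

      ```python
      import random

      def run_generations(new_error_rate=0.02, generations=10, samples=10_000, filter_outputs=False):
          """Toy illustration: each generation trains on the previous generation's outputs,
          so inherited mistakes plus fresh mistakes compound unless bad outputs are removed."""
          inherited = 0.0  # fraction of mistakes baked into the current model
          for gen in range(1, generations + 1):
              # an output is wrong if it repeats an inherited mistake or adds a fresh one
              wrong = [random.random() < inherited or random.random() < new_error_rate
                       for _ in range(samples)]
              raw_error = sum(wrong) / samples
              print(f"gen {gen}: {raw_error:.1%} of outputs are wrong")
              # the next model internalizes whatever error survives in its training data
              inherited = 0.0 if filter_outputs else raw_error

      run_generations(filter_outputs=False)  # error fraction creeps up every generation
      run_generations(filter_outputs=True)   # stays near the base rate when outputs are filtered
      ```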