• @Godort@lemm.ee
    27
    8 months ago

    So far, the only thing AI has shown to be pretty good at is summarizing large amounts of data, and even then it can't be fully trusted not to make mistakes.

    • @h3ndrik@feddit.de
      15
      edit-2
      8 months ago

      Hmm, I think summarization is a bad example. I've read quite a few AI summaries that miss the point, simplify things to the point of being wrong, or add or paraphrase things in a way that makes them at least ambiguous. Even with state-of-the-art tech. Especially if the original texts were already condensed or written by professionals, like scientific papers or good news articles…

      What I think works better are tasks like translating text. That works really well. Sometimes rewording text, too. Or the style transfer the image generators can do. That's impressive. Restoring old photos, coloring them, or editing something in/out. I also like the creativity they provide me with. They can come up with ideas and flesh out mine.
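      For the translation use case, here's a minimal sketch of what that can look like locally, assuming the Hugging Face transformers library and a Helsinki-NLP OPUS-MT checkpoint (neither is named above, just one common way to do it):

      ```python
      # Minimal sketch: local machine translation with an off-the-shelf model.
      # Assumes the Hugging Face `transformers` library and the
      # Helsinki-NLP/opus-mt-de-en checkpoint as an example model choice.
      from transformers import pipeline

      translator = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")

      result = translator("Maschinelle Übersetzung funktioniert inzwischen erstaunlich gut.")
      print(result[0]["translation_text"])
      ```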

      I think AI is a useful tool for tasks like that, but not so much for summarization or handling factual information. I don't see a reason why further research couldn't improve on that… But at the current state it's just the wrong choice of tool.

      And sure, it doesn’t help that people hype AI and throw it at everything.

    • MxM111
      8
      edit-2
      8 months ago

      AI is a tool, and one needs to learn how to use it. But even today, just two years after LLMs were introduced to the public, it has lots of uses. This is one of the fastest adoptions of nearly any technology.