Greg Rutkowski, a digital artist known for his fantasy style, opposes AI art, but his name and style have been frequently used by AI art generators without his consent. In response, Stability AI removed his work from the training dataset for Stable Diffusion 2.0. However, the community has now created a tool that emulates Rutkowski's style against his wishes using a LoRA model. While some argue this is unethical, others justify it on the grounds that Rutkowski's art was already widely used in Stable Diffusion 1.5. The debate highlights the blurry line between innovation and infringement in the emerging field of AI art.
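For context on what "a LoRA model" means mechanically, here is a minimal sketch of how a community LoRA is typically layered on top of a Stable Diffusion checkpoint at inference time. It assumes the Hugging Face diffusers library; the LoRA repository name and prompt are hypothetical, and the actual community tool may work differently.

```python
# Minimal sketch: applying a community LoRA on top of a base Stable Diffusion
# model. The base weights are not retrained; the LoRA only injects small
# low-rank weight deltas. The LoRA name and prompt below are illustrative.
import torch
from diffusers import StableDiffusionPipeline

# Load the unmodified base model released by Stability AI.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
).to("cuda")

# Layer a third-party style LoRA over it (hypothetical repository name).
pipe.load_lora_weights("some-user/artist-style-lora")

# Generate with the combined weights.
image = pipe("a dragon over a burning castle, oil painting").images[0]
image.save("out.png")
```

The key point of the layering is that the base checkpoint is left unchanged; the style comes entirely from the separately trained LoRA weights loaded at inference time.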

  • FaceDeer
    1 year ago

    "The company pulled down his work, fed it to their AI, then sold the AI as their product."

    If you read the article, not even that is what’s going on here. Stability AI:

    • Removed Rutkowski’s art from their training set.
    • Doesn’t sell their AI as a product.
    • Someone else added Rutkowski back in by training a LoRA on top of Stability’s AI.
    • They aren’t selling their LoRA as a product either.

    So none of what you’re objecting to is actually happening. All cool? Or will you just come up with some other thing to object to?

    • Pulse
      1 year ago

      But they did.

      (I’m on mobile so my formatting is meh)

      They put his art in, and only removed it when they were called out.

      Once it was removed, they did nothing to prevent it from being added back.

      As for whether or not they sell the product: at this point, they still used the output of his labor to build it.

      That’s the thing: everyone trying to justify why it’s okay for these companies to do this keeps leaning on semantics, legal definitions, or “well, back during the industrial revolution…” to get around the fact that what these companies are doing is unethical. They’re taking someone else’s labor without compensation or consent.