Under US copyright law, only works created by humans can be copyrighted. Courts have (IMHO rightly) denied copyright registration to AI-generated images.

My question is: when do you think AI image tools cross from the realm of a “tool” (one that, for example, generates and fills in a background so an item can be removed from a photo) into the realm of “a human didn’t make this”?

What if an artist trains an AI so specialized it only makes their style of art? At what point do you think the images they create with it begin to count as their “work product”?

  • anarchost@lemm.ee
    5 months ago

    Are you talking about an artist exclusively running their images through an AI model until it is capable of regenerating images that look like they were created by the artist and have some semblance of intent?

    To get anything that looks remotely like what people want, I’m pretty sure they would have to upload millions of pictures of their own creation first. So most people just layer their images on top of the giant mash of ethically sketchy data that was already there.

    • curiousPJ@lemmy.world
      5 months ago

      “I’m pretty sure they would have to upload millions of pictures of their own creation first.”

      From the YouTube guides on generating your own LoRA models… nah, just a couple of reference poses and it’s ready to go.

      • model_tar_gz@lemmy.world
        5 months ago

        A LoRA still has the fully trained base model underneath; it is a small low-rank adjustment layered on top of the base weights, not a complete replacement or complete retraining of them.
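
        The point above can be made concrete with a minimal NumPy sketch of the LoRA idea (shapes, names, and the scaling convention here are illustrative, not any particular library’s API): the base weight matrix stays frozen, and the adapter only contributes a rank-r correction added on top of it.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2                       # model dimension and LoRA rank (r << d)
alpha = 16                        # scaling hyperparameter

W = rng.normal(size=(d, d))      # frozen base weight -- never modified
A = rng.normal(size=(r, d))      # trainable down-projection
B = np.zeros((d, r))             # trainable up-projection (initialized to zero)

def forward(x, W, A, B, alpha, r):
    # base path plus low-rank adapter path: (W + (alpha/r) * B @ A) @ x
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d)

# With B = 0 the adapter contributes nothing: output equals the base model's.
assert np.allclose(forward(x, W, A, B, alpha, r), W @ x)

# "Training" updates only A and B; W is untouched. The effective change to
# the weights is a matrix of rank at most r, not a full rewrite of W.
B = rng.normal(size=(d, r))
delta = (alpha / r) * B @ A
```

        Because `delta` has rank at most r, the adapter can only nudge the base model within a low-dimensional subspace, which is why a LoRA trained on a handful of reference images still leans entirely on whatever the base model already learned.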

    • whodatdair@lemmy.blahaj.zoneOP
      5 months ago

      Ahh, interesting. I almost admitted to not knowing that when I posted, haha. I’m not hugely familiar with how image AI works, and I wasn’t sure one person could produce enough training material under current tech.

      Interesting to think about it as a hypothetical, though.