Greg Rutkowski, a digital artist known for his surreal style, opposes AI art but his name and style have been frequently used by AI art generators without his consent. In response, Stable Diffusion removed his work from their dataset in version 2.0. However, the community has now created a tool to emulate Rutkowski’s style against his wishes using a LoRA model. While some argue this is unethical, others justify it since Rutkowski’s art has already been widely used in Stable Diffusion 1.5. The debate highlights the blurry line between innovation and infringement in the emerging field of AI art.
@raccoona_nongrata
Actually, it is necessary. The process of creativity is much, much more a synergy of past consumption than we think.
It took 100,000 years to get from cave drawings to Leonardo da Vinci.
Yes we always find ways to draw, but the pinnacle of art comes from a shared culture of centuries.
Stable Diffusion, sitting on its own for 100,000 years or a million would not create art, that is the distinction.
A human could express themselves with art in some form or another having never been exposed to other human art. Whether you consider that art refined doesn’t really factor into the question.
@raccoona_nongrata
A machine will not unilaterally develop an art form, and develop it for 100,000 years.
Yes I agree with this.
However, they are not developing an art form now.
Nor did Monet, Shakespeare, or Beethoven develop an art form. Or develop it for 100,000 years.
So machines cannot emulate that.
But they can create the end product based on past creations, much as Monet, Shakespeare, and Beethoven did.
Sure, but those individuals are responsible for their proportional contribution to that 100,000 years, which can be a lot to a human being, sometimes a life’s work.
If you stopped feeding new data to Diffusion, it would not progress or advance the human timeline of art, it would just stagnate. It might have a broader scope than if you fed it cave drawings, but it would never contribute anything itself.
People don’t want their work and contribution scooped up by a machine that then shoves them aside with literally no compensation.
If we create a society where no one has to work, we can revisit the question, but that’s nowhere on the horizon.
@raccoona_nongrata
Actually this is how we are training some models now.
The models are separated and fed different versions of the source data; then we kick off a process of feeding each model content created by the other models, forming a loop. It has proven very effective. It is also the case that this generation of AI-created content becomes the next generation's training data, simply by existing. What you are saying is absolutely false: generated content DOES have a lot of value as source data.
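The cross-feeding loop described above can be sketched with a toy example. This is a hypothetical illustration, not any real training pipeline: the "models" here are trivial mean estimators, each started on a disjoint shard of the source data, and each is then retrained on samples generated by the other.

```python
import random

class ToyModel:
    """A trivial 'generative model': learns the mean of its training
    data and generates noisy samples around that mean."""
    def __init__(self, seed):
        self.rng = random.Random(seed)
        self.mean = 0.0

    def train(self, data):
        self.mean = sum(data) / len(data)

    def generate(self, n):
        return [self.mean + self.rng.gauss(0, 0.1) for _ in range(n)]

source = list(range(10))                   # the original "human" data
shard_a, shard_b = source[:5], source[5:]  # disjoint shards per model

model_a, model_b = ToyModel(1), ToyModel(2)
model_a.train(shard_a)   # starts at mean 2.0
model_b.train(shard_b)   # starts at mean 7.0

# Cross-feeding loop: each model's generated output is added to the
# other model's training data on the next round.
for _ in range(5):
    out_a = model_a.generate(20)
    out_b = model_b.generate(20)
    model_a.train(shard_a + out_b)
    model_b.train(shard_b + out_a)

# After a few rounds the two estimates have drifted toward each other,
# because each model has absorbed information from the other's shard
# purely through generated samples.
```

The point of the sketch is only the data flow: no model ever sees the other's raw shard directly, yet generated content carries that information across.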
No, humans create and develop styles in art from "mistakes" that an AI would not continue pursuing, because they personally like them or have a strange addiction to their own creative process. The current hand mistakes, for example, were perhaps one of the few interesting things AI has done…
Current AI models recreate what is most liked by the majority of people.