Music publishers sue happy in the face of any new technological development? You don’t say.
If an intern gives you some song lyrics on demand, do they sue the parents?
Do we develop all future A.I. technology only if it can completely eschew copyrighted material from its comprehension?
"I am sorry, I’m not allowed to refer to the brand name you are brandishing. Please buy our brand allowance package #35 for any action or communication regarding this brand content."
I dream of a future when we think of the benefit of humanity over the maintenance of our owners’ authoritarian control.
Uh… what? That analogy makes no sense. AI is trained on actual lyrics, which is why the companies who create these models are at risk (they don’t own the data they’re feeding into the model).
Also your comment is completely mixing Trademark and Copyright examples. It has nothing to do with brand names and everything to do with intellectual property.
In reality, people learn how to write lyrics because they listen to songs. Nobody writes a song without having listened to thousands of them, and many human-written songs are really similar to each other; otherwise the music industry wouldn’t be littered with lawsuits. I don’t really see the difference.
It may surprise you to learn that ‘people’ and ‘not people’ are treated differently under the law.
Where did I ever say that a stupid AI should get any rights to its own product?
That’s not what I meant by that. People should have the rights to the products they produce using the tools at their disposal.
Chatbots don’t have physical bodies that require food and shelter. So even if you could prove their creativity was identical to real human creativity and not a crude imitation more akin to assembling random collages, they still don’t deserve the same protections as real artists with physical bodies that need food and shelter.
Which isn’t even approaching the obvious retort that their creativity is a crude imitation of real creativity.
Copyright doesn’t exist because there’s some important moral value to the useful arts. It exists to keep food in bellies.
You’re bending over backwards to protect bots as deserving identical rights to humans. For what purpose should they have those rights? The only benefit to treating the bots this way is to ensure the rich tech oligarchs that already have undue power and influence in our society get even richer and get even more influence.
No, it exists to maintain the profits of large corporations. Copyright, patents, and other intellectual property rights were created under the false pretense that they “protect the little person”, but these are lies told by the rich and powerful to keep themselves rich and powerful. Time and time again we have seen how broken the patent system is, how impossible it is not to step on musical copyright, how Disney has extended copyright terms all but forever, and how the megacorporations have far more money than everybody else to defend those copyrights and patents. These people are not your friend, and their legal protections are not for you.
deleted by creator
The LLMs don’t deserve or have any rights. They’re a tool that people can use, just like reference material, spellcheckers, asset libraries, or whatever else creatives use. As long as they don’t actually violate copyright in the classical sense of just copy-pasting stuff, the product people generate using them is probably as (un)original as a lot of art out there. And collages can be transformative enough to qualify for copyright.
As far as we know, that is exactly how they work. They are very, very complex systems for copying and pasting stuff.
Sure, if they were made with human creativity they deserve the protections meant to keep creative humans alive. But who cares? They are not humans and thus do not get those protections.
They are physically unable to just copy paste stuff. The models are tiny compared to the training data, they don’t store it.
That claim doesn’t prove your premise. I get that it feels clever, but it isn’t.
Just because they’re very good at reproducing information from highly pared down and compressed forms does not mean they are not reproducing information. If that were true, you wouldn’t be able to enforce copyright on a jpeg photo of a painting.
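To make that analogy concrete, here is a toy lossy codec (an illustrative stand-in for JPEG, not how any real codec or model works): it throws away most of the detail, yet the decoded result is still unmistakably a reproduction of the input.

```python
# A minimal sketch of the point: lossy compression discards detail,
# yet the reconstruction still reproduces the original's information.
# A "signal" is quantized down to 16 levels (a crude lossy codec)
# and then decoded; the result is degraded but clearly the same data.

def lossy_encode(samples, levels=16):
    lo, hi = min(samples), max(samples)
    step = (hi - lo) / (levels - 1)
    # store only tiny integer codes plus the range: far less data than the input
    return lo, step, [round((s - lo) / step) for s in samples]

def lossy_decode(lo, step, codes):
    return [lo + c * step for c in codes]

signal = [0.0, 1.7, 3.9, 8.2, 8.3, 4.1, 1.6, 0.2]
decoded = lossy_decode(*lossy_encode(signal))
max_err = max(abs(a - b) for a, b in zip(signal, decoded))
print(decoded)
print(max_err)  # small: the "copy" is imperfect but unmistakably the original
```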
If it was a compression algorithm then it would be insanely efficient and that’d be the big thing about it. The simple fact is that they aren’t able to reproduce their exact training data so no, they aren’t storing it in a highly compressed form.
Interns are also trained off of actual lyrics… ’cause radio.
Despite what you would think by listening to some of the crap on the radio, the people writing song lyrics actually can think and have intent to express an idea. “AI” writing is glorified text prediction.
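For what “glorified text prediction” means here, a toy bigram model makes the idea concrete (LLMs are vastly more sophisticated, but the training objective is the same in spirit): it only learns which word tends to follow which, with no intent behind what it emits.

```python
# Toy illustration of next-word prediction: learn word-follows-word
# counts from a tiny corpus, then generate by always emitting the
# most frequent successor. No meaning, no intent -- just statistics.
from collections import Counter, defaultdict

corpus = "the night we met the night we danced the night we sang".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # emit whichever word most often followed `word` in the training text
    return follows[word].most_common(1)[0][0]

word = "the"
generated = [word]
for _ in range(5):
    word = predict(word)
    generated.append(word)
print(" ".join(generated))  # → "the night we met the night"
```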
Nobody is “at risk” of anything here. You don’t have to own data to use data, just like you’re not liable for the content of an Internet page because it was downloaded to your browser’s cache.
Everybody who agrees with these lawsuits has a severe misunderstanding of how LLMs and other AI models work. They are large matrices of weights and numbers, not copies of the data they consume. The entire Stable Diffusion model is a 4 GB file trained from billions of images. It’s impossible to “copy” petabytes of images and somehow end up with a few gigabytes of numbers. The transformation is a lossy process, and its result does not fit the legal definition of a copy.
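The back-of-envelope arithmetic behind that claim (the ~2.3 billion image count is an assumed LAION-scale figure, not an exact one): if a ~4 GB model were a literal archive of its training set, each image’s share of the file would be a couple of bytes.

```python
# Rough arithmetic: bytes of model weights available per training image
# if the model were literally storing its training data.
model_bytes = 4 * 1024**3          # ~4 GB of weights
training_images = 2_300_000_000    # assumed dataset size (LAION-scale)

bytes_per_image = model_bytes / training_images
print(f"{bytes_per_image:.2f} bytes available per training image")

typical_jpeg_bytes = 100_000       # a modest web JPEG, for comparison
print(f"shortfall factor: {typical_jpeg_bytes / bytes_per_image:,.0f}x")
```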
That doesn’t make it “not copyright infringement”; it just makes it an efficient compression algorithm. With the right prompt, you can recover copies of the original.
Clearly somebody who’s never used the software.
Finally someone who gets it
I conflate these things because they come from the same intentional source. I associate the copyright-chasing lawyers with the brands that own them; it is just a more generalized example.
Also, an intern who can give you a song’s lyrics was trained on that data. Any effectively advanced future system is largely the same, unless it is just accessing a database or index, like web searching.
Copyright itself is already a terrible mess that largely serves brands who can afford lawyers to harass or contest infringements. This is especially apparent after companies like Disney have all but murdered the public domain as a concept. See the Mickey Mouse Protection Act, as well as other related legislation.
This snowballs into an economy where the Disney company, and similarly benefited brands, can hold on to ancient copyrights and use their standing value to own and control the development and markets of new intellectual properties.
Now, a neural net trained on copyrighted material can reference that memory at least as accurately as an intern pulling from memory, unless it is accessing a database to pull the information. To me, suing on that basis ultimately follows logic that would dictate we have copyrighted material removed from our own stochastic memory, since it treats high-dimensional informational storage as a form of copyright infringement whenever anyone instigates the effort to draw on that information.
Ultimately, I believe our current system of copyright is entirely incompatible with future technologies and could lead to some scary arguments and actions from the overbearing oligarchy. To argue in favour of these actions is to argue never to let artificial intelligence learn as humans do. Given our need for this technology to survive the near future as a species, or at least to minimize excessive human suffering, I think the ultimate cost of pandering to these companies may be indescribably horrid.