The existing industry that’s popped up around LLMs has conveniently ignored that what these models are doing may have been illegal the whole time, and a lot of the experts knew it. This is why it’s so important for folks to realize that the industry is not just thin wrappers around ChatGPT (and that interesting applications of this technology are largely being crowded out by the lowest-hanging fruit). If this is ruled not fair use, then the whole industry will basically disappear overnight and we’ll have to rebuild it from scratch, either with a new business model that pays authors or as open-source/crowd-sourced models (probably both). All that said, we’re almost certainly better off. OpenAI may have kicked off the most recent “gold rush,” but their methods have been terrible both for the industry at large and for further development of the tech.
They always should have had the right business model, one where they paid for this access for AI training. They knew it was wrong, but in their rush to be known they decided it was better to take without asking and ask forgiveness later. Regardless of what happens now, people have already made a name for themselves swindling the likes of Microsoft and will have long, well-paying careers from it.
It seems like it was almost necessary to go through this phase for the sake of developing the tech. Doesn’t a lot of CS research use web-crawling algorithms to gather data without checking whether the information is licensed for such use? What about the fediverse? It remains unclear what the copyright and licensing situation will be should it come into question. There is no EULA to access fedi, just a set of open protocols.
Testing an algorithm for a paper while releasing the weights/data is not the same as selling the output of the algorithm.
It doesn’t matter: scraping data has always been legal.
Depends where you live; my academic advisor set limits on scraping due to past experience.
I seem to remember the NYT suing Google years ago for effectively the same thing. Google copies all NYT articles into its index, then sells ads for people searching for that copyrighted information.
Nah, most of these companies have more than enough of their own users to mine data from.
This is a fair point with regards to a handful of companies (Microsoft, Google, Meta) but there will still be an immediate loss in quality as they go back to basics on their data pipelines. Given how long they’ve spent playing catch up in this space, I suspect progress will be pretty slow from there
This is cute naiveté. This case will drag on for years and eventually be settled behind closed doors.
The Times won’t be the only ones, just the first.
Why would the NYT pay the authors again?
I don’t see why we would be better off if the NYT and other newspapers get a windfall profit. I don’t see the reasoning here at all.
ETA: 6 downvotes so far. Would anyone mind explaining what the problem is? I’m not lying when I say that I don’t see it.
Pretty sure you were downvoted because it looks like you’ve misunderstood. The NYT do, in fact, pay their authors.
Yes, which is why I have trouble understanding what you wrote earlier: “If this is ruled as not fair use then the whole industry will basically disappear overnight and we’ll have to rebuild it from scratch either with a new business model that pays authors[…]”.
Why would the NYT pay the authors again when their archive is used for AI training?
I don’t think NYT contributors should expect a payday out of this but the precedent set may mean that they could expect some royalties for future work that they own outright. The precedent is really the important part here and this will definitely not be the only suit.
Ok. Then it’s not about authors, but about copyright owners. Bit misleading to talk about authors, then.
FWIW it wouldn’t work. The NYT and other newspapers have their whole archives to sell. A few months of a daily newspaper is more than even someone like Stephen King has published in his entire life. It’s not even worth negotiating over such a tiny amount of writing. At best, you could do what stock photography does: writers upload their texts to some website and accept whatever terms are offered. It might be a good business for some middlemen.
A prolific amateur might find it a welcome bit of extra cash. But the story doesn’t stop there.
The extra costs must be passed on to the user. You transfer wealth from the public to a few large-scale owners, aka rich people. And since these AIs are text generators, you can expect that actual authors will bear the brunt of that.
Do you think trickle down has ever worked?
Why do you keep trying to make this about trickle down? That’s not even sort of relevant to what’s going on here.
My preferred solution actually has these models being trained on crowd sourced open datasets and these models are primarily locally run.
Are you seriously trying to gaslight me? Like I can’t still read your original post…
Sure, you didn’t say “trickle down”. Call it whatever you like. It doesn’t change the facts.
I really don’t understand your argument. The best-case scenario here is that LLMs become easily accessible and are largely unmonetized. That is, OpenAI does not sell usage of the model, nor are the models trained on things like news articles; instead they look more like the OpenAssistant dataset (no relation to OpenAI).
Instead, LLMs are strictly a conversational interface for interacting with arbitrary systems. My understanding of the limitations of this technology (I work in this space) is that that’s the only thing we can ever hope for them to do in a resource-efficient way. OpenAI and co. have largely tried to obfuscate this fact for the purpose of maintaining our reliance on them for things that should be happening locally.
Edit: jk I’m gaslighting you because I’m a corporate plant. Trickle down trickle down Ronald Reagan is God
Edit 2: To add a little bit of context, OpenAI’s business model currently consists of burning money and generating hype. A ruling against them would destroy them financially, as there’s no way they’d be able to pay for all of their training data on top of the money they’re already burning.
The NYT as a company is much closer to its authors than the AI companies are to theirs. When it exercises copyright, the owner of those copyrights is the NYT, but the authors are the… well, authors. You’re right that a victory means newspapers get a lot more money.
… But would that be a bad thing? If newspapers become more profitable again, maybe we can see a resurgence of local papers and more reliable news. Instead of MSNBC and Fox and CNN, various papers could be our main media sources.
In any case, there are times when business interests align with employee interests, and this is one of them. The NYT is effectively saying with this lawsuit that OpenAI et al. have been stealing from them and, by proxy, the authors. A victory in this court case would strengthen author rights and ownership. A loss would mean big corporations can take anything made by the public, use it for their AI, and then charge money for it. The training materials have a quantifiable value: the difference between what a trained model sells for and what an untrained one does.
Ok, I see. It’s “trickle-down economics”. Sorry, if you don’t like the term. Feel free to suggest a better one.
The simple fact is, it won’t work.
There is no reason for the NYT, or any other newspaper, to share the profit. I’m not saying that none of the owners will, but most won’t. Even the generous ones won’t bother tracking down former employees or their heirs. In fairness, that’s not economics. It’s just an observation about how people behave. I do note, though, that you are not actually claiming that the authors will get paid.
It won’t make newspapers more profitable, either. The owners of old newspapers will be able to extract a rent for their archives. But where would the extra cash flow for a new newspaper come from? You could say that they have a new potential buyer. But the US population is growing and every new person in the US is a new potential buyer. Every new business is a potential new advertising client. Having a new potential buyer is just not going to make the difference. Although, I do note that you are not actually claiming that this would make newspapers more profitable.
At least I can say that in the last paragraph, you are wrong:
A victory in this court case would strengthen author rights and ownership.
No. It will not strengthen authors. Strengthening ownership strengthens owners. Strengthening ownership of buildings strengthens landlords; it does not strengthen construction workers. They have already been paid in full.
I don’t know if the poster I originally replied to agrees with you, but I can definitely say that I simply do not share your ideology. I hold the view that intellectual property is a privilege granted by society, for the benefit of society. Call that socialism if you like; it’s in the US Constitution. OTOH, you clearly believe that IP entitles someone to a benefit from society, regardless of any harm to society. I don’t know if you believe in these suggested trickle-down benefits. I find it disturbing that you did not actually go so far as to make a definitive claim.
If IP is freely given to AI developers, and the AI they create is freely given to the public, I have no issue at all. In fact, I’d love for that to happen. What I don’t want is what you describe at the end – companies use public and freely given IP to create a closed product sold for profit. And that’s exactly what I see these AI as. As long as they have a premium model, they’re benefiting from society and profiting privately.
I think we actually agree on a fundamental level what the ideal situation and dynamic should be. I just think the NYT winning the court case brings us closer to that ideal. And I think we both detest the idea of OpenAI getting rich off of other people.
I really doubt we agree on any level. You’re pitching a neo-feudal hellhole. I don’t know if you believe that you will be one of the lucky few, or if you believe that the magic of the market will fix things. If it’s the former, then play the lottery instead. If it’s the latter, then you are just wrong. If you believe that you are just arguing for good ole American capitalism, then you are deluding yourself. This is the kind of nonsense that was abolished at the birth of the US, or any other developed nation. It won’t work. Never has, never will.
We agree that information should be freely given and provided to create products that are freely given and provided. We agree that it’s bad for information to be freely given to create products which are sold for private gain.
I’m not sure you’re understanding what I’m saying. Do you disagree with any of the above? When it comes to this specific court case, either the NYT will win or OpenAI will win, and I’m saying the NYT winning is the better of the two outcomes. I’m not saying it’s the ideal we aspire to by any means.
I don’t think I agree with any of that. I’m not sure if I understand any of that.
I hold the view that intellectual property is a privilege granted by society, for the benefit of society.
We agree that information should be freely given…
I guess information here means facts and data that cannot be intellectual property? I don’t necessarily agree that this should be freely given, depending on what “should” means. Unearthing facts takes effort and money. The logic behind some kinds of IP, like patents, is that it is supposed to allow e.g. inventors to monetize their efforts. Society benefits by having more inventions/information. Put another way, it gives people collectively a way to pay inventors without working through the government.
In some cases, it would cause disproportionate harm to society to enforce a monopoly on certain information. Say, some newspaper sleuths uncover a corruption scandal. As soon as they publish, all the other news media will pick it up and report on it. I don’t think it’s a good thing for society that this is so hard to monetize. But I don’t have a solution.
… and provided to create products that are freely given and provided.
I’ve already mentioned that I agree with patents, despite all the abuses of the system. Patents provide a more direct incentive than government funding to think of ways of improving things. They also allow people to vote with their wallet on whether the effort is worth it. Electing representatives who decide on taxes and budgets, and who watch over government officials giving grants, is extremely indirect. The patent system cannot replace government funding, but I believe it is a beneficial complement.
We agree that it’s bad for information to be freely given to create products which are sold for private gain.
So, obviously I don’t agree with this. In fact, I don’t even understand why it would be bad. Why is it bad?
When it comes to this specific court case, either the NYT will win or OpenAI will win, and I’m saying the NYT winning is the better of the two outcomes.
How am I supposed to make sense of that in light of your first paragraph? Apparently, the second sentence (“…sold for private gain”) is the absolute, overriding concern. I don’t understand why. I especially don’t understand why it is so important to you that you would do away with free information if you can’t have that.
Obviously, this implies opposition to any kind of “public domain” information (expired patents or copyrights, scientific facts and laws, and so on…), until we have some kind of communist economic system. I don’t know if you have thought it through to that point.
It certainly seems illegal, but if it is, then all search engines are too. They do the same thing. Search engines copy everything to their internal servers, index it, then sell access to that copyrighted data (via ads and other indirect revenue generators).
Definitely not. Search engines point you to the original and aren’t by any means selling access; that is, the resources are accessible without using a search engine. LLMs are different because they fold the inputs into the final product in a way that makes accessing the original material impossible. What’s more, LLMs can fully reproduce copyrighted works and will try to pass them off as their own work.
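To make that distinction concrete, here’s a toy sketch (purely illustrative, not any real engine’s code) of what a search index actually stores: mappings from terms to document IDs. The index holds pointers back to the originals; to read an article you still have to fetch it from its source, unlike a model whose weights have absorbed the text.

```python
from collections import defaultdict

# Toy inverted index: maps each term to the IDs of documents containing it.
# Note the index stores *pointers* to documents, not the documents themselves.
def build_index(docs):
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    # Return the IDs of documents containing every query term.
    terms = query.lower().split()
    if not terms:
        return set()
    results = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        results &= index.get(term, set())
    return results

docs = {
    "nyt/1": "openai sued over training data",
    "nyt/2": "local newspapers see revenue decline",
}
index = build_index(docs)
print(search(index, "training data"))  # {'nyt/1'}
```

The output of a query is a set of IDs to follow back to the publisher, which is the crux of the fair-use argument for search engines.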
That seems like the only missing part. OpenAI should provide a list of the links used to generate its response.
That is, the resources are accessible without using a search engine.
I don’t understand what you mean. The resources are accessible whether you have a dumb or a smart parser for your search.
What’s more, LLMs can fully reproduce copyrighted works
Google has entire copyrighted works copied onto its servers; that’s how you can query a phrase and get a reply back. They are selling the links to the copyrighted work. If Google had a bug in its search-engine UI like OpenAI’s, you could get that copyrighted data from Google’s servers. Google has a “preview page” feature that gives you a page of copyrighted material without clicking the link. Then there was the Google Books lawsuit, which Google won, where several pages of copyrighted books are shown.
Your first point is probably where we’re headed, but it still requires a change to how these models are built. There’s absolutely nothing wrong with a RAG-focused implementation, but those methods are not well developed enough for there to be turnkey solutions. The issue is still that the underlying model is fairly dependent on works that the companies do not own to achieve the performance standards that have become more or less a requirement for these sorts of products.
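For what it’s worth, the RAG pattern looks roughly like this toy sketch (the function names, scoring, and corpus are all made up for illustration): retrieve relevant documents at query time and hand them to the model alongside the question, so the answer is grounded in sources you are licensed to use rather than in text memorized in the weights.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
def retrieve(query, corpus, k=2):
    # Naive relevance score: count of overlapping words. Real systems use
    # embeddings and a vector index instead.
    q = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, passages):
    # Assemble the retrieved passages into the context the model answers from.
    context = "\n---\n".join(passages)
    return (f"Answer using only the sources below.\n\n"
            f"Sources:\n{context}\n\nQuestion: {query}")

corpus = [
    "The lawsuit alleges models can reproduce paywalled articles.",
    "Robots.txt lets sites opt out of crawling.",
    "Unrelated note about sports scores.",
]
prompt = build_prompt("What does the lawsuit allege?",
                      retrieve("lawsuit reproduce articles", corpus))
# `prompt` would then be sent to a model; because the sources are explicit,
# the answer can cite and link the originals.
```

Because retrieval happens outside the model, this design makes it possible to cite sources and honor licensing, which is exactly what a weights-only model cannot do.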
With regard to your second point, it’s worth considering how paywalls will factor in. The Times intends to argue that these models can be used to bypass its paywall, something Google does not do.
Your third point is wrong in very much the same way. These models do not have a built-in reference system under the hood and so cannot point you to the original source. Existing implementations specifically do not attempt to do this (there are of course systems that use LLMs to summarize a query over a dataset, and that’s fine). That is, the models themselves do not explicitly store any information about the original work.
The fundamental distinction between the two is that Google does a basic amount of due diligence to keep their usage within the bounds of what they feel they can argue is fair use. OpenAI so far has largely chosen to ignore that problem.
The Google preview feature bypasses paywalls. Google Books bypasses paywalls. Google was sued and won.
Most likely the Times could win a case on the first point. Worth noting, Google also respects robots.txt, so if the Times wanted, they could revoke access; I imagine that would be considered something of an implicit agreement to its usage. OpenAI famously does not respect robots.txt.
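For reference, this is the kind of check a well-behaved crawler performs before fetching a page, using Python’s standard-library parser. The robots.txt content below is a made-up example (not the Times’ actual file), showing a site that blocks one crawler while allowing everyone else.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: blocks GPTBot entirely, allows all other agents.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A compliant crawler checks can_fetch() before requesting each URL.
print(rp.can_fetch("GPTBot", "https://example.com/article"))     # False
print(rp.can_fetch("Googlebot", "https://example.com/article"))  # True
```

Robots.txt is only a convention, not an access control; the “implicit agreement” argument hinges on crawlers voluntarily honoring exactly this check.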
Google Books previews are allowed primarily on the basis that you can thumb through a book at a physical store without buying it.
If that’s the standard then any NYT article that has been printed is up for grabs because you can read a few pages of a newspaper without paying.
These models can still be trained on data that they’re allowed to use, but I think what we’re seeing is that the better LLM services are probably trained on shocking amounts of private data, whereas the less performant ones probably don’t use stolen data.
Textbooks are a big one that I suspect we’ll see a set of suits over, particularly because they seem to be some of the most valuable training data.
If the companies can profit off stealing work and charging access for it, why can’t I just pirate it myself without making anyone richer?