I believe the debate is not around storing the data - nobody, to my knowledge, blames OpenAI for copy-pasting the internet onto their servers. But they are using data that belongs to everyone to produce a product they sell or intend to sell commercially. Quite a bit more tricky!
Extending the analogy to us humans: in order to learn a language we have to buy a book and read it, so we did pay someone for the knowledge we then sell. Did OpenAI pay everyone for everything they fed to their Skynet? Or maybe they used only “open source” stuff, so now they comply with all the licenses attached - do they?
A lot of people are indeed accusing OpenAI of stealing, claiming that LLMs can reproduce entire original works - a claim rooted in misconceptions about how LLMs work. This is why even OpenAI came out stating that their models simply don’t store the source material. I’ve seen people make that argument here and in other threads, so I’m assuming that’s why it’s written like that in the post.
Did OpenAI pay everyone for everything they fed to their Skynet?
But why should anyone pay to analyze freely available data? Building something new is a whole different process from simply using the data. Like, I don’t see search giants paying anyone to build their indexes, and that’s arguably where their money is. And to OpenAI’s credit, they’re not even selling the data itself; they’re giving the derived data back for free in its entirety. It sounds like a great deal to me!
in order to learn a language we have to buy a book and read it
I’m not sure that’s true. I’m on my third language, and I can confidently say that anyone can learn a language entirely from the mountains of freely available resources. People are chomping at the bit to teach you their language. Likewise, even if I only used open source to learn to code, I wouldn’t need to copy anybody’s license just to analyze their code and figure out how they implemented a feature so that I can build my own. Those are not patented ideas, and that’s arguably what LLMs like ChatGPT do. (But I will say that GitHub Copilot is a little different, because that one does seem to pull from repos directly - I think it pulls from GitHub using Bing.)
in order to learn a language we have to buy a book and read it, so we did pay someone for the knowledge we then sell.
What if an artist got inspiration from a Google image search, without paying the creators for that? I think that’s fine, and I don’t see why it’s suddenly wrong when a machine learning algorithm does it.