• ag_roberston_author@beehaw.org · 58 points · 1 year ago (edited)

    I’m actually surprised by the comments in here. This technology is incredibly disruptive to authors. If they are correct that their intellectual property has been misused by these companies to train LLMs, then they absolutely should have the right to prevent that.

    You can be pro-AI and pro-advancement and still respect creators’ intellectual property rights and the right not to have all content stolen by megacorporations and used to create profits while decimating entire industries.

    • SinJab0n@mujico.org · 15 points · 1 year ago

      Exactly this. This is the equivalent of me taking a movie, putting on a screening, charging for it, and then being displeased when the creators demand an explanation.

      • Even_Adder@lemmy.dbzer0.com · 4 points · 1 year ago

        It’s more like reading a book and then charging people to ask you questions about it.

        AI training isn’t only for mega-corporations. We can already train our own open source models, so we shouldn’t let people put up barriers that will keep out all but the ultra-wealthy.

        • pips@lemmy.film · 10 points · 1 year ago

          But when the answers aren’t original thoughts but regurgitations of other people’s thoughts about the book, then it’s plagiarism. LLMs can’t provide original output, only variations on what people have made available (whether legally or not). The answer might not even be correct or make any sense. It’s just predictive text to a crazy degree.
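
          The “predictive text” point can be sketched with a toy bigram model: count which word follows which in some training text, then generate by always emitting the most frequent continuation. This is a deliberately crude illustration, not how a real LLM works (those use neural networks over tokens), but it shows output assembled purely from statistical patterns in the training data:

```python
from collections import Counter, defaultdict

# Toy "training corpus" standing in for a model's training data.
corpus = "the cat sat on the mat the cat sat on the rug".split()

# Bigram table: how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in training."""
    return follows[word].most_common(1)[0][0]

# Generation is just repeated lookup: no understanding, only patterns.
word, output = "the", ["the"]
for _ in range(4):
    word = predict_next(word)
    output.append(word)

print(" ".join(output))  # → "the cat sat on the"
```

          Everything this generator can ever say is recombined from its corpus, which is the intuition behind the “regurgitation” argument.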

          When you copy someone’s work without attribution, that’s plagiarism. When your output is only possible because of someone else’s work over which they own copyright, and the output replicates the copyrighted material, that’s copyright infringement.

          • Even_Adder@lemmy.dbzer0.com · 1 point · 1 year ago

            LLMs can provide original output, but they can also make errors. You’d have to prove it meets the grounds for plagiarism, and to my knowledge no one’s been able to. It’s all been claims with no substance or merit so far.

            • pips@lemmy.film · 12 points · 1 year ago

              An LLM can’t make something original, it can only make something derivative. But that derivative work isn’t the same as when a human makes a derivative work, because a human isn’t writing each word or phrase based on the likely “correct” next word or phrase through an algorithmic process. What humans do is orders of magnitude more complex, though it can at times also be accidental or intentional plagiarism.

              In short, an LLM’s output is necessarily a string of preexisting human inputs. A human’s output, while it can be informed by and reference other human inputs, can be an original analysis. The AI that is publicly available is not sophisticated enough to be more than fancy predictive text.

              • Even_Adder@lemmy.dbzer0.com · 4 points · 1 year ago

                You’re making a hasty generalization here, namely by making sweeping claims without evidence or examples. Also, you’re begging the question by assuming that humans are more original than LLMs, again without providing any support or justification.

                Take for example this study that found doctors preferred Med-PaLM’s output to human doctors’. If everything is a remix, there’s no reason LLMs can’t meet the minimum criteria for creativity, especially absent any evidence to the contrary.

                • pips@lemmy.film · 9 points · 1 year ago

                  You’re making a hasty generalization here

                  I’m really not, though I’ll readily admit I’m simplifying things. An LLM can only create something it’s been given. I guess it can generate a string of characters and assign a definition to it, but it’s not really intentional creation. There are many similarities between how a human generates something and how an LLM does, but to argue they’re the same radically oversimplifies how humans work. While we can program an LLM, we literally do not have the capability to replicate a human brain.

                  For example, can you tell me what emotions the LLM had when it produced the output it did? Did its physical condition have any effect? What about its past, not just what it has learned but how it was treated? What is its motivation? A human response to anything involving creativity factors in many things that we aren’t even consciously aware of, and these are things an LLM doesn’t have.

                  The study you’re citing is from Google, so there’s likely some bias and selective reporting. That said, we were talking about creativity, not regurgitating facts or analyzing data. I think it’s universally accepted that as the tech gets better, it’s preferable to have a computer make the first attempt at a diagnosis, especially for a scan or large data analysis, and then have a human confirm.

                  For the remix example, don’t forget that samples get attribution. Artists credit what they sampled and get called out when they don’t. I’m actually unclear as to whether an LLM can actually cite how it derived its output, because the coders haven’t revealed whether there’s some sort of derivation log.

        • ag_roberston_author@beehaw.org · 8 points · 1 year ago

          It’s more like reading a book and then charging people to ask you questions about it.

          No, it’s really nothing like reading at all. Your example requires a human element. This is just the consumption of data, not reading.

            • ag_roberston_author@beehaw.org · 8 points · 1 year ago

              I don’t think that it is even remotely close to being the same thing. I’m sorry but we shouldn’t be affording companies the ability to profit off other people’s creations without their consent, regardless of how current copyright law works.

              Acting as though a human writing a summary is the same thing as a vast network of computers processing data hundreds if not thousands of times faster than a human is foolish. Perhaps it is also foolish to try to apply our current copyright laws (which already favour large corporations, not individual creators) to this slew of new technology, but just ignoring the fundamental difference between the two is no way of going about it. We need copyright reform, we need protections for creators, and we need to stop acting as though machine learning algorithms are remotely comparable to humans in their capabilities, responsibilities, and rights.

              There is a perfectly reasonable way of doing this ethically, and that is using content that people have provided to the model of their own volition, with consent either volunteered or paid for, not content scraped from an epub, regardless of whether you bought it or downloaded it from LibGen.

              There are already companies training machine learning models ethically in this manner, and if creators do not want their content used as training data, it should not be.

              • Even_Adder@lemmy.dbzer0.com · 1 point · 1 year ago

                Human writing and LLM output can both be creative, original, informative, or useful, depending on the context and purpose. An LLM is a tool to be used by humans; we are in control of the input and the output. What we say goes, and no one ever has to see LLM output without people making those decisions. Restricting LLMs is restricting the people who use them. Mega-corporations will have their own models, no matter the price. What we say and do here will only affect our ability to catch up and stay competitive.

                You also seem to be making a slippery slope argument, implying that if LLMs are allowed to use copyrighted books as data it will lead to negative consequences for creators and society, without explaining how or why this will happen or providing any evidence. It’s a one-sided look at the issue that ignores the positive outcomes of LLMs, like increasing the accessibility, diversity, and quality of literature and thought, as well as inspiring new forms of expression and creativity.

                Finally, you seem to be making a moralistic fallacy. You claim that there is a perfectly reasonable way of doing this ethically, by using content that people have provided. However, you don’t support this claim, or address its challenges. How would you ensure that the content providers are the original authors or have the rights to the content? How would you compensate them for their contribution? Is this a good way to get content that is diverse and representative of different perspectives and cultures? What about bias or manipulation in the data collection and processing?

                I don’t think we need any more expansions to copyright, but a better understanding of LLMs’ capabilities and responsibilities. I think we need to be open-minded and critical about the potential and challenges of LLMs, but also be on guard against fallacious arguments or emotional appeals.

        • gus@beehaw.org · 7 points · 1 year ago

          No, it’s more like checking out every book from the library, and spending 450 years training at the speed of light, being evaluated on how well you can exactly reproduce the next part of any snippet taken from any book.

        • SinJab0n@mujico.org · 1 point · 1 year ago (edited)

          Nah, false.

          If you, as a PERSON, an individual not looking to make a profit, did it, then yes, it would be absurd.

          But here is a corporation trying to do exactly the same thing they have been doing with open source projects: making a real paywall out of other people’s work (Red Hat, cough cough).

      • Double_A@discuss.tchncs.de · 2 points · 1 year ago (edited)

        It’s more like buying a book, studying everything in it, then charging people for tutoring them with the knowledge you got from the book.

        But now a machine is doing it, with all the books it can find…

    • dan@upvote.au · 9 points · 1 year ago (edited)

      One of the largest communities on Lemmy is !piracy@lemmy.dbzer0.com, so I’m not really surprised that there are people who don’t care about copyright :)

      On the other hand, if a human is allowed to write a summary of a book, why should an AI not be allowed to do the same thing? Are they going to sue CliffsNotes too?

      • fulano@lemmy.eco.br · 7 points · 1 year ago

        Hold on, piracy isn’t necessarily not caring about copyright, but can be (and is, in a lot of cases), about fighting against the big corporations who take advantage of historically abusive copyright laws to dominate the market and prevent small authors and companies from surviving.

        These AI companies, despite being copyright violators, are much closer to the big IP monopolists than the small authors, which are victims of both groups.

        • dan@upvote.au · 4 points · 1 year ago

          about fighting against the big corporations who take advantage of historically abusive copyright laws to dominate the market and prevent small authors and companies from surviving.

          If people were really that principled, they’d totally boycott the big corporations and only consume media from the small authors and companies.

          • fulano@lemmy.eco.br · 3 points · 1 year ago

            You made a great point. This is exactly my issue with piracy. I believe it’s a movement in the wrong direction, because it actually benefits the big media in the end.

      • Chahk@beehaw.org · 4 points · 1 year ago (edited)

        if a human is allowed to write a summary of a book, why should an AI not be allowed to do the same thing?

        Said human presumably would have to purchase or borrow a book in order to read it, which earns the author some percentage of the profits. If giant corps want to use the books to train their LLMs, it’s only fair that they’d have to negotiate with the publishers much like libraries do.

        • dan@upvote.au · 2 points · 1 year ago

          Said human presumably would have to purchase or borrow a book in order to read it

          Borrowing a book from a library doesn’t earn the author any more profit each time it’s lent out, I don’t think. My local library just buys books off Amazon.

          What if I read the CliffsNotes and make my own summary based on that? What if I read someone else’s summary and reword it? I think that’s more like what ChatGPT is doing. I really don’t think it’s being fed entire copyrighted books as training data. There’s no actual proof LibGen or ZLib is being used to train it.

          • jursed@beehaw.org · 6 points · 1 year ago

            Authors do get money from libraries that buy the books. And in some places they even get money depending on how much it’s checked out.

      • ag_roberston_author@beehaw.org · 2 points · 1 year ago

        My main point is that if people don’t want their content used for training LLMs they should absolutely have the option to not have their content used to train LLMs.

        Training databases should be ethically sourced from opt-in programs, as some companies, such as Adobe, are already doing.

        • dan@upvote.au · 2 points · 1 year ago

          My main point is that if people don’t want their content used for training LLMs they should absolutely have the option to not have their content used to train LLMs.

          How can one prove that their content is being used to train the LLM though, rather than something that’s derivative of their content like reviews of it?

          • Storksforlegs@beehaw.org · 5 points · 1 year ago

            There is already lots of evidence that they have scraped copyrighted art and photographs for their datasets.

          • ag_roberston_author@beehaw.org · 2 points · 1 year ago

            Well, the company has the training data, so I would imagine that will be part of discovery phase of the lawsuit.

            It will be a very quick case if OpenAI provides their training data and there is no data from Libgen and Z-library included in it.

    • FIash Mob #5678@beehaw.org · 4 points · 1 year ago

      Eventually the bad actors are going to lose a lot of money trying to litigate their theft of people’s art. It was always going to end up in the legal system. These apps are even programmed to scrub watermarks and signatures. It’s deliberate theft.

    • Safi Scarlett@sffa.community · 3 points · 1 year ago (edited)

      I agree. This technology doesn’t exist in a vacuum. This isn’t some utopia where a Human artist can just solely focus on creating their art and not worry about financial gain because their survival needs are always guaranteed to be met or whatever.

    • Quokka@quokk.au · 2 points · 1 year ago

      I’m pro AI and advancement, and anti-IP.

      I hope to see AI disrupt our capitalistic value of ownership further.

    • HughJanus@lemmy.ml · 14 points · 1 year ago

      This is what I never understood about the whole training on AI thing.

      When a human creates an artwork, they don’t do it out of a vacuum. They’ve had a lifetime of inspiration from artwork they’ve discovered that inspires them to create something wholly new. AI does the same thing.

      • luciole (he/him)@beehaw.org · 24 points · 1 year ago

        The AIs we are talking about are large language models. They take human work as input and produce facsimiles. They are owned by individuals or companies that have no permission to exploit, in this way, intellectual property tied to other people’s livelihoods in order to copy it.

        LLMs are not sentient, they don’t have inspiration, they are not creative and therefore do not create in the sense an artist would. They are an elaborate mathematical equation.

        “Training” an AI has nothing to do with training an actual living being. It’s just tuning: adjusting an algorithm incrementally until the operator is satisfied with the result. I think it’s defensible to equate this form of extraction with plagiarism.

        • i_am_not_a_robot@discuss.tchncs.de · 5 points · 1 year ago

          Most likely, if you ask ChatGPT to summarize a famous book, it does not need to have ever trained on the book itself. The easiest way for an LLM to create a summary of something is to base its summary off existing summaries created by humans. If it’s ruled in court that ChatGPT is infringing on the copyright of a book’s author only by repeating information it acquired from other summaries created by humans, what implications does that have for the humans who wrote the other summaries?

        • SinAdjetivos@beehaw.org · 1 point · 1 year ago

          I partially agree with you, but I think you’re missing the end goal of Facebook et al.

          As HughJanus pointed out, it’s not really any different from a person reading a book, and by that reasoning using copyrighted material to train models like these falls well within the existing framework of “fair use”.

          However, that depends entirely on “the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes.” I agree completely with you that OpenAI’s products/business (the most blatant violator) does easily violate “fair use” due to that clause. However they’re doing it, at least partially, to “force the issue” on the open question of “how much can public information be privatized?” with the goal of further privatizing and increasing commercial applications of raw data.

          As you pointed out LLMs can only create facsimiles and not the original work, and by that logic they can’t exactly replicate the inputs either.

          No I don’t think artists can claim that they own any and all “cheap facsimiles” of their works, but by that same reasoning none of these models produced should be allowed to be the enforceable property of any individual/company either.

          For further reading check out:

          • Kelly v. Arriba Soft Corporation on why “thumbnails” (and by extension LLMs, “eigen-images”, etc.) are inherently transformative and constitute fair use.
          • Bridgeport Music, Inc. v. Dimension Films for the negative impacts that ruling has had, and how it still doesn’t protect artists from their work being used to train an LLM.
          • “Variational auto-encoders” for understanding on how the latest LLMs actually do achieve a significant amount of “originality” and I would argue are able to be minimally creative.
      • Dominic@beehaw.org · 4 points · 1 year ago

        AIs are trained for the equivalent of thousands of human lifetimes (if not more). There’s no precedent for anything like this.

    • SinJab0n@mujico.org · 7 points · 1 year ago

      Dude, tell me, why do you think they have been doing this only with books and art but not music?

      That’s because music really has people protecting their assets. You can have your opinion about it, but that’s the only reason they haven’t ABUSED companies’ and people’s work in music.

      It’s not reading, it’s the equivalent of me taking a movie, putting on a screening, charging for it, and then being displeased when the creators demand an explanation.

      • Dominic@beehaw.org · 4 points · 1 year ago (edited)

        There are a few reasons why music models haven’t exploded the way that large-language models and generative image models have. Maybe the strength of the copyright-holders is part of it, but I think that the technical issues are a bigger obstacle right now.

        • Generative models are extremely data-inefficient. The Internet is loaded with text and images, but there isn’t as much music.

        • Language and vision are the two problems that machine learning researchers have been obsessed with for decades. They built up “good” datasets for these problems and “good” benchmarks for models. They also did a lot of work on figuring out how to encode these types of data to make them easier for machine learning models. (I’m particularly thinking of all of the research done on word embeddings, which are still pivotal to large language models.)

        Even still, there are fairly impressive models for generative music.
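
        The word-embedding idea mentioned above can be sketched with toy numbers. These three-dimensional vectors are invented for illustration (real embeddings are learned, with hundreds of dimensions): words used in similar contexts get nearby vectors, and similarity is the cosine of the angle between them:

```python
import math

# Hypothetical embeddings, hand-picked so the two instruments
# point in roughly the same direction and the fruit does not.
embeddings = {
    "guitar": [0.9, 0.1, 0.2],
    "violin": [0.8, 0.2, 0.3],
    "banana": [0.1, 0.9, 0.1],
}

def cosine(a, b):
    """Cosine similarity: 1.0 = same direction, near 0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

print(cosine(embeddings["guitar"], embeddings["violin"]))  # high (~0.98)
print(cosine(embeddings["guitar"], embeddings["banana"]))  # low (~0.24)
```

        Encoding words this way is what let language models generalize across related words long before large datasets and benchmarks existed for other domains like music.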

  • RotaryKeyboard@lemmy.ninja · 21 points · 1 year ago

    OP, I just wanted to say thank you for writing such a good title. It’s rare to get such an informative, clickbait-free title these days.

  • nothacking@discuss.tchncs.de · 18 points · 1 year ago (edited)

    if a user prompts ChatGPT to summarize a copyrighted book, it will do so.

    So will a human. Let’s stop extending copyright law. Also, how do you know it read the book, and not a summary of it, of which there are loads on the internet?

    • SpaceToast@mander.xyz · 12 points · 1 year ago

      This is why I am pro AI art. It’s no different than a human taking inspiration from other work.

      Nobody comes up with anything truly original. It’s all inspired by someone before them.

    • Dominic@beehaw.org · 6 points · 1 year ago

      Also, how you know it read the book, and not a summary of it, of which there are loads on the internet?

      In the case of ChatGPT, it’s hard to tell. OpenAI won’t even reveal what their training dataset was.

      Researchers have done some tests to tease this out, and they’re pretty confident that it has read quite a few books and memorized them verbatim. See one of my favorite papers in a while, Speak, Memory: An Archaeology of Books Known to ChatGPT/GPT-4.

    • Fauxreigner@beehaw.org · 3 points · 1 year ago

      Beyond that, it’ll try to summarize a book, but it often can’t do so successfully, although it will act like it has. Give it a try on something that is even a little bit obscure and it can’t really give you good information. I tried with Blindsight, which is not something that’s in the popular culture, but also a Hugo nominee, so not completely obscure. It knew who the characters were, and had a general sense of the tone, but it completely fabricated every major plot point that I asked about. Did the same with A Head Full of Ghosts, which is more well known but still not something everyone has read, and it did the same thing.

      One thing I found that’s really fun is to ask it a question and then follow up with something like “Are you sure about that?” It’ll almost always correct itself and make up something else. It’ll go one step further and incorporate details you ask about. Give it a prompt like “Are you sure this character died of natural causes? I thought they were killed by Bob” and it will very frequently say you’re right and make up a story along those lines that’s plausible within the text. It doesn’t work on really popular stuff; you can’t convince it that Optimus Prime saves Luke Skywalker in RotJ, but anything even a little less well known and it’ll tell you details with complete confidence that it’s making up whole cloth.

      • nothacking@discuss.tchncs.de · 2 points · 1 year ago

        Another highly amusing thing to do is to ask it about nonexistent chemicals or antenna types (try “inverted tripole” or “dinitrogen azide”). It always generates plausible but incorrect answers (eloquent bullshit).

  • Sibbo@sopuli.xyz · 17 points · 1 year ago

    Now that’s interesting. I really have been waiting for something like this. I wonder whether the LLM companies will now actually have to explain where their models get the detailed information about the book, or whether they can get away with claiming that they have no idea how their own system works.

  • Storksforlegs@beehaw.org · 17 points · 1 year ago (edited)

    People keep taking issue with this article’s use of “summarizing” and linking to Wikipedia… Summaries of copyrighted work are obviously not illegal.

    This article is oversimplified and does a crummy job of explaining the problem. Ars Technica does a much better job explaining.

    The fact that the AI can summarize these works in detail is proof that they were trained using copyrighted material without permission (which is not fair use). Sarah Silverman is obviously not going to be hurt financially by this, but there are hundreds of thousands of authors who definitely will be affected. They have every right to sue.

    • P03 Locke@lemmy.dbzer0.com · 9 points · 1 year ago

      Why does “fair use” even fall into it? I’m not familiar with their specific license, but the general definition of copyright is:

      A copyright is a type of intellectual property that gives its owner the exclusive right to copy, distribute, adapt, display, and perform a creative work, usually for a limited time.

      Nothing was copied, or distributed (in a form that anybody can consider “The Work”), or displayed, or performed. The only possible legal argument they have is adapting as a derivative work. And anybody who is familiar with how an LLM works knows that the form that results from reading in content is completely different from the source.

      LLMs/LDMs are not taking in billions of books and putting them into a database. It is a very lossy process. Trained on billions of images, the resulting Stable Diffusion model is only 4 GB. There is no universe where you can store billions of images in a mere 4 GB. Stable Diffusion cannot and will not, pixel by pixel, reproduce a Van Gogh. It can make something that kind of looks like a Van Gogh, but styles are not copyrightable.
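
      That lossiness is easy to sanity-check with back-of-the-envelope arithmetic. Assuming the commonly cited figure of roughly 2.3 billion training images (LAION-2B) and a roughly 4 GB checkpoint (both approximate), the model simply has no room to memorize its inputs:

```python
# Rough storage budget per training image if the model were a "database".
model_size_bytes = 4 * 1024**3           # ~4 GB checkpoint (approximate)
num_training_images = 2_300_000_000      # ~2.3 billion images (approximate)

bytes_per_image = model_size_bytes / num_training_images
print(f"{bytes_per_image:.2f} bytes per image")  # well under 2 bytes each
```

      Even a heavily compressed thumbnail takes thousands of bytes, so whatever the model stores, it cannot be the images themselves.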

      The same applies to an LLM like ChatGPT. It cannot reproduce entire books, or anywhere close to that. If you ask it to recreate Page 25 of Silverman’s book, it can’t do it. If it doesn’t even contain a minor portion of the original material, it can’t even be considered a derivative work.

      They don’t have a case. They have a lot of publicity and noise, but they will lose to inevitability.

      • Storksforlegs@beehaw.org · 7 points · 1 year ago

        You make a lot of excellent points, but I think the main issue of contention is simply the use of copyrighted work to train generative AI without the author’s permission.

        If they did ask permission, there would be no problem. But an author or artist should be given the choice if their work is going to be used to train an AI.

        That’s the contention. Not necessarily that their work was reproduced exactly using AI, but that the works were taken despite copyright law.

        I tend to think copyright law is off the rails a lot of the time, don’t get me wrong. Big corporations and trolls make a mockery of it, but I think it still matters to individual artists quite a bit. Creators and their work need legal protections.

        • Saik0@lemmy.saik0.com · 4 points · 1 year ago

          You make a lot of excellent points, but I think the main issue of contention is simply the use of copyrighted work to train generative AI without the author’s permission.

          If I read a book at the library… and come up with an amazing revolutionary product, then make a company and go on to make billions of dollars per year, the original book author has no claim to my income.

          There’s no contention. This is just a money grab. Copyright doesn’t disallow people from consuming the content as they please. It simply disallows passing off the original work as your own when it isn’t.

          • Storksforlegs@beehaw.org · 5 points · 1 year ago

            Well yeah, art is made to be consumed by people. And all art is inspired by other art. People write sci-fi books after reading other sci-fi books, etc. That’s not the issue here.

            The issue is artists should be able to opt out of having their work taken and fed into a big project they have no control over.

            • Saik0@lemmy.saik0.com · 3 points · 1 year ago

              Hard disagree. If my “company” from the previous post is a company that simply writes crib notes and reviews of books… You can’t stop me from doing that either. Don’t see people chomping the bit to take down other sites that have been doing this for decades.

              • ag_roberston_author@beehaw.org
                link
                fedilink
                English
                arrow-up
                8
                ·
                1 year ago

                Don’t see people chomping the bit to take down other sites that have been doing this for decades.

                But this hasn’t been happening for decades. Machine learning algorithms are an incredibly new way of processing data. All those scenarios you are talking about required a human to be the one doing the reading and summarising, which for most authors is fine; they expect people to read their work and summarise it, or quote it.

                What they don’t expect is for that work to be fed in full into a private company’s data set to train a machine how to duplicate their content at speeds completely incomparable to human capabilities. We’re talking about something completely new and completely unseen, and you’re disregarding the rights of those creators to not want their art, music or writing to be fed into the endless churn of data for these megacorporations.

                Also, it’s champing at the bit, not chomping.

                • Saik0@lemmy.saik0.com
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  ·
                  1 year ago

                  and you’re disregarding the rights of those creators to not want their art, music or writing to be fed into the endless churn of data for these megacorporations.

                  … I don’t see the authors having any rights at all once the work is published and sold. That’s the point of SELLING the book. It’s letting people do with it what they please. That’s called “ownership”. If I want to buy every copy of your book that I can get my hands on in a store and set it on fire… you have no say in it, no matter what. I purchased the book. That’s it. If I’m literally a Nazi reading the Diary of Anne Frank, nobody gets to tell me that I’m not allowed to check the book out of the library. Your “rights” to the copyright of the book are irrelevant to my rights of ownership of the book, or the library’s rights to loan the book out to whomever.

                  Also, it’s champing at the bit, not chomping.

                  Really don’t care about grammar nazi-ing… and tell that to my phone’s autocomplete.

            • confusedbytheBasics@beehaw.org
              link
              fedilink
              English
              arrow-up
              2
              ·
              edit-2
              1 year ago

              The issue is artists should be able to opt out of having their work taken and fed into a big project they have no control over.

              So, in your opinion, should a university have to ask each author’s permission, one by one, before using their work as a reference for each study run there?

              • Storksforlegs@beehaw.org
                link
                fedilink
                English
                arrow-up
                2
                ·
                edit-2
                1 year ago

                There is already a well-established practice of getting permission in academic settings for reprinting written work, journal articles, etc. And all published authors and academics understand that their work will be read, maybe used in an academic setting, summarized, debated, discussed, quoted, etc. Getting permission is definitely a thing in academia.

                • confusedbytheBasics@beehaw.org
                  link
                  fedilink
                  English
                  arrow-up
                  0
                  ·
                  1 year ago

                  Sure, permission needs to be sought for reprinting. That’s not what we are discussing, though. I will just take your word on that second part, because as far as I know none of my professors asked an author’s permission before telling the class to read anything.

        • P03 Locke@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          1
          ·
          1 year ago

          I think the main issue of contention is just using copyrighted work to train generative AI without the author’s permission regardless.

          You must define that in legal terms. This is a lawsuit, after all. It’s not illegal to “just use” copyrighted work. The words “generative AI” are not in a federal or state bill anywhere in the US.

          They can have an “issue of contention” all they want, but if they can’t prove anything legally, they have nothing.

  • world_hopper@lemmy.ml
    link
    fedilink
    English
    arrow-up
    12
    ·
    1 year ago

    A lot of these comments are missing a large point which is that, if the claim is true, the books are being pirated and then effectively used for a commercial application.

    So the authors are losing money through this process and did not give their permission for their work to be used in a commercial way.

    The decision of this case will be wildly important for the development of AI.

    • monobot@lemmy.ml
      link
      fedilink
      English
      arrow-up
      1
      ·
      1 year ago

      If they have access to some library with those books, they’re fine.

      I doubt they just used pirated books to train their AI and then published it without a non-pirated paper trail; it’s not that hard.

      But let’s see.

      The only problem here is how they accessed the books, since they don’t share copyrighted material with others. But I don’t think anyone should be held guilty for reading a book, so I hold the same stance for AI.

      If you don’t want people to read your book, just don’t publish it.

      • world_hopper@lemmy.ml
        link
        fedilink
        English
        arrow-up
        4
        ·
        1 year ago

        Do you know what Library Genesis and Z-Library are? They are literally libraries of pirated materials.

        And yeah, they can read the book, but they shouldn’t be able to use its contents in a commercial way (e.g. to make money) without the permission of the writer/copyright holder.

  • CreativeTensors@beehaw.org
    link
    fedilink
    English
    arrow-up
    7
    ·
    1 year ago

    My pie-in-the-sky hope is that copyright somehow becomes less stringent after all of this.

    Don’t get me wrong, I want protections for creators and support reasonable copyright (life of the author + 25 years, with the possibility of a 15-year extension), but letting a company lord over an IP for damn near a century isn’t ideal for anyone.

    • EvilColeslaw@beehaw.org
      link
      fedilink
      English
      arrow-up
      9
      ·
      1 year ago

      The major scenario that I at least hope holds true out of this is that the AI “creations” aren’t eligible for copyright themselves. If the powers that be allow all this AI created stuff copyright protection it’s going to be a gigantic mess.

      • CreativeTensors@beehaw.org
        link
        fedilink
        English
        arrow-up
        3
        ·
        1 year ago

        For pure “prompt → image” with nothing in between, I absolutely agree. It’s lazy and ripe for abuse by copyright trolls. That being said, there’s a lot more in the world of AI-assisted art than most people are aware of.

        Determining where the legal lines will be drawn is going to be a monumental task, but I think there’s value in allowing authors to retain copyright on AI-assisted works. I also can’t see the free open-source models not going the way of restricting training data to public-domain works, like Adobe did with Firefly, if that becomes a legal issue.

  • Ganbat@lemmyonline.com
    link
    fedilink
    English
    arrow-up
    6
    ·
    1 year ago

    If they’re being trained via Library Genesis and Z-Library, shouldn’t those be the target of the suit for enabling/allowing that?

  • Sigma@lemmy.ml
    link
    fedilink
    English
    arrow-up
    6
    ·
    1 year ago

    I guess she found a way to make money on a book nobody is buying after all.

    • middlemuddle@beehaw.org
      link
      fedilink
      English
      arrow-up
      3
      ·
      1 year ago

      They made a musical out of it so I’m sure it sold just fine. The pointless disparaging based on no facts isn’t very useful to this topic.

  • Moonrise2473@feddit.it
    link
    fedilink
    English
    arrow-up
    4
    ·
    1 year ago

    It seems very improbable that they scraped a pirate website with forced registration and tight daily download limits (10 books a day max?) to get content that’s often mislabeled and not presented in a homogeneous way.

    It’s probably just using the excerpt from Amazon (which is much easier to access with a paid API) as a prompt and building on it.

    • luciole (he/him)@beehaw.org
      link
      fedilink
      English
      arrow-up
      8
      ·
      1 year ago

      There have been ongoing suspicions that pirated content was used to train popular LLMs, simply because popular datasets used for training LLMs do include such content. The Washington Post did an article about it.

      Google’s C4 dataset, used for research, included illegal websites. What remains to be seen is whether it was cleaned up before training Bard as we know it today. OpenAI has revealed nothing about its dataset.