• BiNonBi · 1 year ago

    NPR reported that a “top concern” is that ChatGPT could use The Times’ content to become a “competitor” by “creating text that answers questions based on the original reporting and writing of the paper’s staff.”

    That’s something that can currently be done by a human and is generally considered fair use. All a language model really does is drive the cost of doing that from tens or hundreds of dollars down to pennies.

    To defend its AI training models, OpenAI would likely have to claim “fair use” of all the web content the company sucked up to train tools like ChatGPT. In the potential New York Times case, that would mean proving that copying the Times’ content to craft ChatGPT responses would not compete with the Times.

    A fair use defense does not have to include noncompetition. That’s just one factor in a fair use analysis, and the other factors may be enough on their own.

    I think it’ll come down to how “the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes” and “the amount and substantiality of the portion used in relation to the copyrighted work as a whole” are interpreted by the courts. Do we judge a language model by the model itself or by its output? Can a model be non-infringing in itself and still be capable of producing infringing content?

    • fuzzywolf23@beehaw.org · 1 year ago

      The model is intended for commercial use, uses the entire work, and creates derivative works based on it that are in direct competition.

      • BiNonBi · 1 year ago

        You are kind of hitting on one of the issues I see. The model and the works created by the model may be considered two separate things. The model may not be infringing in and of itself: it’s not substantially similar to any of the individual training data, and I don’t think anyone can point to part of it and say “this is a copy of a given work.” But the model may still be able to create works that are infringing.

      • TehPers@beehaw.org · 1 year ago

        uses the entire work

        This may not actually be true, though. If it’s a Q&A interface, it’s very unlikely they are training the model on the entire work (since model training is extremely expensive and done extremely infrequently). Now sure, maybe they actually are training on NYT articles, but a similarly powerful LLM could exist without training on those articles and still answer questions about them.

        Suppose you wanted to make your own Bing Chat. If you tried to answer the questions entirely based on what the model is trained on, you’d get crap results because the model may not have been trained on any new data in over 2 years. More likely, you’re using retrieval-augmented generation (RAG) to select portions of articles, generally the ones you got from your search results, to provide as context to your LLM.
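
        For a rough idea of what that looks like, here’s a minimal sketch of a RAG-style Q&A flow. The corpus, the keyword-overlap scoring, and the call_llm stub are all hypothetical stand-ins; a real system would use embedding search and an actual LLM API:

        ```python
        # Minimal RAG sketch: the model is NOT trained on the articles; short
        # excerpts are retrieved at query time and pasted into the prompt.
        # Everything below is a hypothetical stand-in, not any vendor's API.

        corpus = {
            "article-1": "City council approves new transit budget after long debate ...",
            "article-2": "Researchers report progress on battery recycling methods ...",
        }

        def retrieve(query: str, k: int = 2) -> list[str]:
            """Rank articles by naive keyword overlap; return top-k excerpts."""
            q_words = set(query.lower().split())
            ranked = sorted(
                corpus.values(),
                key=lambda text: len(q_words & set(text.lower().split())),
                reverse=True,
            )
            # Only short excerpts are forwarded, never the full archive.
            return [text[:300] for text in ranked[:k]]

        def build_prompt(query: str) -> str:
            """Assemble the context-plus-question prompt sent to the model."""
            context = "\n---\n".join(retrieve(query))
            return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

        def call_llm(prompt: str) -> str:
            """Stub for the generation step; a real system calls a model here."""
            return "(model-generated answer grounded in the excerpts above)"

        print(call_llm(build_prompt("What did the city council approve?")))
        ```

        The point is that the article text never enters the model’s weights; only short excerpts enter the prompt at query time, which is why the “uses the entire work” framing may not apply.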

        Also, the argument that these are derivative works seems a bit iffy. Derivative works use substantial portions of the original work, but generally speaking a Q&A interface like this would be purely generative. With certain carefully crafted prompts, it may be able to reproduce portions of the original work, but assuming they’re using RAG, it’s extremely unlikely they would generate the exact same content that’s in the article, because they wouldn’t be using the entirety of the article for generation anyway.

        How is this any different from a person scanning an article and writing their own summary based on what they read? Is doing so a violation of copyright, and if so, aren’t news outlets especially notorious for doing this (writing articles based on the articles put out by other news outlets)?

        Edit: I should probably also add that search engines have been indexing and training models on the content they crawl for years, and that never seemed to prompt copyright complaints. It’s interesting to me that it’s suddenly a problem now.

    • ag_roberston_author@beehaw.org · 1 year ago

      That’s something that can currently be done by a human and is generally considered fair use.

      That’s kind of the point though, isn’t it? Fair use is only fair use because it’s a human doing it, not an algorithm.

      • BiNonBi · 1 year ago

        That is not actually one of the criteria for fair use in the US right now. Maybe that’ll change, but it’ll take a court case or legislation to do so.

    • SenorBolsa@beehaw.org · 1 year ago

      I think there’s a good case that it’s entirely transformative; it doesn’t just spit out NYT articles. Saying they “stole IP” from the NYT doesn’t really hold up, because that would mean anyone who read the NYT and then wrote any kind of article also engaged in IP theft, since their consumption of the NYT almost certainly influenced their writing in some way. (I think the same argument holds, to a weaker degree, for generative image AI. That case seems a bit different, since image models sometimes directly copy the actual brushstrokes of real artists; with text, there are only so many ways to arrange words.)

      It is, however, an entirely new thing, so for now it’s up to judges to rule on how it works.

      • AnarchoYeasty@beehaw.org · 1 year ago

        I have it on good authority that the writers of the NYT have also read other newspapers before. This blatant IP theft goes deeper than we could have ever imagined.