In addition to the possible business threat, forcing OpenAI to identify its use of copyrighted data would expose the company to potential lawsuits. Generative AI systems like ChatGPT and DALL-E are trained on large amounts of data scraped from the web, much of it copyright protected. When companies disclose these data sources, they leave themselves open to legal challenges. OpenAI rival Stability AI, for example, is currently being sued by stock image maker Getty Images for using its copyrighted data to train its AI image generator.

Aaaaaand there it is. They don’t want to admit how much copyrighted material they’ve been using.

  • Chozo@kbin.social · 6 points · 1 year ago

    That’s still not how LLMs work. I can’t believe how many of the people who are upset with them don’t understand this.

    The LLM has no idea what it’s reading. None. It’s just playing a word association game, but at a scale we can’t comprehend. It knows which arrangements of words go together, but it’s not reproducing anything with any actual intent. Getting it to output anything that resembles a single piece of material it was trained on would require incredibly specific prompts, and at that point it’s not really the LLM’s doing anymore.
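
    To illustrate the “word association” point with a deliberately tiny toy (this is my own sketch, not how any real model is implemented; actual LLMs learn vastly richer statistics over tokens):

    ```python
    from collections import Counter, defaultdict

    # Count which word tends to follow which in a tiny "corpus",
    # then always predict the most common follower.
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    follow_counts = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        follow_counts[current][nxt] += 1

    def predict_next(word):
        """Return the follower of `word` seen most often in the corpus."""
        followers = follow_counts.get(word)
        return followers.most_common(1)[0][0] if followers else None

    print(predict_next("sat"))  # "on" -- pure association, no understanding
    ```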

    There are plenty of reasons to be against AI: the massive data scraping that goes into training models, the possible privacy invasions that come with that, academic cheating, and so on. But being mad at AI over copyright infringement only shows a lack of understanding of what these systems actually do.

    • magic_lobster_party@kbin.social · 5 points · 1 year ago

      The training process of LLMs is to copy the source material word for word. It’s effectively instructed to plagiarize during training. The copyrighted material is possibly, in one way or another, embedded in the model itself.

      In machine learning, there’s always the concern of whether the model is actually learning patterns or just memorizing the training data. The same applies to LLMs.

      Can LLMs recite entire pieces of work? Who knows?

      Does it count as copyright infringement if it does so? Possibly.

      • ReCursing@kbin.social · 7 points · 1 year ago

        > The training process of LLMs is to copy the source material word for word. It’s effectively instructed to plagiarize during training. The copyrighted material is possibly, in one way or another, embedded in the model itself.

        No it isn’t. That’s not how neural networks work, like, at all.

        > In machine learning, there’s always the concern of whether the model is actually learning patterns or just memorizing the training data. The same applies to LLMs.

        It’s learning patterns. It’s not memorising training data. Again, that’s not how the system works at all.

        > Can LLMs recite entire pieces of work? Who knows?

        No. No they can’t.

        > Does it count as copyright infringement if it does so? Possibly.

        That’d be one for the lawyers if it ever came up, but it won’t.

        • magic_lobster_party@kbin.social · 2 points · edited · 1 year ago

          Here’s a basic description of how (a part of) LLMs work: https://huggingface.co/learn/nlp-course/chapter1/6

          LLMs generate text word by word (or token by token, if you’re being pedantic). This is why ChatGPT slowly produces its response word by word instead of giving you the entire thing at once.
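
          For example, here’s a minimal sketch of that token-by-token loop, assuming the Hugging Face transformers library and the small public gpt2 checkpoint (any causal LM would do):

          ```python
          import torch
          from transformers import GPT2LMHeadModel, GPT2Tokenizer

          tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
          model = GPT2LMHeadModel.from_pretrained("gpt2")
          model.eval()

          ids = tokenizer.encode("The quick brown fox", return_tensors="pt")
          for _ in range(10):
              with torch.no_grad():
                  logits = model(ids).logits       # a score for every token in the vocabulary
              next_id = logits[0, -1].argmax()     # greedily pick the most likely next token
              ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

          print(tokenizer.decode(ids[0]))          # the prompt plus ten generated tokens
          ```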

          The same applies during the training phase. The model gets a piece of text and the word it’s supposed to predict next, and it’s tuned to improve its chances of predicting the right word given that text.
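
          The training objective is the mirror image of that loop. A rough sketch of a single training step, again assuming gpt2 via transformers (which shifts the labels internally so each position predicts the next token):

          ```python
          from transformers import GPT2LMHeadModel, GPT2Tokenizer

          tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
          model = GPT2LMHeadModel.from_pretrained("gpt2")

          batch = tokenizer("the cat sat on the mat", return_tensors="pt")
          out = model(**batch, labels=batch["input_ids"])  # cross-entropy on every next-token prediction
          out.loss.backward()                              # gradients nudge the model toward the right next words
          print(float(out.loss))
          ```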

          Ideally it’s supposed to make these predictions by learning the patterns of the language, but that’s not always what happens. Sometimes it just memorizes the answer instead of learning why (much like a child can memorize the multiplication table without understanding multiplication). This is formally known as overfitting, a machine learning 101 concept.

          There are ways to mitigate overfitting, but there’s no silver bullet. Sometimes the model can’t help but memorize parts of the training data.
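
          Overfitting is easiest to see in a toy setting that has nothing to do with language. In this sketch (plain NumPy, my own illustration, not anything LLM-specific), a high-degree polynomial drives the training error to almost zero by memorizing the noise, while the error on held-out points gets worse:

          ```python
          import numpy as np

          rng = np.random.default_rng(0)
          x_train = np.linspace(0, 1, 10)
          y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, x_train.size)
          x_test = np.linspace(0, 1, 100)
          y_test = np.sin(2 * np.pi * x_test)

          for degree in (3, 9):
              coeffs = np.polyfit(x_train, y_train, degree)  # fit a polynomial to 10 noisy points
              train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
              test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
              print(f"degree {degree}: train MSE {train_err:.4f}, held-out MSE {test_err:.4f}")
          ```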

          When GitHub Copilot was new, people quickly figured out it could generate the fast inverse square root implementation from Quake. Word for word, including the “what the fuck” comment. It had memorized it completely.
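
          You can probe for this kind of memorization yourself: feed a model the opening of a text it has probably seen many times in training and check whether the continuation comes back verbatim. A rough sketch with gpt2 and transformers (the prefix and the check here are just illustrative, not a rigorous test):

          ```python
          from transformers import pipeline

          generator = pipeline("text-generation", model="gpt2")

          prefix = "We hold these truths to be self-evident, that all men are"
          result = generator(prefix, max_new_tokens=10, do_sample=False)[0]["generated_text"]

          print(result)
          # If the continuation matches the famous original word for word,
          # the model has (at least partly) memorized it.
          print("verbatim?", "created equal" in result[len(prefix):])
          ```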

          I’m not sure how much OpenAI has done to mitigate this issue. But it’s a thing that can happen. It’s not imaginary.