Apparently, stealing other people’s work to create a product for money is now “fair use”, according to OpenAI, because they are “innovating” (stealing). Yeah. Move fast and break things, huh?

“Because copyright today covers virtually every sort of human expression—including blogposts, photographs, forum posts, scraps of software code, and government documents—it would be impossible to train today’s leading AI models without using copyrighted materials,” wrote OpenAI in the House of Lords submission.

OpenAI claimed that the authors in that lawsuit “misconceive[d] the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence.”

  • Even_Adder@lemmy.dbzer0.com · 4 points · 10 months ago

    When people say that the “model is learning from its training data”, it means just that, not that it is human, and not that it learns exactly as humans do. It doesn’t make sense to judge boats on how well they simulate human swimming patterns, only on how well they perform their task.

    Every human has the benefit of having trained, as a baby, on the things around them and of being taught by those around them, building a foundation for all later skills. Generative models rely on many text-and-image pairs to describe things to them because they lack the ability to poke, prod, rotate, and disassemble things for themselves.

    For example, when a model takes in a thousand images of circles, it doesn’t “learn” a thousand circles. It learns what a circle GENERALLY is like, the concept of it. That learned representation, combined with random noise, is how you create images with such models. The same happens for every concept the model trains on, everything from “cat” to more complex things like color relationships, reflections, and lighting. Machines are not human, but they can learn despite that.
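
    To make that concrete, here is a toy sketch in plain numpy. It is purely an analogy under simplified assumptions, not how any real image model works: the “model” keeps only a summary of many example circles, then generates new ones from that summary plus random noise.

        # Toy sketch (an assumed analogy, not a real image model): "learn" what a
        # circle generally is from many noisy examples, then generate new circles
        # from that learned summary plus random noise.
        import numpy as np

        rng = np.random.default_rng(0)

        # A thousand training "circles", each reduced to a radius with measurement noise.
        training_radii = rng.normal(loc=5.0, scale=0.5, size=1000)

        # The learned "representation": summary statistics, not any stored training circle.
        mean_r, std_r = training_radii.mean(), training_radii.std()

        def generate_circle(n_points=100):
            """Sample a brand-new circle from the learned summary plus random noise."""
            r = rng.normal(mean_r, std_r)  # the noise is what makes each output different
            theta = np.linspace(0.0, 2.0 * np.pi, n_points)
            return np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

        points = generate_circle()
        print("learned mean radius:", round(float(mean_r), 2))
        print("generated circle radius:", round(float(np.hypot(*points[0])), 2))

    None of the generated circles is a copy of a training example; each is drawn from the general notion the “model” extracted.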

    • Eccitaze@yiffit.net · 3 points · 10 months ago

      It makes sense to judge how closely LLMs mimic human learning when people are using that comparison as a defense of AI companies scraping copyrighted content, and claiming that banning AI scraping is as nonsensical as banning human learning.

      But when it’s pointed out that LLMs don’t learn very similarly to humans, and require scraping far more material than a human does, suddenly AIs shouldn’t be judged by human standards? I don’t know if it’s intentional on your part, but that’s a pretty classic example of a motte-and-bailey fallacy. You can’t have it both ways.

    • ParsnipWitch@feddit.de · 3 points · 10 months ago (edited)

      In general I agree with you, but AI doesn’t learn the concept of what a circle is. AI reproduces the most fitting representation of what we call a circle. But there is no understanding of the concept of a circle. This may sound like nitpicking, but I think it’s important to make the distinction.

      That is why current models aren’t regarded as actual intelligence, although people already call them that…