• Grimy@lemmy.world
    1 month ago

    Using public-facing data to build a machine learning model is not against copyright law. There is a transformative-use doctrine for a reason.

    Strengthening copyright laws will only hurt the open source scene and give companies like OpenAI and Google a soft monopoly.

    Not only that, but the money is going to go to data brokers and platforms like Reddit and Getty. Individuals aren’t getting a dime.

    • unrelatedkeg@lemmy.sdf.org
      1 month ago

      Yes. Just as entering through an unlocked door isn’t trespassing.

      Most of the sources also carry copyright notices that the model gobbles up along with everything else — effectively making it more like trespassing and then taking the “no trespassing” sign home as a souvenir.

    • kibiz0r@midwest.social
      1 month ago

      This is where we need something other than copyright law. The problem with generative AI companies isn’t that somebody looked at something without permission, or remixed some bits and bytes.

      It’s that their products are potentially incredibly harmful to society. They would be harmful even if they worked perfectly. But as they stand, they’re a wide-open spigot of nonsense, spewing viscous sludge into every single channel of human communication.

      I think we can bring antitrust law to bear against them, and labor unions are also a great tool. Privacy, and a right to your own identity, factor in too. But I think we’re also going to need to develop some equivalent of ecological protections for information.

      For a long time, our capacity to dump toxic waste into the environment was minuscule compared to the scale of the natural world. But as we automated more and more, it became clear that the natural world has limits. I think we’re headed towards discovering the same thing for the world of information.