• Shinji_Ikari [he/him]@hexbear.net · 29 points · 1 year ago

    If the results were also open and public, it’d be a different conversation.

    This is more akin to rainwater collection uphill and selling it back to the people downhill. It’s privatization of a public resource.

    • cooljacob204@kbin.social · 8 points · 1 year ago

      This is more akin to rainwater collection uphill and selling it back to the people downhill

      Not really: anyone can go and collect the same water they’re collecting. And it’s happening; open-source LLMs are quickly catching up, and a shit ton of other companies are crawling the exact same data.

      • pjhenry1216@kbin.social · 14 points · 1 year ago

        “Anyone”. I hate when people use this word knowing full well it isn’t true in practice. “Nothing is stopping you from spending millions of dollars on your own LLM.” OK.

        The web is a bunch of public information, sure. People don’t have a reasonable expectation of privacy, but they used to have a reasonable expectation that their information would be used in a very specific fashion, especially in the US, where there’s a default copyright claim on the data. And crawlers may simply ignore text that states you can’t use the data, even if you include a clause saying that by accessing the data you agree to those terms. That only works against the little people: the “anyone” who can’t actually just go and build an LLM.
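
        Take robots.txt, the standard “please don’t crawl this” signal: it’s purely advisory, so it only binds crawlers that choose to check it. A rough Python sketch (the site and crawler name are just placeholders):

        ```python
        # Rough sketch, not any particular company's crawler. robots.txt is only
        # a request; nothing technically stops a scraper that skips this check.
        from urllib import robotparser

        rp = robotparser.RobotFileParser()
        rp.set_url("https://example.com/robots.txt")  # placeholder site
        rp.read()

        url = "https://example.com/blog/some-post"
        if rp.can_fetch("ExampleBot/1.0", url):       # placeholder user agent
            print("robots.txt allows fetching", url)
        else:
            print("robots.txt disallows", url, "- only polite crawlers stop here")
        ```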

        • cooljacob204@kbin.social · 6 points · 1 year ago (edited)

          Sure, but that applies to literally a million other things. There is an absolute ton of shit companies do that individuals can’t, which is still built off others’ work.

          A company can go spend $1B on a new state-of-the-art nuclear reactor that will bring in billions over its lifetime. Will the physicist who discovered the underlying math see any of the profit? No, probably not, and if they do it won’t be anywhere near a “fair share”. Nor will all the publishers and authors who produced the learning materials that the people working for said company used to build it.

          There is a ton of public knowledge that can only be utilized with a huge investment; that’s just how a lot of innovation works.

          And OpenAI also has a ton of competitors. Sure, they have the lead for now, but thousands of other companies are also scraping and building LLMs.

          • pjhenry1216@kbin.social · 5 points · 1 year ago

            You’re not really going to win this argument, as I’m an anti-capitalist, so I agree a lot of that stuff is wrong too. I don’t believe you should own others’ labor; the employees should own the company. And I don’t believe in copyright, but it does exist and it’s enforced against individuals, so it’s only fair that it’s enforced against these companies as well. I don’t think you should be allowed to blindly scrape when the information could be behind an agreement to use it only in a specific manner if accessed. Plus, I think it should be opt-in, since this is a new use and therefore a new right under copyright, just as actors now suddenly need to worry about whether they’ll be scanned and owned by a Hollywood studio. It’s something a reasonable person wouldn’t expect, and that’s why past works are protected from that use.

            Things behind a third-party privacy policy, sure. You agreed to it, whatever. But your own website? I’m not feeling it.

    • little_water_bear@discuss.tchncs.de · 4 points · 1 year ago

      This comparison is lacking because water is unlike data. The data can still be accessed exactly as before; it doesn’t diminish, and access to it isn’t restricted by other people harvesting it.

      • mim@lemmy.sdf.org · 14 points · 1 year ago (edited)

        While that is true, it does divert people’s clicks.

        Imagine you wrote a quality tech tutorial blog. Is it OK for OpenAI to take your content, train their models on it, and divert your previous readers away from your blog?

        It’s an open ethical question that isn’t straightforward to answer.

        EDIT: yes, people also learn things and repost them. But the scale at which ChatGPT operates is unprecedented. We should probably let policy catch up; otherwise we’ll end up with the mess we currently have from letting Google and Facebook collect data for years without restrictions.

      • Shinji_Ikari [he/him]@hexbear.net · 3 points · 1 year ago

        It’s not a great comparison, I’ll admit, but it’s essentially the same as digital privacy, except one of these is protected in the courts and the other is encouraged.

        I haven’t sat down to really build a stance on this, but it doesn’t sit well with me.