BBC will block ChatGPT AI from scraping its content
ChatGPT will be blocked by the BBC from scraping content in a move to protect copyrighted material.
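
(Blocks like this are typically implemented in robots.txt; a minimal sketch, assuming the goal is to disallow OpenAI's documented GPTBot crawler site-wide:)

```
# robots.txt: ask OpenAI's GPTBot crawler to stay off every path on the site
User-agent: GPTBot
Disallow: /
```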

  • utopiah@lemmy.world · 1 year ago

    If only the BBC does it then sure, it’s pointless. If the BBC does it and you and I consider it, it might change things a bit. If we do it and others follow, including large websites, or author guilds start legal action in the US, then it changes things radically, to the point of rendering OpenAI’s LLMs basically useless or practically unusable. IMHO this isn’t an action against LLMs in general, e.g. not against researchers from public institutions building datasets and publishing research results, but rather against OpenAI, the for-profit company that has an exclusive arrangement with the for-profit behemoth Microsoft, which is a champion of entrenchment.

    • V H@lemmy.stad.social · 1 year ago

      The thing is, realistically it won’t make a difference at all, because there are vast amounts of public domain data that remain untapped. The main “problematic” need for OpenAI is new content that represents up-to-date language and up-to-date facts, and my point about the share price of Thomson Reuters was to illustrate that OpenAI is already large enough that it could afford to outright buy some of the largest channels of up-to-the-minute content in the world.

      As for authors, it might wipe a few works by a few famous authors from the dataset, but they contribute very little to the quality of an LLM, because the LLM can’t easily tell individual works apart during training unless you intentionally reinforce specific ones. Several million books are published every year. Most of them make <$100 in royalties for their authors (an average book sells ~200 copies). Want to bet how cheap it’d be to buy a fully licensed set of a few million books? You don’t need bestsellers; you need many books that are merely good enough to drag the overall quality of the total dataset up.
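
      A rough back-of-envelope sketch (the per-book fee below is a hypothetical assumption for illustration, not a quoted market price):

      ```python
      # Back-of-envelope: licensing a few million mid-list books outright.
      # The flat per-book fee is a hypothetical assumption; real prices would vary.
      books = 3_000_000      # "a few million books"
      fee_per_book = 100     # hypothetical flat fee in USD, on the order of a year's royalties
      total = books * fee_per_book
      print(f"Total licensing cost: ${total:,}")  # -> Total licensing cost: $300,000,000
      ```

      Even at several times that assumed fee, the total stays far below the multi-billion-dollar investments OpenAI has already attracted.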

      The irony is that the largest beneficiaries of content sources taking a strict view of LLMs will be OpenAI, Google, Meta, and the few others large enough to outright buy datasets, or buy the companies that own them, because this creates a moat against anyone who can’t afford licensed datasets.

      The biggest problem won’t be for OpenAI, but for people trying to build open models on the cheap.