The world’s most important knowledge platform needs young editors to rescue it from chatbots – and its own tired practices

Established in 2001, Wikipedia is an “old man” by internet standards. But the role it plays in our collective knowledge of the world remains astonishing. Content from the free internet encyclopedia appears in everything from high-school term papers and pub trivia questions to search engine summaries and voice assistants. Tools like Google’s AI Overviews and ChatGPT rely heavily on Wikipedia, although they rarely credit the site in their responses.

And therein lies the problem: as Wikipedia’s visibility diminishes, reduced to mere training data for AI applications, it also loses prominence in the minds of readers and potential contributors. When someone notices a topic that is poorly described on Wikipedia, they might feel motivated to correct it. But this can-do spirit goes away when the error comes through an AI summary, where the source of the information isn’t clear.

  • MicroWave@lemmy.world (OP) · 3 months ago

    Agreed. ChatGPT doesn’t like to cite sources. Microsoft Copilot and Google Gemini do link to some sources, though not as accurately or thoroughly as Wikipedia.

    • Throw_away_migrator@lemmy.world · 3 months ago

      What I don’t understand is how IBM had Watson, which was able to answer questions well enough to go on Jeopardy and dominate. And now, more than a decade later, these LLMs absolutely suck at it.

      It makes me wonder if Watson was nothing more than a Mechanical Turk because what is out there now seems like a huge step backwards.

      • Carrolade@lemmy.world · 3 months ago

        They just work in entirely different ways. A car and a horse can both serve as transportation, but they aren’t anything alike in other respects. The same goes for LLMs compared to previous sorts of bots.

        The main difference is that an LLM isn’t fetching whole answers from some database somewhere. It’s generating them fresh. You have to hope it generates the right stuff, which it does a certain percentage of the time.
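
        To make that concrete, here’s a toy sketch of the retrieval-versus-generation distinction (the data and function names below are invented for illustration, not any real system’s API): a lookup-style bot either returns a stored answer verbatim or nothing at all, while a generative model assembles an answer fresh and is only right some fraction of the time.

```python
import random

# Toy illustration only; the data and names here are made up for this
# sketch and do not correspond to any real system.

# Retrieval-style bot: the answer is fetched verbatim from a curated store,
# so it is either present and exact, or missing entirely.
KNOWLEDGE_BASE = {
    "Who founded Wikipedia?": "Jimmy Wales and Larry Sanger",
}

def retrieval_answer(question: str) -> str:
    return KNOWLEDGE_BASE.get(question, "No entry found.")

# Generative-style model: the answer is assembled fresh from learned
# statistics, so it always produces something fluent, but correctness is
# only probabilistic.
def generative_answer(question: str) -> str:
    candidates = [
        "Jimmy Wales and Larry Sanger",     # plausible and correct
        "Jimmy Wales and Tim Berners-Lee",  # plausible but wrong
    ]
    return random.choice(candidates)

q = "Who founded Wikipedia?"
print("retrieval :", retrieval_answer(q))
print("generative:", generative_answer(q))
```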