Meta “programmed it to simply not answer questions,” but it did anyway.

  • catloaf@lemm.ee · 54 points · 2 months ago

    Kaplan noted that AI chatbots “are not always reliable when it comes to breaking news or returning information in real time,” because “the responses generated by large language models that power these chatbots are based on the data on which they were trained, which can at times understandably create some issues when AI is asked about rapidly developing real-time topics that occur after they were trained.”

    If you’re expecting a glorified autocomplete to know about things it doesn’t have in its training data, you’re an idiot.

    • Catoblepas · 38 points · 2 months ago

      There are definitely idiots, but these idiots don’t get their ideas of how the world works out of thin air. These AI chatbot companies push hard, in their advertisements, the cartoon reality that this is a smart robot that knows things, and to learn otherwise you have to either listen to smart people or read a lot of text.

      • vaultdweller013@sh.itjust.works · 1 point · 2 months ago

        I just assumed it was BS at first, but I also once nearly went unga bunga caveman against a computer from 1978. So I probably have a deeper understanding of how dumb computers can be.

    • Empricorn@feddit.nl · 8 points · 2 months ago

      Yeah, the average person is the idiot here, for something they never asked for and see no value in. Companies threw billions of dollars at this emerging technology. Now products like Google Search have hallucinating, error-prone AI forced into them, with no way to opt out or use the (working) legacy version…

    • brucethemoose@lemmy.world · 5 points · 2 months ago

      Some services will use glorified RAG to put more current info in the context.

      But yeah, if it’s just the raw model, I’m not sure what they were expecting.
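
      The RAG (retrieval-augmented generation) idea mentioned above can be sketched roughly as: retrieve current documents relevant to the question, then stuff them into the prompt so the model isn’t limited to its stale training data. A minimal toy sketch, using a naive keyword-overlap retriever (real services use vector search and a hosted LLM; the function names here are made up for illustration):

      ```python
      # Toy RAG sketch: keyword retrieval + prompt assembly.
      # Real systems embed documents into vectors and query an actual LLM.

      def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
          """Rank documents by naive word overlap with the query."""
          terms = set(query.lower().split())
          scored = sorted(
              documents,
              key=lambda doc: len(terms & set(doc.lower().split())),
              reverse=True,
          )
          return scored[:k]

      def build_prompt(query: str, documents: list[str]) -> str:
          """Prepend retrieved, up-to-date text as context for the model."""
          context = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
          return f"Context:\n{context}\n\nQuestion: {query}"

      docs = [
          "The election results were certified on Tuesday.",
          "A new model was released last week.",
          "Unrelated archive article about gardening.",
      ]
      print(build_prompt("What happened with the election results?", docs))
      ```

      The point of the sketch: the model never “knows” breaking news; the service just pastes fresh text into the context window at query time, which is why answers degrade to training-data guesses when retrieval is absent or misses.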