• Sludgehammer@lemmy.world · 29 days ago

      Or to put it another way, LLMs are advanced chatbots. Their purpose is to generate credible-sounding text, not accurate text.

      • chemical_cutthroat@lemmy.world · 29 days ago

        But, like a human, it mostly tries to stick to the truth. It does get things wrong, and in that way it's more like a five-year-old, because it won't understand that it is fabricating things. But there is a moral code they are programmed with, and they do mostly stick to it.

        To write off an LLM as a glorified chatbot is disingenuous. They are capable of producing everything that a human is capable of, just in a different ratio. Instead of learning everything slowly over time and forming opinions based on experience, they are given all the knowledge of humankind and told to sort it out themselves. Like a five-year-old with an encyclopedia set, they're going to make some mistakes.

        Our problem is that we haven't found the right ratios for them. We aren't specializing LLMs enough to make sure they have a limited enough library to pull from. If we made the datasets smaller and didn't force them into "chatbot" roles where they're given carte blanche to say whatever they like, LLMs would be in a much better state than they currently are.

          • chemical_cutthroat@lemmy.world · 29 days ago (edited)

            Is that any different from a human moral code? We like to think we have some higher sense of "truth," but in reality we are only parroting the "facts" we hold as true. Throughout our history we have professed many things as truth. My favorite fact, which I just learned yesterday, is that we didn't discover oxygen until after the founding of the United States. Are the humans before 1776 any less human than us? Or were they just trained on a limited dataset, telling people that "miasma" was the cause of all their woes?

    • runeko@programming.dev · 29 days ago (edited)

      If Elon had a parrot that constantly said “Elon is a Nazi”, it would be in the news.

      • The Quuuuuill@slrpnk.net · 29 days ago

        You'd think, but he has a kid spouting off shit we're not talking about enough, and that kid's at the age where he repeats whatever his dad says.

    • taiyang@lemmy.world · 29 days ago

      Kinda like how self-driving cars are still safer than the average driver, yeah. Do they make mistakes? For sure, although the bigger annoyance is just how slow they are to turn sometimes. AI would be so-so at leading, but man, is the bar low with Americans.