• Technus@lemmy.zip
    1 month ago

    These models are nothing more than glorified autocomplete algorithms parroting the responses to questions that already existed in their input.

    They’re completely incapable of critical thought or even basic reasoning. They only seem smart because people tend to ask the same stupid questions over and over.

    If they receive an input that doesn’t have a strong correlation to their training, they just output whatever bullshit comes close, whether it’s true or not. Which makes them truly dangerous.

    And I highly doubt that’ll ever be fixed, because the brainrotten corporate middle-manager types who insist on implementing this shit will never want their “state of the art AI chatbot” to answer a customer’s question with “sorry, I don’t know.”

    I can’t wait for this stupid AI craze to eat its own tail.

    • Terrasque@infosec.pub
      1 month ago

      I generally agree with your comment, but not on this part:

      “parroting the responses to questions that already existed in their input.”

      They’re quite capable of following instructions over data where neither the instruction nor the data was anywhere in the training data.

      “They’re completely incapable of critical thought or even basic reasoning.”

      Critical thought, generally no. Basic reasoning, that they’re somewhat capable of. And chain of thought amplifies what little is there.
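
      To make that concrete: “chain of thought” is, at the prompt level, often nothing more than prompt construction. A minimal sketch (hypothetical, no model is called; the function names are made up for illustration):

```python
# Hypothetical sketch: "chain of thought" as plain prompt construction.
# No model is called here; the point is only that CoT prepends an explicit
# "show your reasoning" instruction to the question before the model sees it.

def direct_prompt(question: str) -> str:
    return f"Q: {question}\nA:"

def chain_of_thought_prompt(question: str) -> str:
    # The extra instruction nudges the model to emit intermediate steps,
    # which in practice often improves the final answer.
    return f"Q: {question}\nA: Let's think step by step."

question = "If a train leaves at 3pm and travels for 2 hours, when does it arrive?"
print(chain_of_thought_prompt(question))
```

      Whether the extra steps constitute “reasoning” or just better-conditioned text generation is exactly what this thread is arguing about.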

      • AliasAKA@lemmy.world
        1 month ago

        I don’t believe this is quite right. They’re capable of following instructions that aren’t in their training data but that resemble ones that were; that is, they can probabilistically interpolate between what they saw in training and what you prompted them with. This is why prompting can be so important.

        Chain of thought is essentially automated prompt engineering: if the model has seen a similar process (e.g. in an online help forum or in study materials), it can emulate that process with different keywords and phrases. The models themselves, however, cannot perform “a is to b, therefore b is to a”, arguably the cornerstone of symbolic reasoning. That’s partly because they have no state model or true grounding, only the probability of observing a token given some context.

        So even with chain of thought, the model isn’t reasoning. It’s doing very fancy interpolation over the words and phrases of the initial prompt to generate a prompt that will probably yield a better answer, not because of reasoning, but because of a stochastic process.
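
        The reversal point can be illustrated with a toy model (a deliberately tiny sketch, nothing like a real LLM) that only stores counts for “which token followed which” in its training text, i.e. P(next token | context):

```python
# Toy illustration (not a real LLM): a "model" that only stores
# how often each token followed each context token in training.
from collections import defaultdict, Counter

def train_bigram(text: str):
    counts = defaultdict(Counter)
    tokens = text.split()
    for a, b in zip(tokens, tokens[1:]):
        counts[a][b] += 1
    return counts

model = train_bigram("Valentina is the mother of Tom .")

# Forward direction: a continuation after "Valentina" was seen in training.
print(model["Valentina"].most_common(1))  # [('is', 1)]

# Reversed query: nothing except "." was ever observed after "Tom",
# so the reversed fact "Tom is the son of Valentina" has no support.
print(model["Tom"])  # Counter({'.': 1})
```

        A real LLM is vastly more sophisticated than a bigram table, but the underlying objective is the same shape: probability of the next token given context, with no symbolic step that turns “a relates to b” into “b relates to a”.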

    • ContrarianTrail@lemm.ee
      1 month ago

      The current AI discussion I’m reading online has eerie similarities to the debate about legalizing cannabis 15 years ago. One side praises it as a solution to all of society’s problems, while the other sees it as the devil’s lettuce. Unsurprisingly, both sides were wrong, and the same will probably apply to AI. It’ll likely turn out that the more dispassionate people in the middle, who are neither strongly for nor against it, will be the ones who had the most accurate view on it.

      • Lvxferre@mander.xyz
        1 month ago

        “It’ll likely turn out that the more dispassionate people in the middle, who are neither strongly for nor against it, will be the ones who had the most accurate view on it.”

        I believe that some of the people in the middle will have more accurate views on the subject, indeed. However, note that there are multiple ways to be in the “middle ground”, and some are sillier than the extremes.

        For example, consider the following views:

        1. That LLMs are genuinely intelligent, but useless.
        2. That LLMs are dumb, but useful.

        Both positions are middle grounds, and yet they can’t both be accurate at the same time.