• aname@lemmy.one
    8 months ago

    > but its a far fetch from an intelligence. Just a very intelligent use of statistical methods.

    Did you know there is no rigorous scientific definition of intelligence?

    Edit. facts

    • bbuez@lemmy.world
      8 months ago

      We do not have a rigorous model of the brain, yet we have designed LLMs. Experts with decades in ML recognize that there is no intelligence happening here, because yes, we don’t understand intelligence, certainly not enough to build one.

      If we want to work from definitions, here is Merriam-Webster:

      > (1) : the ability to learn or understand or to deal with new or trying situations : reason
      >
      > also : the skilled use of reason
      >
      > (2) : the ability to apply knowledge to manipulate one’s environment or to think abstractly as measured by objective criteria (such as tests)

      The context stack is the closest thing we have to retaining and applying old info in newer context; the rest is in the name: Generative Pre-Trained language models. Their output is baked by a statistical model finding similar text, also coined “stochastic parrots” by some ML researchers, which I find a more fitting name. There’s also no doubt of their potential (and already practiced) utility, but they’re a long shot from being considered a person by law.
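      To make the “stochastic parrot” idea concrete, here is a minimal toy sketch: a bigram sampler that can only replay the word-pair statistics of its training text. This is an illustration of the statistical-continuation principle only; real LLMs use learned neural weights over long contexts, not raw co-occurrence counts.

      ```python
      import random
      from collections import defaultdict

      # Toy "stochastic parrot": a bigram model. It has no model of
      # meaning at all -- it only records which word followed which,
      # then samples continuations from those observed counts.

      def train(text):
          counts = defaultdict(list)
          words = text.split()
          for prev, nxt in zip(words, words[1:]):
              counts[prev].append(nxt)  # duplicates encode frequency
          return counts

      def generate(counts, start, length=8):
          out = [start]
          for _ in range(length):
              followers = counts.get(out[-1])
              if not followers:
                  break  # no observed continuation: the parrot goes silent
              out.append(random.choice(followers))  # sample by frequency
          return " ".join(out)

      model = train("the cat sat on the mat and the cat ran")
      print(generate(model, "the"))
      ```

      The output is locally fluent yet carries no understanding, which is the whole point of the nickname.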

    • Aceticon@lemmy.world
      8 months ago

      That statement of yours just means “we don’t yet know how it works hence it must work in the way I believe it works”, which is about the most illogical “statement” I’ve seen in a while (though this being the Internet, it hasn’t been all that long of a while).

      “It must be clever statistics” really doesn’t follow from “science doesn’t rigorously define what it is”.

      • aname@lemmy.one
        8 months ago

        Yes, corrected.

        But my point stands: claiming there is no intelligence in AI models without even knowing what “real” intelligence is, is wrong.

        • Aceticon@lemmy.world
          8 months ago

          I think the point is more that the word “intelligence” as used in common speech is very vague.

          I suppose a lot of people (certainly I do, and I expect many others do too) will use the word “intelligence” in a general non-science setting in place of “rationalization” or “reasoning”, which would be clearer terms but less well understood.

          LLMs easily produce output which is not logical, and a rational being can spot it as not following rationality (even if we don’t understand why we can do logic, we can understand logic or the absence of it).

          That said, so do lots of people, which makes an interesting point about lots of people not being rational, which nearly dovetails with your point about intelligence.

          I would say the problem is trying to define “intelligence” as something that includes all humans in all settings, when clearly humans are perfectly capable of producing irrational shit whilst thinking of themselves as highly intelligent while doing so.

          I’m not sure if that’s quite the point you were bringing up, but it’s a pretty interesting one.