Basically, it’s a calculator that can take letters, numbers, words, sentences, and so on as input.

And produce a mathematically “correct”-sounding output, shaped by the language patterns in its training data.

This core concept underlies most, if not all, generative “AI” models, not just LLMs, I think.

  • will_a113@lemmy.ml · 2 months ago

    The critical thing to remember about LLMs is that they are probabilistic in nature. They don’t know facts, they don’t reason, they don’t evaluate. All they do is take your input string, split that string into tokens that are about 3-4 characters long, and then go back into their vast, vast, pretrained database and say “I have this series of tokens. In the past when similar sets of tokens were given, what were the tokens that were most likely to be associated with them?” It will then construct the output string one token at-a-time (more sophisticated models can do multiple tokens at once so that words, phrases and sentences might hang together better) until the output is complete (the probability of the next token being relevant drops below some threshold value) or your output limit is reached.