• GetOffMyLan@programming.dev

    No it’s not. It’s pedantic and arguing semantics. It is essentially useless and a waste of everyone’s time.

    It applies a statistical model and returns an analysis.

    I’ve never heard anyone object when someone says they used a computer to analyse something.

    It’s just the same “AI bad” bullshit, and it’s tiring in every single thread about them.

    • apotheotic (she/her)@beehaw.org

      I never made any “AI bad” arguments (in fact, I said that they may be incredibly well suited to this). I just argued for the correct use of words, and you hallucinated.

    • knightly the Sneptaur@pawb.social

      LLMs aren’t “bad” (ignoring, of course, the massive content theft necessary to train them), but they are being wildly misused.

      “Analysis” is precisely one of those misuses. Grand Theft Autocomplete can’t even count: ask it how many “e”s are in “elephant” and you’ll get an answer anywhere from 1 to 3.

      This is because they do not read or understand; they produce strings of tokens based on the statistical likelihood of what comes next. If prompted for an analysis, they’ll output something that looks like an analysis, but a human has to do the work to determine whether it is accurate.
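
      To make the mechanism concrete, here is a minimal Python sketch. The tokenizer part assumes the tiktoken library; the exact token split depends on the vocabulary, and the toy probabilities are illustrative, not taken from any real model:

      ```python
      import random

      # Why letter counting fails: the model never sees characters.
      # Assumes tiktoken is installed (pip install tiktoken).
      import tiktoken

      enc = tiktoken.get_encoding("cl100k_base")
      tokens = enc.encode("elephant")
      print(tokens)                             # opaque integer IDs, not letters
      print([enc.decode([t]) for t in tokens])  # the chunks the model actually "reads"
      print("elephant".count("e"))              # ground truth: 2

      # What the model actually does: sample the next token from a
      # probability distribution. This toy distribution stands in for the
      # network's output over its whole vocabulary at each step.
      next_token_probs = {" is": 0.40, " was": 0.25, " can": 0.20, " ate": 0.15}
      choice = random.choices(
          list(next_token_probs), weights=next_token_probs.values()
      )[0]
      print("elephant" + choice)  # plausible continuation, no reading involved
      ```

      Loop that sampling step and you get fluent text; at no point does anything inspect the string character by character.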

      • howrar@lemmy.ca

        LLMs cannot:

        • Tell fact from fiction
        • Accurately recall data from their training set
        • Count

        LLMs can:

        • Translate
        • Get the general vibe of a text (sentiment analysis)
        • Generate plausible text

        Semantics aside, these are very different skills that require different setups to accomplish. Just because counting is an easier task than analysing text for humans doesn’t mean the same is true for an LLM. You can’t use that as evidence of its inability to do the “harder” tasks.
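
        That split points at the practical fix: use ordinary code for the tasks code is exact at, and a model for the fuzzy ones. A minimal sketch, assuming Hugging Face’s transformers library and its default sentiment checkpoint:

        ```python
        # Right tool for each task: deterministic code for counting,
        # a trained model for sentiment. Assumes the transformers library
        # (pip install transformers) and its default sentiment model.
        from transformers import pipeline

        text = "The elephant exhibit was excellent."

        # Counting is a job for ordinary code: exact and always correct.
        print(text.count("e"))

        # Sentiment ("the general vibe") is a job for a trained model.
        classifier = pipeline("sentiment-analysis")
        print(classifier(text))  # e.g. [{'label': 'POSITIVE', 'score': ...}]
        ```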