• ඞmir@lemmy.ml · 6 months ago

      That’s specifically LLMs. Image recognition like OP’s example has nothing to do with language processing. Then there’s generative AI, which needs some kind of mapping between prompts and weights, but that’s also a completely different type of “AI”.

      That doesn’t mean any of these “AI” products can think, but don’t conflate LLMs with AI in general.

        • ඞmir@lemmy.ml · 6 months ago

          Neural networks aren’t going anywhere because they can be genuinely useful, just not to solve every problem

            • FooBarrington@lemmy.world · 6 months ago

              And that somehow means we shouldn’t do OCR anymore, or image classification, or text to speech, or speech to text, or anomaly detection, or…?

              Neural networks are really good at pattern recognition, e.g. finding manufacturing defects in expensive products. Why throw all of this away?
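
              To make “pattern recognition” concrete, here’s a minimal sketch of the kind of small convolutional network used for that sort of defect classification. It assumes PyTorch; the names (DefectClassifier, images, labels) and the 64×64 grayscale inputs are made up for illustration, not taken from any real inspection system.

              ```python
              import torch
              from torch import nn

              # Tiny convolutional classifier: "defective" vs "ok" on 64x64 grayscale images.
              # Purely illustrative; a real inspection system would tune the architecture and data pipeline.
              class DefectClassifier(nn.Module):
                  def __init__(self):
                      super().__init__()
                      self.features = nn.Sequential(
                          nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                          nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                      )
                      self.head = nn.Linear(32 * 16 * 16, 1)  # single logit: defect / no defect

                  def forward(self, x):
                      return self.head(self.features(x).flatten(1))

              model = DefectClassifier()
              optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
              loss_fn = nn.BCEWithLogitsLoss()

              # Hypothetical batch of inspection images and labels (1.0 = defective).
              images = torch.randn(8, 1, 64, 64)
              labels = torch.randint(0, 2, (8, 1)).float()

              # One training step: forward pass, loss, backprop, weight update.
              optimizer.zero_grad()
              loss = loss_fn(model(images), labels)
              loss.backward()
              optimizer.step()
              ```

              On real data you’d run that update over many batches and evaluate on held-out images, but the point stands: this is ordinary, useful machinery, not something that needs to “think”.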

    • BlueMagma@sh.itjust.works · 6 months ago

      How can you know the system has no cognitive capability? We haven’t solved that problem for our own minds; we have no agreed definition of what consciousness even is. For all we know, we might be multimodal LLMs ourselves.

    • MindTraveller@lemmy.ca · 6 months ago

      Language processing is a cognitive capability. You’re just saying it’s not AI because it isn’t as smart as HAL 9000 or Cortana. You’re getting your understanding of computer science from movies and video games.