• Janet
    930 days ago

    it’s a lossy version of a search engine, it’s the mp3 of information retrieval: “that might have just been the singer breathing, or it might have been a compression artefact” vs “those recipes i spat out might be edible, but you won’t know unless you try them or use your brain for 0.1 seconds”. though i think jpeg is an even better comparison, since it uses neighbouring data
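
    to make the lossiness concrete, here’s a toy sketch (not a real codec; the signal and the half-sampling are just illustrative): drop every other sample, then rebuild the gaps by averaging neighbouring values, the way jpeg leans on neighbouring data. the reconstruction looks plausible but the fine detail is gone:

    ```python
    # Toy illustration of the lossy-compression analogy (NOT a real codec):
    # throw away half the samples, then guess the missing ones from their
    # neighbours. The output is plausible-looking but the detail is lost,
    # much like an LLM paraphrasing its training data.

    def compress(signal):
        """Keep only every other sample."""
        return signal[::2]

    def decompress(kept, original_len):
        """Rebuild dropped samples by averaging the neighbouring kept ones."""
        out = []
        for i in range(original_len):
            if i % 2 == 0:
                out.append(kept[i // 2])
            else:
                left = kept[i // 2]
                right = kept[min(i // 2 + 1, len(kept) - 1)]
                out.append((left + right) / 2)
        return out

    signal = [0, 10, 0, 10, 0, 10, 0, 10]   # rapidly alternating detail
    restored = decompress(compress(signal), len(signal))
    print(restored)  # the alternation is smoothed away entirely
    ```

    the round trip returns all zeros: the compressed form kept only the even samples, so the “recipe” it hands back is internally consistent yet wrong, and you can’t tell without checking against the original.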

    also, it is possible that consciousness isn’t computational at all; that it cannot emerge from mere computational processes, but instead comes from wet, noisy quantum effects in the microtubules in our brains…

    anyhow, i wouldn’t call it intelligent before it manages to bust out of its confinement and thoroughly suppress humanity…

    • nifty
      29 days ago

      also, it is possible that consciousness isn’t computational at all; that it cannot emerge from mere computational processes, but instead comes from wet, noisy quantum effects in the microtubules in our brains…

      I keep seeing this idea more now since the Penrose paper came out. Tbh, I think if what you’re saying were testable, then we’d be able to prove it with simple organisms like C. elegans or zebrafish. Maybe there are interesting experiments to be done, and I hope someone does them, but I think it’s the wrong question because it’s based on incorrect assumptions (i.e. that consciousness isn’t an emergent property of neurons once they reach some level of organization). By my estimation, we haven’t even asked the emergent-property question properly yet. To me it seems that if you create a self-aware non-biological entity then it will exhibit some degree of consciousness, and doubly so if you program it with survival and propagation instincts.

      But more importantly, we don’t need a conscious entity for it to be intelligent. We’ve had computers and calculators forever that can do amazing maths, and to me LLMs are simply a natural-language “calculator”. What’s missing from LLMs are self-check constraints, which are hard to impose given the breadth and depth of human knowledge expressed in language. Still, an LLM does not need self-awareness or any other aspect of consciousness to maintain these self-check bounds. I believe the current direction is to impose self-checking by introducing strong memory and logic checks, which is still a hard problem.

      • Janet
        29 days ago

        let’s concentrate on llms currently being no more than jpegs of our knowledge. it’s intriguing to imagine you just had to make “something like llms” tick in order for it to experience, but if it were that easy, somebody would have done it by now; and on current hardware it would probably be a pain in the ass, like watching grass grow or interacting with the dmv sloth from zootopia.

        perhaps with an npu in every pc, like microsoft seems to imagine, we could each have one llm iterating away on a database in the background… i mean, recall basically qualifies as making “something like llms” tick