• 100years@beehaw.org · 1 year ago

      Wow, solid wiki article! It’s very hard to say anything on the subject that hasn’t been said.

      I didn’t see the simple phrasing:

      “What if the human brain is a Chinese Room?”

      but that seems to fall under eliminative materialism replies.

      Part of the Chinese Room program (both in our heads and in an AI) could be dedicated to creating the experience of consciousness.

      Searle has no substantial logical reply to this criticism. He openly takes it on faith that humans have consciousness, which is funny because an AI could say the same thing.

• FlowVoid@midwest.social · 1 year ago (edited)

        The whole point of the Chinese room is that it doesn’t need anything “dedicated to creating the experience of consciousness”. It can pass the Turing test perfectly well without such a component. Therefore passing the Turing test - or any similar test based solely on algorithmic output - is not the same as possessing consciousness.

          • FlowVoid@midwest.social · 1 year ago (edited)

            “The room understands” is a common counterargument, and Searle addressed it by proposing that a person memorize the contents of the book.

            And while the room passes the Turing test, that does not mean it “passes all the tests we can throw at it”. Here is one test it would fail: it contains various components that respond to the word “red”, but no component that responds exclusively to the word “red” across all its uses. That level of abstraction is part of what we mean by understanding. Internal representation matters.

              • FlowVoid@midwest.social · 1 year ago

                > The human intuitive understanding works at a completely different level than the manual execution of mechanical rules.

                This is exactly Searle’s point. Whatever the room is doing, it is not the same as what humans do.

                If you accept that, then the rest is semantics. You can call what the room does “intelligent” or “understanding” if you want, but it is fundamentally different from “human intelligence” or “human understanding”.

                  • FlowVoid@midwest.social · 1 year ago

                    > All he has shown is that the human+room system is something different than just the human by itself.

                    It’s more than that. He says that all Turing machines are fundamentally the same as the Chinese room, and therefore no Turing machine will ever be capable of “human understanding”.

                    Alternatively, if anyone ever builds a machine that can achieve “human understanding”, it will not be a Turing machine.