This author has independently rediscovered a slice of what’s known as the simulators viewpoint: the view that a large-enough language model primarily learns to simulate scenarios. The earliest source that lays out all of the ingredients, which you may not want to click if you’re allergic to LW-style writing or bertology, is a 2022 rationalist rant called Simulators. I’ve summarized it before on Stack Exchange; roughly, LLMs are not agents, oracles, genies, or tools, but general-purpose simulators that simulate the conversations which agents, oracles, genies, or tools might have.
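To make that framing concrete, here is a toy sketch of my own (it is not from the Simulators post, and a bigram table is nothing like a transformer): one tiny model over a made-up corpus with invented "oracle:" and "agent:" speakers. Conditioned on different prompts, the same model plays different characters, and the model itself is neither of them.

```python
import random
from collections import defaultdict

# Made-up corpus; the "oracle:" and "agent:" speakers exist only for illustration.
corpus = (
    "oracle: the answer is forty two . "
    "oracle: the answer is hidden in the question . "
    "agent: i will open the door and look inside . "
    "agent: i will search the room for the key . "
).split()

# Count bigram transitions: how often each token follows each other token.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def simulate(prompt, length=8):
    """Sample a continuation conditioned on a one-token prompt."""
    out, cur = [prompt], prompt
    for _ in range(length):
        followers = counts.get(cur)
        if not followers:
            break
        words, weights = zip(*followers.items())
        cur = random.choices(words, weights=weights)[0]
        out.append(cur)
    return " ".join(out)

# One model, two prompts, two simulated speakers.
print(simulate("oracle:"))
print(simulate("agent:"))
```

Whatever character shows up in the output is an artifact of the conditioning, not an identity of the model; that is the only part of the analogy I mean to carry over to LLMs.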
Something about this topic is memetically repulsive. Consider previously, on Lobsters. Or, more gently, consider the recent post on a non-anthropomorphic view of LLMs, which is also in the simulators viewpoint, discussed previously, on Lobsters and previously, on Awful. Aside from scratching the surface of the math to see whether it works, folks don’t seem able to dig into the substance, and I don’t understand why not. At least here the author offers a partial explanation:
When we personify AI, we mistakenly make it a competitor in our status games. That’s why we’ve been arguing about artificial intelligence like it’s a new kid in school: is she cool? Is she smart? Does she have a crush on me? The better AIs have gotten, the more status-anxious we’ve become. If these things are like people, then we gotta know: are we better or worse than them? Will they be our masters, our rivals, or our slaves? Is their art finer, their short stories tighter, their insights sharper than ours? If so, there’s only one logical end: ultimately, we must either kill them or worship them.
If we take the simulators viewpoint seriously, then the ELIZA effect becomes a more serious problem for society, in the sense that many people would prefer to experience a simulation of an idealized reality rather than reality itself. Hyperreality is one way to look at this; another is supernormal stimulus, and I’ve previously explained my System 3 thoughts on this as well.
There’s also a section of the Gervais Principle on status illegibility; when a person fails to recognize a chatbot as a computer, they become likely to grant it bogus legibility-oriented status, and because the depth of any conversation is limited by the depth of the shallowest conversant, they will put the chatbot on a throne, pedestal, or therapist’s recliner above themselves. Symmetrically, perhaps folks do not want to comment because they have already put the chatbot into the lowest tier of social status and do not want to reflect on anything that might shift that value judgement by making its inner reasoning more legible.
I’m now remembering a minor part of the major plot point in Illuminatus! concerning the fnords. The idea was that normies are memetically influenced by “fnord” but the Discordians are too sophisticated for that. Discordian lore is that “fnord” is actually code for a real English word, but which one? Traditionally it’s “Communism” or “socialism”, but that’s two options. So, rather than GMA, what if there are merely multiple different fnords, set up by multiple different groups with overlapping-yet-distinct interests? Then the relevant phenomenon isn’t the forgetting and emotional reactions associated with each fnord, but the fnordability of a typical human. By analogy with gullibility (believing what you hear because of how it’s spoken) and suggestibility (doing what you’re told because of how it’s phrased), fnordability might be accepting what you read because of the presence of specific codewords.