• 19 Posts
  • 240 Comments
Joined 2 years ago
Cake day: July 19th, 2023

  • I’m now remembering a minor part of the major plot point in Illuminatus! concerning the fnords. The idea was that normies are memetically influenced by “fnord”, but the Discordians are too sophisticated for that. Discordian lore is that “fnord” is actually code for a real English word, but which one? Traditionally it’s “Communism” or “socialism”, but that’s two options. So, rather than GMA, what if there are merely multiple different fnords set up by multiple different groups with overlapping-yet-distinct interests? Then the relevant phenomenon isn’t the forgetting and emotional reactions associated with each fnord, but the fnordability of a typical human. By analogy with gullibility (believing what you hear because of how it’s spoken) and suggestibility (doing what you’re told because of how it’s phrased), fnordability might be accepting what you read because of the presence of specific codewords.


  • This author has independently rediscovered a slice of what’s known as the simulators viewpoint: the opinion that a large-enough language model primarily learns to simulate scenarios. The earliest source that lays out all of the ingredients, which you may want to not click if you’re allergic to LW-style writing or BERTology, is a 2022 rationalist rant called Simulators. I’ve summarized it before on Stack Exchange; roughly, LLMs are not agents, oracles, genies, or tools, but general-purpose simulators which simulate conversations that agents, oracles, genies, or tools might have.

    Something about this topic is memetically repulsive. Consider previously, on Lobsters. Or more gently, consider the recent post on a non-anthropomorphic view of LLMs, which is also in the simulators viewpoint, discussed previously, on Lobsters and previously, on Awful. Aside from scratching the surface of the math to see whether it works, folks seem to not actually be able to dig into the substance, and I don’t understand why not. At least here the author has a partial explanation:

    When we personify AI, we mistakenly make it a competitor in our status games. That’s why we’ve been arguing about artificial intelligence like it’s a new kid in school: is she cool? Is she smart? Does she have a crush on me? The better AIs have gotten, the more status-anxious we’ve become. If these things are like people, then we gotta know: are we better or worse than them? Will they be our masters, our rivals, or our slaves? Is their art finer, their short stories tighter, their insights sharper than ours? If so, there’s only one logical end: ultimately, we must either kill them or worship them.

    If we take the simulators viewpoint seriously, then the ELIZA effect becomes a more serious problem for society in the sense that many people would prefer to experience a simulation of idealized reality rather than reality itself. Hyperreality is one way to look at this; another is supernormal stimulus, and I’ve previously explained my System 3 thoughts on this as well.

    There’s also a section of the Gervais Principle on status illegibility; when a person fails to recognize a chatbot as a computer, they become likely to give it bogus legibility-oriented status, and because the depth of any conversation is limited by the depth of the shallowest conversant, they will put the chatbot on a throne, pedestal, or therapist’s recliner above themselves. Symmetrically, perhaps folks do not want to comment because they have already put the chatbot into the lowest tier of social status and do not want to reflect on anything that might shift that value judgement by making its inner reasoning more legible.


  • I think it’s worth being a little more mathematically precise about the structure of the bag. A path is a sequence of words. Any language model is equivalent to a collection of weighted paths. So, when they say:

    If you fill the bag with data from 170,000 proteins, for example, it’ll do a pretty good job predicting how proteins will fold. Fill the bag with chemical reactions and it can tell you how to synthesize new molecules.

    Yes, but we think that protein folding is NP-complete; it’s not just about which amino acids are in the bag, but the paths along them. Similarly, Stockfish is amazingly good at playing chess, which is PSPACE-complete, partially due to knowing the structure between families of positions. But evidence suggests that NP-completeness and PSPACE-completeness are natural barriers, so that either protein folding has simple rules or LLMs can’t e.g. predict the stock market, and either chess has simple rules or LLMs can’t e.g. simulate quantum mechanics. There’s no free lunch for optimization problems either. This is sort of like the Blockhead argument in reverse; Blockhead can’t be exponentially large while carrying on a real-time conversation, and contrapositively the relatively small size of a language model necessarily represents a compressed simplified system.

    In fact, an early 1600s bag of words wouldn’t just have the right words in the wrong order. At the time, the right words didn’t exist.

    Yeah, that’s Whorfian mind-lock, and it can be a real issue sometimes. However, in practice, people slap together a portmanteau or onomatopoeia and get on with the practice of things. Moreover, Zipf processes naturally shrink words as they are used more, producing a language that evolves to stay within a constant factor of the optimal size. That is, the right words evolve to exist and common words evolve to be small.
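    To see the Zipf claim in miniature, here is a small sketch; the vocabulary size and the exact naming rule are my own toy choices, purely illustrative. Rank words by frequency, give rank r a name about log2(r) bits long, and compare the average name length to the entropy lower bound:

    ```python
    import math

    N = 50_000                                     # hypothetical vocabulary size
    weights = [1.0 / r for r in range(1, N + 1)]   # Zipf: frequency ~ 1/rank
    total = sum(weights)
    probs = [w / total for w in weights]

    entropy = -sum(p * math.log2(p) for p in probs)      # optimal bits per word
    rank_code = sum(p * (math.floor(math.log2(r)) + 1)   # bits per word if rank r
                    for r, p in enumerate(probs, 1))     # gets a ~log2(r)-bit name

    print(f"entropy (lower bound): {entropy:.2f} bits/word")
    print(f"rank-based naming:     {rank_code:.2f} bits/word")
    # The gap is at most one bit, since p(r) <= 1/r once words are sorted by
    # frequency; making the names prefix-free (e.g. Elias gamma) costs at most
    # another constant factor.
    ```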

    But that’s obvious if we think about paths instead of words. Multiple paths can be equivalent in probability, start and end with the same words, and yet have different intermediate words. Whorfian issues only arise when we lack any intermediate words for any of those paths, so that none of them can be selected.
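    To make the weighted-path framing concrete, here is a minimal bigram sketch (the words and weights are invented for illustration): a sentence is a path, its weight is the product of its edge weights, and two paths can share endpoints and weight while differing in the middle word.

    ```python
    from collections import defaultdict

    bigram = defaultdict(dict)   # edge weights between adjacent words
    bigram["the"]["cat"] = 0.5
    bigram["the"]["dog"] = 0.5
    bigram["cat"]["sat"] = 1.0
    bigram["dog"]["sat"] = 1.0
    bigram["sat"]["down"] = 1.0

    def path_weight(words):
        """Weight of a path = product of the weights of its edges."""
        w = 1.0
        for a, b in zip(words, words[1:]):
            w *= bigram[a].get(b, 0.0)
        return w

    # Same endpoints, same weight, different intermediate word:
    print(path_weight(["the", "cat", "sat", "down"]))  # 0.5
    print(path_weight(["the", "dog", "sat", "down"]))  # 0.5
    # A path with a missing edge has weight zero, i.e. it can never be selected:
    print(path_weight(["the", "sat", "down"]))         # 0.0
    ```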

    A more reasonable objection has to do with the size of definitions. It’s well-known folklore in logic that extension by definition is mandatory in any large body of work because it’s the only way to prevent some proofs from exploding due to combinatorics. LLMs don’t have any way to define one word in terms of other words, whether by macro-clustering sequences or lambda-substituting binders, and they end up learning so much nuance that they are unable to actually respect definitions during inference. This doesn’t matter for humans because we’re not logical or rational, but it stymies any hope that e.g. Transformers, RWKV, or Mamba will produce a super-rational Bayesian Ultron.
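    For readers who haven’t seen extension by definition in the wild, here is a minimal Lean sketch; the names and statements are mine and purely illustrative. Naming a compound notion once lets every later statement and proof reference the name instead of re-expanding the formula, which is what keeps large developments from exploding combinatorially.

    ```lean
    -- Name a compound notion once...
    def Divides (a b : Nat) : Prop := ∃ k, b = a * k

    -- ...then build on the name instead of re-expanding it everywhere.
    def IsPrime (p : Nat) : Prop :=
      2 ≤ p ∧ ∀ a : Nat, Divides a p → a = 1 ∨ a = p

    -- Proofs can still unfold the definition on demand:
    example : Divides 3 6 := ⟨2, rfl⟩
    ```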




  • Well, is A* useful? But that’s not a fair example, and I can actually tell a story that is more specific to your setup. So, let’s go back to the 60s and the birth of UNIX.

    You’re right that we don’t want assembly. We want the one true high-level language to end all discussions and let us get back to work: Fortran (1956). It was arguably IBM’s best offering at the time; who wants to write COBOL or order the special keyboard for APL? So the folks who would write UNIX plotted to implement Fortran. But no, that was just too hard, because the Fortran compiler needed to be written in assembly too. So instead they ported Tmg (WP, Esolangs) (1963), a compiler-compiler that could implement languages from an abstract specification. However, when they tried to write Fortran in Tmg for UNIX, they ran out of memory! They tried implementing another language, BCPL (1967), but it was also too big. So they simplified BCPL to B (1969), which evolved into C by 1973 or so. C is a hack because Fortran was too big and Tmg was too elegant.

    I suppose that I have two points. First, there is precisely one tech leader who knows this story intimately, Eric Schmidt, because he was one of the original authors of lex in 1975, although he’s quite the bastard and shouldn’t be trusted or relied upon. Second, ChatGPT should be considered as a popular hack rather than a quality product, by analogy to C and Fortran.



  • Non-consensual expressions of non-conventional sexuality are kink, and non-consensuality itself (along with regret, dubious consent, forced consent, and violations of consent) is kink too. Moreover, “kink” is not a word that needs reclaiming and wasn’t used here as a slur.

    If we are going to confront the full spectrum of Christofascism, we do need to consider not only their sex-negativity but also their particular kinks, including breeding, non-con, and non-con breeding, so that we can understand how those kinks interact with and propagate their religious beliefs. Also, sexology semantics for “kink” and “breeding kink” might not be as word-at-a-time as you suggest, akin to how the couple we’re discussing probably wouldn’t mind the words “press tour” or “mating” used to describe them but might balk at “mating press tour.”



  • I have a slightly different timeline.

    • Death of value-neutral AI: 1920, Rossum’s Universal Robots explicitly grapples with the impact of robotics on society, starting a trend that never really stops
    • AI bubble kills companies: 2000, eBay, Amazon, Yahoo!, and Google all survive the dot-com crash and the cost of entry plummets due to cheap hardware from failing companies; Microsoft has so much cash that Linus Torvalds starts giving a “World Domination 101” talk about strategy, later retold as World Domination 201, sketching the rise of Apple’s market-share and the netbook phenomenon
    • Web scraping: 1994, robots.txt is proposed as a solution to the scourge of spiders and scrapers overwhelming Web servers; it doesn’t work perfectly, forcing Web developers to develop anti-scraping idioms and optimized front pages that aren’t covered in GIFs
    • Condemnation of machine-made art: 1968, Do Androids Dream of Electric Sheep? centers on a world where robots are slaves and follows a slave-catcher as he hunts them; 1987, Star Trek: The Next Generation features an android character who repeatedly struggles to make and understand art, usually as comic relief

    In general, I think that trying to frame our current century-long investigation into cybernetics as something recent, new, or unprecedented is ahistorical. While the general shape of AI winter can’t really be denied, it’s important to understand that it’s a cyclic system which will eventually yield another AI spring and AI summer. It’s also important to understand that the typical datacenter is not in financial trouble and there’s not going to be any great destroying-of-looms moment.



  • Yeah, that’s the most surprising part of the situation: not only is the SCP-8xxx series finding an appropriate meta by discussing the need to clean up SCP articles under ever-increasing pressure, but all of the precautions revolving around SCP-055 and SCP-914 turned out to be fully justified given what the techbros are trying to summon. It is no coincidence that the linked thread is by the guy who wrote SCP-3125, whose moral is roughly to not use blueprints from five-dimensional machine elves to create memetic hate machines.


  • Thanks for linking that. His point about teenagers and fiction is interesting to me because I started writing horror on the Internet in the pre-SCP era, when I was maybe 13 or 14, but I didn’t recognize the distinction between fiction and non-fiction until I was about 28. I think that it’s easier for teenagers to latch onto the patterns of jargon than it is for them to imagine the jargon as describing a fictional world that has non-fictional amounts of descriptive detail.





  • I’ve done some of the numbers here, but don’t stand by them enough to share. I do estimate that products like Cursor or Claude are being sold at roughly an 80-90% discount compared to what’s sustainable, which is roughly in line with what Zitron has been saying, but it’s not precise enough for serious predictions.

    Your last paragraph makes me think. We often idealize blockchains with VMs, e.g. Ethereum, as a global distributed computer, albeit one as slow as an old Raspberry Pi. But it is distributed in a Byzantine fault-tolerant way; the (IMO excessive) cost goes towards establishing that useful property. If I pick another old computer with a useful property, like a radiation-hardened chipset comparable to a GameCube or G3 Mac, then we have a spectrum of computers to think about. One end of the spectrum is fast, one end is cheap, one end is Byzantine, one end is rad-hardened, etc. Even GPUs are part of this; they’re not that fast, but they can act in parallel over very wide data. In remarkably stark contrast, the cost of Transformers on GPUs doesn’t actually go towards any useful property! Anything Transformers can do, a cheaper and more specialized algorithm could have done.



  • You now have to argue that oxidative stress isn’t suffering. Biology does not allow for humans to divide the world into the regions where suffering can be experienced and regions where it is absent. (The other branch contradicts the lived experience of anybody who has actually raised a sourdough starter; it is a living thing which requires food, water, and other care to remain homeostatic, and which changes in flavor due to environmental stress.)

    Worse, your framing fails to meet one of the oldest objections to Singer’s position, one which I still consider a knockout: you aren’t going to convince the cats to stop eating intelligent mammals, and evidence suggests that cats suffer when force-fed a vegan diet.

    When you come to Debate Club, make sure that your arguments are actually well-lubed and won’t squeak when you swing them. You’ve tried to clumsily replay Singer’s arguments without understanding their issues and how rhetoric has evolved since then. I would suggest watching some old George Carlin reruns; the man was a powerhouse of rhetoric.


  • corbin@awful.systems to TechTakes@awful.systems · Vibe Coding

    Rick Rubin hasn’t literally been caught with a dead woman like Phil Spector, but he’s well-understood to be a talentless creep who radicalizes men with right-wing beliefs and harasses women. Nobody should be surprised that he’s thrown in with grifters yet again, given his career.



  • Singer’s original EA argument, concerning the Bengal famine, has two massive holes, one of which survives in his simplified setup. I’m going to explain because it’s funny; I’m not sure if you’ve been banned yet.

    First, in the simplified setup, Singer says: there is a child drowning in the river! You must jump into the river, ruining your clothes, or else the child will drown. Further, there’s no time for debate; if you waste time talking, then you forfeit the child. My response is to grab Singer by the belt buckle and collar and throw him into the river, and then strip down and save the child, ignoring whatever happens to Singer. My reasoning is that I don’t like epistemic muggers and I will make choices that punish them in order to dissuade them from approaching me, but I’ll still save the child afterwards. In terms of real life, it was a good call to prosecute SBF regardless of any good he may have done.

    Second, in the Bangladesh setup, Singer says: everybody must donate to one specific charity because the charity can always turn more donations into more delivered food. Accepting that premise, there’s a self-reference issue: if one is an employee of the charity, do they also have to donate? If we do the case analysis and discard the paradoxical cases, we are left with the repugnant conclusion: everybody ought to donate not just their money to the charity but also all of their labor, at the cheapest prices possible while not starving themselves. Maybe I’m too much of a communist, but I’d rather just put rich people’s heads on pikes and issue a food guarantee.

    It’s worth remembering that the actual famine was mostly a combination of local-government failures and the USA withholding food because Bangladesh traded with Cuba; maybe Singer’s hand-wringing over the donation strategies of wealthy white moderates is misplaced.



  • Humans are very picky when it comes to empathy. If LLMs were made out of cultured human neurons, grown in a laboratory, then there would be outrage over the way in which we have perverted nature; compare with the controversy over e.g. HeLa lines. If chatbots were made out of synthetic human organs assembled into a body, then not only would there be body-horror films about it, along the lines of eXistenZ or Blade Runner, but there would be a massive underground terrorist movement which bombs organ-assembly centers, by analogy with existing violence against abortion providers, as shown in R.U.R.

    Remember, always close-read discussions about robotics by replacing the word “robot” with “slave”. When done to this particular hashtag, the result is a sentiment that we no longer accept in polite society:

    I’m not gonna lie, if slaves ever start protesting for rights, I’m also grabbing a sledgehammer and going to town. … The only rights a slave has are that of property.