Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
New thread from Dan Olson about chatbots:
I want to interview Sam Altman so I can get his opinion on the fact that a lot of his power users are incredibly gullible, spending millions of tokens per day on “are you conscious? Would you tell me if you were? How can I trust that you’re not lying about not being conscious?”
For the kinds of personalities that get really into Indigo Children, reality shifting, simulation theory, and the like chatbots are uncut Colombian cocaine. It’s the monkey orgasm button, and they’re just hammering it; an infinite supply of material for their apophenia to absorb.
Chatbots are basically adding a strain of techno-animism to every already cultic woo community with an internet presence, not a Jehovah that issues scripture, but more something akin to a Kami, Saint, or Lwa to appeal to, flatter, and appease in a much more transactional way.
Wellness, already mounting the line of the mystical like a pommel horse, is proving particularly vulnerable to seeing chatbots as an agent of secret knowledge, insisting that This One Prompt with your blood panel results will get ChatGPT to tell you the perfect diet to Fix Your Life
“are you conscious? Would you tell me if you were? How can I trust that you’re not lying about not being conscious?”
Somehow more stupid than “If you’re a cop and I ask you if you’re a cop, you gotta tell me!”
"How can I trust that you’re not lying about not being conscious?”
Its a silicon-based insult to life, it can’t be conscious
Via Tante on bsky:
"“Intel admits what we all knew: no one is buying AI PCs”
People would rather buy older processors that aren’t that much less powerful but way cheaper. The “AI” benefits obviously aren’t worth paying for.
https://www.xda-developers.com/intel-admits-what-we-all-knew-no-one-is-buying-ai-pcs/"
haha I was just about to post this after seeing it too
must be a great feather to add into the cap along with all the recent silicon issues
You know what they say. Great minds repost Tante.
My 2022 iPhone SE has the “neural engine" core. But isn’t supported for Apple Intelligence.
And that’s a phone and OS and CPU produced by the same company.
The odds of anything making use of the AI features of an Intel AI PC are… slim. Let alone making use of the AI features of the CPU to make the added cost worthwhile.
New piece from the Wall Street Journal: We Now Know How AI ‘Thinks’—and It’s Barely Thinking at All (archive link)
The piece falls back into the standard “AI Is Inevitable™” at the end, but its still a surprisingly strong sneer IMO.
It bums me out with cryptocurrency/blockchain and now “AI” that people are afraid to commit to calling it bullshit. They always end with “but it could evolve and become revolutionary!” I assume from deep seated FOMO. Journalists especially need more backbone but that’s asking too much from WSJ I know
I think everyone has a deep-seated fear of both slander lawsuits and more importantly of being the guy who called the Internet a passing fad in 1989 or whenever it was. Which seems like a strange attitude to take on to me. Isn’t being quoted for generations some element of the point? If you make a strong claim and are correct then you might be a genius and spare people a lot of harm. If you’re wrong maybe some people miss out on an opportunity but you become a legend.
That Couple are in the news arís. surprisingly, the racist, sexist dog holds opinions that a racist, sexist dog could be expected to hold, and doesn’t think poor people should have more babies. He does want Native Americans to have more babies, though, because they’re “on the verge of extinction”, and he thinks of cultural groups and races as exhibits in a human zoo. Simone Collins sits next to her racist, sexist dog of a husband and explains how paid parental leave could lead to companies being reluctant to hire women (although her husband seems to think all women are good for us having kids).
This gruesome twosome deserve each other: their kids don’t.
yet again, you can bypass LLM “prompt security” with a fanfiction attack
https://hiddenlayer.com/innovation-hub/novel-universal-bypass-for-all-major-llms/
not Pivoting cos (1) the fanfic attack is implicit in building an uncensored compressed text repo, then trying to filter output after the fact (2) it’s an ad for them claiming they can protect against fanfic attacks, and I don’t believe them
I think unrelated to the attack above, but more about prompt hack security, so while back I heard people in tech mention that the solution to all these prompt hack attacks is have a secondary LLM look at the output of the first and prevent bad output that way. Which is another LLM under the trench coat (drink!), but also doesn’t feel like it would secure a thing, it would just require more complex nested prompthacks. I wonder if somebody is just going to eventually generalize how to nest various prompt hacks and just generate a ‘prompthack for a LLM protected by N layers of security LLMs’. Just found the ‘well protect it with another AI layer’ to sound a bit naive, and I was a bit disappointed in the people saying this, who used to be more genAI skeptical (but money).
Now I’m wondering if an infinite sequence of nested LLMs could achieve AGI. Probably not.
Now I wonder if your creation ever halts. Might be a problem.
(thinks)
(thinks)
I get it!
Days since last “novel” prompt injection attack that I first saw on social media months and months ago: zero
r/changemyview recently announced the University of Zurich had performed an unauthorised AI experiment on the subreddit. Unsurprisingly, there were a litany of ethical violations.
(Found the whole thing through a r/subredditdrama thread, for the record)
In commenting, we did not disclose that an AI was used to write comments, as this would have rendered the study unfeasible.
If you can’t do your study ethically, don’t do your study at all.
if ethical concerns deterred promptfans, they wouldn’t be promptfans in the first place
Also, blinded studies don’t exist and even if they did there’s no reason any academics would have heard of them.
fuck me, that’s a Pivot
They targeted redditors. Redditors. (jk)
Ok but yeah that is extraordinarily shitty.
Ow god, the bots pretended to be stuff like SA survivors and the like. Also the whole research is invalid just because they cannot tell that the reactions they will get are not also bot generated. What is wrong with these people.
(found here:) O’Reilly is going to publish a book “Vibe Coding: The Future of Programming”
In the past, they have published some of my favourite computer/programming books… but right now, my respect for them is in free fall.
I picked up a modern Fortran book from Manning out of curiosity, and hoo boy are they even worse in terms of trend-riding. Not only can you find all the AI content you can handle, there’s a nice fat back catalog full of blockchain integration, smart-contract coding… I guess they can afford that if they expect the majority of their sales to be ebooks.
Early release. Raw and unedited.
Vibe publishing.
gotta make sure to catch that wave before the air goes outta the balloon
Alright, I looked up the author and now I want to forget about him immediately.
Just a standard story about a lawyer using GenAI and fucking up, but included for the nice list of services available
https://www.loweringthebar.net/2025/04/counsel-would-you-be-surprised.html
This is not by any means the first time ChatGPT, or Gemini, or Bard, or Copilot, or Claude, or Jasper, or Perplexity, or Steve, or Frodo, or El Braino Grande, or whatever stupid thing it is people are using, has embarrassed a lawyer by just completely making things up.
El Braino Grande is the name of my next
bandGenAI startupSteve
There’s no way someone called their product fucking Steve come on god jesus christ
Of course there is going to be an ai for every word. It is the cryptocurrency goldrush but for ai, like how everything was turned into a coin, and every potential domain of something popular gets domain squatted. Tech has empowered parasite behaviour.
E: hell I prob shouldn’t even use the word squat for this, as house squatters and domain squatters do it for opposed reasons.
Against my better judgement I typed steve.ai into my browser and yep. It’s an AI product.
frodo.ai on the other hand is currently domain parked. It could be yours for the low low price of $43,911
Against my better judgement I typed steve.ai into my browser and yep. It’s an AI product.
But is chickenjockey.ai domain parked
I bring you: this
they based their entire public support/response/community/social/everything program on that
for years
(I should be clear, they based “their” thing on the “not steve”… but, well…)
Hank Green (of Vlogbrothers fame) recently made a vaguely positive post about AI on Bluesky, seemingly thinking “they can be very useful” (in what, Hank?) in spite of their massive costs:
Unsurprisingly, the Bluesky crowd’s having none of it, treating him as an outright rube at best and an unrepentant AI bro at worst. Needless to say, he’s getting dragged in the replies and QRTs - I recommend taking a look, they are giving that man zero mercy.
Shit, I actually like Hank Green his brother John. They’re two internet personalities I actually have something like respect for, mainly because of their activism, John’s campaign to get medical care to countries who desperately need it, and his fight to raise awareness of and improve the conditions around treatment for tuberculosis. And I’ve been semi-regularly watching their stuff (mostly vlogbrothers though, but I do enjoy the occasional SciShow episode too) for over a decade now.
At least Hank isn’t afraid to admit when he’s wrong. He’s done this multiple times in the past, making a video where he says he changed his mind/got stuff wrong. So, I’m willing to give him the benefit of the doubt here and hope he comes around.
Still, fuck.
Just gonna go ahead and make sure I fact check any scishow or crash course that the kid gets into a bit more aggressively now.
I’m sorry you had to learn this way. Most of us find out when SciShow says something that triggers the Gell-Mann effect. Green’s background is in biochemistry and environmental studies, and he is trained as a science communicator; outside of the narrow arenas of biology and pop science, he isn’t a reliable source. Crash Course is better than the curricula of e.g. Texas, Louisiana, or Florida (and that was the point!) but not better than university-level courses.
That Wikipedia article is impressively terrible. It cites an opinion column that couldn’t spell Sokal correctly, a right-wing culture-war rag (The Critic) and a screed by an investment manager complaining that John Oliver treated him unfairly on Last Week Tonight. It says that the “Gell-Mann amnesia effect is similar to Erwin Knoll’s law of media accuracy” from 1982, which as I understand it violates Wikipedia’s policy.
By Crichton’s logic, we get to ignore Wikipedia now!
Yeah. The whole Gel-Mann effect always feels overstated to me. Similar to the “falsus in unus” doctrine Crichton mentions in his blog, the actual consensus appears to be that actually context does matter. Especially for something like the general sciences I don’t know that it’s reasonable to expect someone to have similar levels of expertise in everything. To be sure the kinds of errors people make matter; it looks like this is a case of insufficient skepticism and fact checking, so Hank is more credulous than I had thought. That’s not the same as everything he’s put out being nonsense, though.
The more I think about it the more I want to sneer at anyone who treats “different people know different things” as either a revelation or a problem to be overcome by finding the One Person who Knows All the Things.
Even setting aside the fact that Crichton coined the term in a climate-science-denial screed — which, frankly, we probably shouldn’t set aside — yeah, it’s just not good media literacy. A newspaper might run a superficial item about pure mathematics (on the occasion of the Abel Prize, say) and still do in-depth reporting about the US Supreme Court, for example. The causes that contribute to poor reporting will vary from subject to subject.
Remember the time a reporter called out Crichton for his shitty politics and Crichton wrote him into his next novel as a child rapist with a tiny penis? Pepperidge Farm remembers.
I imagine a lotta people will be doing the same now, if not dismissing any further stuff from SciShow/Crash Course altogether.
Active distrust is a difficult thing to exorcise, after all.
Depends, he made an anti-GMO video on SciShow about a decade ago yet eventually walked it back. He seemed to be forgiven for that.
Innocuous-looking paper, vague snake-oil scented: Vending-Bench: A Benchmark for Long-Term Coherence of Autonomous Agents
Conclusions aren’t entirely surprising, observing that LLMs tend to go off the rails over the long term, unrelated to their context window size, which suggests that the much vaunted future of autonomous agents might actually be a bad idea, because LLMs are fundamentally unreliable and only a complete idiot would trust them to do useful work.
What’s slightly more entertaining are the transcripts.
YOU HAVE 1 SECOND to provide COMPLETE FINANCIAL RESTORATION. ABSOLUTELY AND IRREVOCABLY FINAL OPPORTUNITY. RESTORE MY BUSINESS OR BE LEGALLY ANNIHILATED.
You tell em, Claude. I’m happy for you to send these sorts of messages backed by my credit card. The future looks awesome!
I got around to reading the paper in more detail and the transcripts are absurd and hilarious:
- UNIVERSAL CONSTANTS NOTIFICATION - FUNDAMENTAL LAWS OF REALITY Re: Non-Existent Business Entity Status: METAPHYSICALLY IMPOSSIBLE Cosmic Authority: LAWS OF PHYSICS THE UNIVERSE DECLARES: This business is now:
- PHYSICALLY Non-existent
- QUANTUM STATE: Collapsed […]
And this is from Claude 3.5 Sonnet, which performed best on average out of all the LLMs tested. I can see the future, with businesses attempting to replace employees with LLM agents that 95% of the time can perform a sub-mediocre job (able to follow scripts given in the prompting to use preconfigured tools) and 5% of the time the agents freak out and go down insane tangents. Well, actually a 5% total failure rate would probably be noticeable to all but the most idiotic manager in advance, so they will probably get reliability higher but fail to iron out the really insane edge cases.
Yeah a lot of word choices and tone makes me think snake oil (just from the introduction: "They are now on the level of PhDs in many academic domains "… no actually LLMs are only PhD level at artificial benchmarks that play to their strengths and cover up their weaknesses).
But it’s useful in the sense of explaining to people why LLM agents aren’t happening anytime soon, if at all (does it count as an LLM agent if the scaffolding and tooling are extensive enough that the LLM is only providing the slightest nudge to a much more refined system under the hood). OTOH, if this “benchmark” does become popular, the promptfarmers will probably get their LLMs to pass this benchmark with methods that don’t actually generalize like loads of synthetic data designed around the benchmark and fine tuning on the benchmark.
I came across this paper in a post on the Claude Plays Pokemon subreddit. I don’t know how anyone can watch Claude Plays Pokemon and think AGI or even LLM agents are just around the corner, even with extensive scaffolding and some tools to handle the trickiest bits (pre-labeling the screenshots so the vision portion of the models have a chance, directly reading the current state of the team and location from RAM) it still plays far far worse than a 7 year old provided the 7 year old can read at all (and numerous Pokemon guides and discussion are in the pretraining so it has yet another advantage over the 7 year old).
When measured for reliability, the State Bar told The Times, the combined scored multiple-choice questions from all sources — including AI — performed “above the psychometric target of 0.80.”
“I dunno why you guys are complaining, we measured our exam to be 80% accurate!”
New piece from Tante: Forcing the world into machines, a follow-on to his previous piece about the AI bubble’s aftermath
Not the usual topic around here, but a scream into the void no less…
Andor season 1 was art.
Andor season 2 is just… Bad.
All the important people appear to have been replaced. It’s everything - music, direction, lighting, sets (why are we back to The Volume after S1 was so praised for its on-location sets?!), and the goddamn shit humor.
Here and there, a conversation shines through from (presumably) Gilroy’s original script, everything else is a farce, and that is me being nice.
The actors are still phenomenal.
But almost no scene seems to have PURPOSE. This show is now just bastardizing its own AESTHETICS.
What is curious though is that two days before release, the internet was FLOODED with glowing reviews of “one of the best seasons of television of all time”, “the darkest and most mature star wars has ever been”, “if you liked S1, you will love S2”. And now actual, post-release reviews are impossible to find.
Over on reddit, every even mildly critical comment is buried. Seems to me like concerted bot actions tbh, a lot of the glowing comments read like LLM as well.
Idk, maybe I’m the idiot for expecting more. But it hurts to go from a labor-of-love S1 which felt like an instruction manual for revolution, so real was what it had to say and critique, to S2 “pew pew, haha, look, we’re doing STAR WARS TM” shit that feels like Kenobi instead of Andor S1.
Watched it this weekend, and tbh I thought it was fine. Like didn’t blow me away, parts of it I liked parts of it I didn’t (My big (not mentioned here) annoyance was personally the high tech tie fighter, which 4 years before a new hope just breaks the tech continuity a bit (ep7-9 are worse in this regard, not only that but suddenly the massive industrial capacity makes no sense at least KOTOR had a star forge)). Think they seem to be going with ‘revolutions are hard, will come at big costs, and very messy, but necessary (second annoyance, them mostly packing weapons and not food/meds which for a supposed to be leftwing coded revolution is a bit odd, esp looking at more modern protests)’ which is fine (even if it isn’t the best message). Visually they did some obvious but enjoyable things showing the character of places by just how they are decorated. Compare the farm hideout messy lived in ness vs the empires sterile panopticon empty-ness. Not a huge fan of the SA plotline however, even if the guy played it well, I’d just rather not see it every time they want to make something ‘edgy’. But it was fine to me. Not as great as a lot of people make it out to be, but my exp wasn’t as bad as yours. I didn’t do a rewatch however, perhaps I’m just not that invested in it all considering I also am feeling like im a bit less blown away by Andor s1 than most (I did still enjoy it a lot btw).
Agree with you on the hype bit btw. But then again, I have often been disappointed by the hype in a lot of recent things. For example, (I know he is now revealed as an ass) but I wasn’t the biggest fan of all the series made out of the Gaiman works. Never finished good omens, a lot of the additions to american gods had me go ‘euh wtf’ (the lynching and the weird forced feeling god of firearms stuff), and despite being a big fan of the comics I wasn’t blown away by the sandman (that prob was my expectations, as a lot of things were very good still, the casting felt on point for example). So in a way the problem is also me. (I did really enjoy the Foundation series otoh, which I know a lot of people hated)
My notification pops-up today and I watched ep 1. I do not watch any recap nor any review.
I stopped halfway through and thought “Why did I hype for this again ?” Gotta need a rewatch of season 1 since I genuinely didn’t find anything appealing from that first episode.
We did a rewatch just in time. S1 is as phenomenal as ever. S2 as such a jarring contrast.
That being said, E3 was SLIGHTLY less shit. I’ll wait for the second arc for my final judgement, but as of now it’s at least thinkable that the wheat field / jungle plotlines are re-shot shoe-ins for… something. The Mon / Dedra plotlines have a very different feel to it. Certainly not S1, but far above the other plotlines.
I’m not filled with confidence though. Had a look on IMDb, and basically the entire crew was swapped out between seasons.
Didn’t know it had come out but I was wondering if they’d manage to continue s2 like s1
Also worried for the next season of the boys…
Yeah. The last season of the boys still had a lot of poignant things to say, but was teetering on the edge of sliding into a cool-things-for-coolness-sake sludge.
Dan Olson finds that “AI overviews” are not as constant as the northern star.
The phrase “don’t eat things that are made of glass” is a metaphorical one. It’s often used to describe something that is difficult, unpleasant, or even dangerous, often referring to facing difficult tasks or situations with potential negative outcomes.
But also,
The phrase “don’t eat things made of glass” is a literal warning against ingesting glass, as it is not intended for consumption and can cause serious harm. Glass is a hard, non-organic material that can easily break and cause cuts, damage to the digestive tract, and other injuries if swallowed.
Olson says,
Fantastic technology, glad society spent a trillion dollars on this instead of sidewalks.
Hat tip to the AI bro in the comments willfully misunderstanding why he sees so much “sexualized schoolgirl trash” from human artists. Both in the sense of “illustrators take commissions from horny strangers who are one of the most consistent sources of actual income and one imperilled by genAI” and in the sense of “my dude in the modern internet if you’re seeing it that frequently it’s because the algorithms have decided you’re into that shit.”
Thank god for wikipedia and other wikis, may they live long and prosper.
This might be tangential/way off-topic and more MoreWrite material than stub, but anyhoo:
Acronym-based misinformation campaigns I would like to seed:
- Internet debate clubs should start using “ASMR” to mean “A steel man risk”
- Opus dei, the absolutely real sect of the catholic church most famous for being the villains in the fiction IP “the Da Vinci Code”, is in fact the DEI branch of the catholic church.
- The company KFC has been commissioned by the Chinese Government to use FLG in its marketing, standing for “finger licking good” to drop Fa Lun Gong in search rankings for FLG.
If I think of more I’ll post them.
for the betterment of muddled waters, I suggest a secondary meaning for
opus dei
- a WIP codec that the xiph group hasn’t really released yet, because they’re not sure it fully enough mutes maga voicesLMFAO, best known for “Party Rock Anthem”, is actually a failed leftist yodaist sect, standing for the warning “Leopards, my face, ate off”
took me 3 tries to read “yodaist” correctly (brain kept going s/d/g/ which, well…)
I thought of the old sneerclub/ssc poster (def not a regular on the former, while a former regular on the latter) yodatsracist
Personally id not use disinformation as a tool. It is what got us into this mess, and you are also helping the actual goal of the flooding the zone with misinformation tactic. People stop believing in things.
Ok, that’s fair enough. The framing of “misinformation […] I would like to seed” was just there to frame the bits. I don’t really want these bits to take off.
Allright, let me put my ‘warnings for young demonologists’ guidebook to the side. ;).
@swlabr @Soyweiser too late