Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Semi-obligatory thanks to @dgerard for starting this, and happy new year in advance.)
I have landed on a “you can get fucked if you make this annoying for me, I don’t need your product anyway” response to everything. The silver lining is that I will be dealing with way less bullshit while being just as angry all the time at everything.
Hopefully 2025 will be a nice normal year–
Cybertruck outside of Trump hotel explodes violently and no one can figure out if it was a bomb or just Cybertruck engineering
Huh. I guess it’ll be another weird one.
(I know I know, low effort post, I’m sick in bed and bored)
Hey, at least there’s no way the Elon simps can spin that, right?
Never mind.
They are also spinning it into “the car is so great you can’t do terrorism with it due to how strong it is”, which, considering the several recent vehicle terrorism attacks, seems very unwise.
Also, re “it would be different for the bystanders”: I think you can see in the explosion vid that there were not that many bystanders (which makes terrorism a bit less likely), and still 7 people were hurt (and the driver died). I’d wait a bit before drawing further conclusions.
chalk it up to perp incompetence. a single direct hit from an old 155mm shell (7 kg of explosive) can destroy a normal modern tank, never mind a car. no amount of shitty panels would contain anything even mildly substantial. there have been suicide vests with a bigger charge than that (10 kg) https://www.bbc.com/news/world-asia-66355032
I think you can see in the explosion vid that there were not that many bystanders (which makes terrorism a bit less likely)
a symbolic building (??) still makes sense as a target for a terrorist attack
Sure, but I’d expect the perp to first ram the Cybertruck into the building, or at least move closer, and not park nicely. OTOH, if he was a terrorist, what do I know; I don’t exactly know what goes through their mind shortly before things at high speed go through their mind.
parking like this raises less suspicion. maybe he wasn’t sure enough about whatever ignition mechanism he had; he could have ended up stuck in a wall, unable to get out to check on it
instead of high-speed disassembly the dude just burned up in an automatically locked death trap; i guess he found that anticlimactic. not like isis (guessing) recruits the brightest minds out there
Don’t worry about the low effort post, even the writers of 2025 are phoning it in.
this isn’t surprising at all, but some of the details are interesting: Server found in apartment funded by Russian government used AI to interfere with 2024 US elections
LLMs really are designed for this kind of thing, aren’t they?
hoping for a 2025 with solidarity, aid, and good opsec for everyone who needs it the most
as an amuse-bouche for the horrors that will follow this year, please enjoy this lobste.rs user reaching the meltdown end stage after going full Karen at someone who agrees with a submitted post saying LLMs are a dead end when it comes to AI.
https://lobste.rs/s/lgqwje/does_current_ai_represent_dead_end#c_tefto4
Thankfully, accusing someone of being a crapto promoter is seen as an attack that is beyond the pale.
Highlights from the rest of the thread include bemoaning the lack of a downvote button for registering disapproval:
https://lobste.rs/s/lgqwje/does_current_ai_represent_dead_end#c_ft9mpj
unilaterally deciding to reply multiple times to one comment, necessitating that they add a meta comment with hyperlinks
https://lobste.rs/s/lgqwje/does_current_ai_represent_dead_end#c_jjk5ei
And of course they’re a MoreWronger (moroner?):
If you go over to LessWrong, you can get some ideas of what is possible
I just got a hit of esprit d’escalier, and wished I’d replied to this
But the road to Hackers News is paved with good intentions.
with
So too is the road to Roko’s Basilisk.
one day i’ll finally catch a lobste.rs permaban thanks to your links :-)
Lol, of course they think they are civil and other people are pushing nasty rhetoric. Quite the sealion feeling.
Wonder if they even notice how much communication weirdness they themselves used, with the emphasis on emotionally laden language. (They didn’t use bold so I can’t call it crank capitalization, more like crank italics. A big deal for me! ;) )
Anyway, the questioning of “how do you know this is why there is no downvoting” shows the type of person they are. (And it is quite the annoying Rationalist behavior: suddenly demanding excessive sourcing for small remarks from people they disagree with.)
“Do you want to refine your claim?”
Fellas, I was promised the first catastrophic AI event in 2024 by the chief doomers. There are only a few hours left to go; I’m thinking Skynet is hiding inside the Times Square orb. Stay vigilant!
I’m sad to report that the catastrophic AI event already happened and it was this picture
mind horrors beyond your comprehension
Oh god, it is 2025 in .nl, it is coming! Everything is exploding, AI is turning us into fireworks! Yud was right!!1!!one!!
Comment sections on awful.systems are similar to this Drew Gooden sketch sometimes:
It’s just hard for me to give MY input when I don’t even know what’s going on
Once a month or so Awful Systems casually mentions a racist in some sub-sub-culture who I had never heard about before and then I get to spend an hour doing background research on obscure net drama from 2013 or whatever.
If you stick around and do a bunch of research you will end up better informed and much unhappier.
Oh no I’m in this sketch and I don’t like it. Or at least, I would be. The secret is to acknowledge your lack of background knowledge or basic grounding in what you’re talking about and then blunder forward based on vibes and values, trusting that if you’re too far off base on the details you’ll piss off someone (sorry skillissuer) enough to correct you.
I’m making a mental note to keep that link around for the next time someone barges into one of our threads and does the “I don’t know what this is, here’s my reaction to what I thought the topic was, no I didn’t read the article or lurk” routine
as a bonus they might accidentally watch the rest of the video and finally figure out how much AI sucks
“I don’t know what this is, here’s my reaction to what I thought the topic was, no I didn’t read the article or lurk”
bizarre that they actually just say this
You know guys, it’s really hard for me to give MY input when you are so negative about all the terrible things I like. Next time you guys come CRAWLING to me for advice, try not hating me as a human being for everything my twisted value system represents.
https://xcancel.com/altryne/status/1872090523420229780#m
The whole thread is terrible; controlling and borderline abusive behavior.
Found a couple QRTs cooking the guy which caught my attention:
https://twitter.com/denimneverdies/status/1872364569743786286
https://twitter.com/TheWapplehouse/status/1873915404529406462
I feel personally attacked because I have a BELOVED dino plush that looks almost exactly like that one, only it is, you know, a fucking plush toy and not an eldritch horror. They took a perfectly fine toy and ruined it with a stupid chatbot; the girl did the smartest thing and just uses it as a normal plushy.
Also if you listen to the video at the end you can really easily figure out why kids don’t like that toy, IT’S FUCKING ANNOYING. Kids don’t want to deal with your bullshit and fortunately they don’t yet know how to pretend to care.
“In the meantime, would you like to play a game or maybe hear a fun fact?”
“No.”
“That’s okay! Is there something else you would like to do or talk about? I’m here to chat about anything you like!”
It’s like a deliberately written comedy scene of a character who can’t pick up on social cues.
The video is hilarious. The idiot AI man is so gpt-pilled he cannot figure out that this thing is just bloody annoying!!
Teaching the girl how to deadpan ignore annoying guys in her DMs for the rest of her life, I mean, valuable skill
“…according to my machine learning model we actually have a strong fit in favor of shooting at CEOs. There’s a 66% chance that each shot will either jam or fail to hit anything fatal, which creates a strong Bayesian prior in favor, or at least merits collecting further data to scale our models”
“What do you mean I’ve defined the problem in order to get the desired result? Machine learning process said we’re good. Why do you hate the future?”
Surprised this hasn’t been mentioned yet: https://www.rollingstone.com/culture/culture-news/meta-ai-users-facebook-instagram-1235221430/
Facebook and Instagram to add AI users. I’m sure that’s what everyone has been begging for…
Spam bots are good now!
I think it did come up a few weeks back, but it’s indeed a hilarious mess. The engagement must flow!
In my dreams, it won’t take long until all user interactions are AI-driven and the people paying for ad space in that shit realize it, leading to an immediate crash of Meta’s finances.
oh, typical techdirt eu-bashing, this time again because we have regulations.
(i wouldn’t be surprised if they’re conflating regulations with panic on purpose and packing valid criticism of llms and image plagiarism generators with the ridiculous tescreal screeds just to discredit the former; masnick’s primary stance was always extreme tech libertarianism and american exceptionalism, and the whole publication follows this)
While a good description of how AI Doom has progressed during 2024, I think the connection to regulation (at least the EU regulation; I am not familiar with what was proposed in California) is off the mark.
The EU regulation isn’t aimed at AI Doom, it’s aimed at banning and regulating real world practices. Think personal data, not AI going conscious.
I think that’s something to keep an eye on. The existence of the AI doom cult does not preclude there being good-faith regulations that can significantly reduce these people’s ability and incentives to do harm. Indeed the technology is so expensive and ineffective that if we can find a “reasonable compromise” plan to curb the most blatant kinds of abuse and exploitation we could easily see the whole misbegotten enterprise wither on the vine.
An interesting thing came through the arXiv-o-tube this evening: “The Illusion-Illusion: Vision Language Models See Illusions Where There are None”.
Illusions are entertaining, but they are also a useful diagnostic tool in cognitive science, philosophy, and neuroscience. A typical illusion shows a gap between how something “really is” and how something “appears to be”, and this gap helps us understand the mental processing that lead to how something appears to be. Illusions are also useful for investigating artificial systems, and much research has examined whether computational models of perceptions fall prey to the same illusions as people. Here, I invert the standard use of perceptual illusions to examine basic processing errors in current vision language models. I present these models with illusory-illusions, neighbors of common illusions that should not elicit processing errors. These include such things as perfectly reasonable ducks, crooked lines that truly are crooked, circles that seem to have different sizes because they are, in fact, of different sizes, and so on. I show that many current vision language systems mistakenly see these illusion-illusions as illusions. I suggest that such failures are part of broader failures already discussed in the literature.
It’s definitely linked in with the problem we have with LLMs where they detect the context surrounding a common puzzle rather than actually doing any logical analysis. In the image case I’d be very curious to see the control experiment where you ask “which of these two lines is bigger?” and then feed it a photograph of a dog rather than two lines of any length (see the sketch below). I’m reminded of how it was (is?) easy to trick ChatGPT into nonsensical solutions to any situation involving crossing a river because it pattern-matched to the chicken/fox/grain puzzle rather than considering the actual facts being presented.
Also now that I type it out I think there’s a framing issue with that entire illusion since the question presumes that one of the two is bigger. But that’s neither here nor there.
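Not something from the paper, but if anyone wanted to actually run that dog-photo control themselves, a minimal sketch might look like the following. It assumes the OpenAI Python client and a vision-capable chat model; the image path, model name, and exact wording of the question are all placeholders I made up for illustration, not anything the paper used.

```python
import base64

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def ask_about_image(image_path: str, question: str) -> str:
    """Send one question plus one local image to a vision-capable chat model."""
    # Encode the local image as a data URL, since the API expects a URL or data URI
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder: any vision-capable model
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                    },
                ],
            }
        ],
    )
    return response.choices[0].message.content


# Control condition: the question presupposes two lines, but the image is a dog photo.
# A system doing real visual analysis should push back; a context-pattern-matcher
# may just pick a line anyway.
print(ask_about_image("dog.jpg", "Which of these two lines is bigger?"))
```

The only thing worth looking at is whether the model answers something like “neither, that’s a dog” or confidently picks a line anyway.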
I think there’s a framing issue with that entire illusion since the question presumes that one of the two is bigger
I disagree, or rather I think that’s actually a feature; “neither” is a perfectly reasonable answer to that question that a human being would give, and that LLMs would be fucked by, since they basically never go against the prompt.