Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this…)

  • scruiser@awful.systems · 9 points · 3 days ago

    Serious question: what are people’s specific predictions for the coming VC bubble popping/crash/AI winter? (I’ve seen that prediction here before, and overall I agree, but I’m not sure about specifics…)

    For example… I’ve seen speculation that giving up on the massive training runs could free up compute and cause costs to drop, which the more streamlined and pragmatic GenAI companies could use to pivot to providing their “services” at sustainable rates (and GPU prices would drop, to the relief of gamers everywhere). Alternatively, maybe the bubble bursting screws up the GPU producers and cloud service providers as well, and the cost of compute and GPUs doesn’t actually drop much, if at all?

    Maybe the bubble bursting makes management stop pushing stuff like vibe coding… but maybe enough programmers have gotten into the habit of using LLMs for boilerplate that it doesn’t go away, and LLM tools and plugins persist to make code shittery.

    • scruiser@awful.systems · 4 points · 6 hours ago

      Linking this recent comment on an older thread because it was so relevant: https://awful.systems/comment/6966312

      TL;DR: GPUs cost about as much to operate as they depreciate over time, so even if the bubble pops, people might end up sitting on piles of GPUs rather than reselling or running them.
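
      A back-of-the-envelope sketch of that arithmetic, purely illustrative: every parameter below (purchase price, lifetime, power price, overhead multiplier) is an assumption plugged in for demonstration, not a figure from the linked comment.

      ```python
      # Back-of-the-envelope "operate vs. depreciate" calculator.
      # Every number below is an illustrative assumption, not data
      # from the linked comment.

      HOURS_PER_YEAR = 8_760

      def gpu_hourly_economics(purchase_price, useful_life_years,
                               power_draw_kw, power_cost_per_kwh,
                               facility_overhead):
          """Rough hourly depreciation and operating cost for one GPU.

          facility_overhead is an assumed multiplier on raw electricity
          standing in for cooling, networking, rack space, and staff.
          """
          depreciation = purchase_price / (useful_life_years * HOURS_PER_YEAR)
          operating = power_draw_kw * power_cost_per_kwh * facility_overhead
          return {
              "depreciation_per_hour": round(depreciation, 2),
              "operating_per_hour": round(operating, 2),
              # Rent needed just to beat leaving the card idle vs. rent
              # needed to ever pay the card off:
              "break_even_to_run": round(operating, 2),
              "break_even_to_pay_off": round(operating + depreciation, 2),
          }

      # Placeholder inputs; swap in whatever numbers you believe.
      print(gpu_hourly_economics(purchase_price=30_000, useful_life_years=4,
                                 power_draw_kw=1.0, power_cost_per_kwh=0.15,
                                 facility_overhead=3.0))
      ```

      If post-crash rental prices drop below even the running cost, the “rational” move is to let the cards sit idle, which is the scenario the TL;DR describes.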

    • BlueMonday1984@awful.systems (OP) · 10 points · 3 days ago

      I’ve repeated this prediction a bajillion times, but I suspect this bubble has discredited the idea of artificial intelligence, and I expect the concept to die quickly once the bubble bursts.

      Between the terabytes upon terabytes of digital mediocrity the slop-nami’s given us, LLMs’ countless and relentless failures in logic and reason, the large-scale enshittification of daily life their mere existence has enabled, and their power consumption singlehandedly accelerating the climate crisis, I feel that the public’s come to view computers as inherently incapable of humanlike cognition/creativity, no matter how many gigawatts they consume or oceans they boil.

      Expanding on this somewhat, I suspect AI as a concept will likely also come to be seen as an inherently fascist concept.

      With the current bubble’s link to esoteric fascism, the far-right’s open adoration of slop, basically everything about OpenAI’s Studio Ghibli slopgen, and God-knows-what-else, the public’s got plenty of reason to treat use or support of AI as a severe indictment of someone’s character in and of itself - a “tech asshole signifier”, to quote Baldur Bjarnason.

      And, of course, AI as a concept will probably come to be viewed as inherently anti-art/anti-artist as well - considering how badly the AI bubble’s shafted artists, and artists specifically, that kinda goes without saying.

      • istewart@awful.systems · 5 points · 13 hours ago

        I have to agree. There are already at least two notable and high-profile failure stories with consequences that are going to stick around for years.

        1. The Israeli military’s use of “AI” targeting systems as an accountability sink in service of a predetermined policy of ethnic cleansing.
        2. The DOGE creeps wanting to rewrite bedrock federal payment systems with AI assistance.

        And sadly, there are more to come. The first story is likely to keep getting hands-off treatment in most US media for a few more years yet, but the second is almost certainly going to generate Tacoma Narrows Bridge-level legends of failure and of the necessary restructuring once professionals are back in command. The kind of thing that gets put into college engineering textbooks as a dire warning of what not to do.

        Of course, it’s up to us to keep these failures in the public spotlight and framed appropriately. The appropriate question is not, “how did the AI fail?” The appropriate question is, “how did someone abusively misapply stochastic algorithms?”

          • queermunist she/her@lemmy.ml · 3 points · 5 hours ago

            It’s for the IOF, not USAmericans.

            Doing genocide actually takes a toll on their minds, no matter how much they profess to support it, so the chatbots let them offload their guilt onto the machine. So-called AI is an automated “just following orders” excuse generator.

      • scruiser@awful.systems · 12 points · 3 days ago

        I think you are much more optimistic than me about the general public’s ability to intellectually understand fascism, think about copyright, or give artists their due credit. To most people who know about image gen, it’s a fun toy: throw in some words and rapidly get pictures. The most I hope for is that AI image generation becomes unacceptable to use in professional or serious settings and gets relegated to a status similar to clip art’s.

    • YourNetworkIsHaunted@awful.systems · 9 points · 3 days ago

      I think we’re going to see an ongoing level of AI-enabled crapification for coding and especially for spam. I’m guessing there’s going to be enough money from the spam markets to support a level of continued development to keep up to date with new languages and whatever paradigms are in vogue, so vibe coding is probably going to stick around on some level, but I doubt we’re going to see major pushes.

      One thing that this has shown is how much of internet content “creation” and “communication” is done entirely for its own sake or to satisfy some kind of algorithm or metric. If nobody cares whether it actually gets read then it makes economic sense to automate the writing as much as possible, and apparently LLMs represent a “good enough” ability to do that for plausible deniability and staving off existential dread in the email mines.

      • scruiser@awful.systems · 8 points · 3 days ago

        Yeah, I also worry the slop and spam are here to stay: they’re easy enough to make, of passable enough quality for the garbage uses people want from them, and, if GPUs/compute go down in price, affordable enough for the spammers and account boosters and karma farmers and such to keep using them.