• 43 Posts
  • 468 Comments
Joined 1 year ago
Cake day: February 2, 2024


  • New thread from Baldur Bjarnason:

    Keep hearing reports of guys trusting ChatGPT’s output over experts or even actual documentation. Honestly feels like the AI Bubble’s hold over society has strengthened considerably over the past three months

    This also highlights my annoyance with everybody who’s claiming that this tech will be great if everyone uses it responsibly. Nobody’s using it responsibly. Even the people who think they are already trust the tech much more than it warrants

    Also constantly annoyed by analysis that assumes the tech works as promised or will work as promised. The fact that it is unreliable and nondeterministic needs to be factored into any analysis you do. But people don’t do that because the resulting conclusion is GRIM as hell

    LLMs add volatility and unpredictability to every system they touch, which makes those systems impossible to manage. An economy with pervasive LLM automation is an economy in constant chaos

    On a semi-related note, I expect the people who are currently making heavy use of AI will find themselves completely helpless without it if/when the bubble finally bursts, and will probably struggle to find sympathy from others thanks to AI indelibly staining their public image.

    (The latter part is assuming heavy AI users weren’t general shitheels before - if they were, AI’s stain on their image likely won’t affect things either way. Of course, “AI bro” is synonymous with “trashfire human being”, so I’m probably being too kind to them :P)


  • At this point, using AI in any sort of creative context is probably gonna prompt major backlash, and the idea of AI having artistic capabilities is firmly dead in the water.

    On a wider front (and to repeat an earlier prediction), I suspect that the arts/humanities are gonna gain some begrudging respect in the aftermath of this bubble, whilst tech/STEM loses a significant chunk.

    For arts, the slop-nami has made “AI” synonymous with “creative sterility” and likely painted the field as, to copy-paste a previous comment, “all style, no substance, and zero understanding of art, humanities, or how to be useful to society”.

    For humanities specifically, the slop-nami has also given us a nonstop parade of hallucination-induced mishaps and relentless claims of AGI too numerous to count - which, combined with the increasing notoriety of TESCREAL, could help the humanities look grounded and reasonable by comparison.

    (Not sure if this makes sense - it was 1AM where I am when I wrote this)


  • New piece from Brian Merchant: DOGE’s ‘AI-first’ strategist is now the head of technology at the Department of Labor, which is about…well, exactly what it says on the tin. Gonna pull out a random paragraph which caught my eye, and spin a sidenote from it:

    “I think in the name of automating data, what will actually end up happening is that you cut out the enforcement piece,” Blanc tells me. “That’s much easier to do in the process of moving to an AI-based system than it would be just to unilaterally declare these standards to be moot. Since the AI and algorithms are opaque, it gives huge leeway for bad actors to impose policy changes under the guise of supposedly neutral technological improvements.”

    How well Musk and co. can impose those policy changes is gonna depend on how well they can paint them as “improving efficiency” or “politically neutral” or some random claptrap like that. Between Musk’s own crippling incompetence, AI’s utterly rancid public image, and a variety of factors I likely haven’t considered, imposing them will likely prove harder than they thought.

    (I’d also like to recommend James Allen-Robertson’s “Devs and the Culture of Tech” which goes deep into the philosophical and ideological factors behind this current technofash-stavaganza.)


  • Ran across a short-ish thread on BlueSky which caught my attention, posting it here:

    the problem with a story, essay, etc written by LLM is that i lose interest as soon as you tell me that’s how it was made. i have yet to see one that’s ‘good’ but i don’t doubt the tech will soon be advanced enough to write ‘well.’ but i’d rather see what a person thinks and how they’d phrase it

    like i don’t want to see fiction in the style of cormac mccarthy. i’d rather read cormac mccarthy. and when i run out of books by him, too bad, that’s all the cormac mccarthy books there are. things should be special and human and irreplaceable

    i feel the same way about using AI-type tech to recreate a dead person’s voice or a hologram of them or whatever. part of what’s special about that dead person is that they were mortal. you cheapen them by reviving them instead of letting their life speak for itself