• 69 Posts
  • 736 Comments
Joined 1 year ago
Cake day: February 2nd, 2024


  • New thread from Ed Zitron, gonna focus on just the starter:

    You want my opinion, Zitron’s on the money - once the AI bubble finally bursts, I expect a massive outpouring of schadenfreude aimed at the tech execs behind it, and at anyone who worked on or heavily used AI during the bubble.

    For AI supporters specifically, I expect a triple whammy of mockery:

    • On one front, they’re gonna be publicly mocked for believing tech billionaires’ bullshit claims about AI, and lambasted for actively assisting those billionaires’ attempts to destroy labour once and for all.

    • On another front, their past/present support for AI will be used as grounds to flip the bozo bit on them, dismissing whatever they have to say as coming from someone incapable of thinking for themselves.

    • On a third front, I expect their future art/writing will be immediately assumed to be AI slop and either dismissed as not worth looking at or mocked as soulless garbage made by someone who, quoting David Gerard, “literally cannot tell good from bad”.


  • the model was supposed to be trained solely on his own art and thus I didn’t have any ethical issues with it.

    Personally, I consider training any slop-generator model to be unethical on principle. Gen-AI is built to abuse workers for corporate gain - any use or support of it is morally equivalent to being a scab.

    Fast-forward to shortly after release and the game’s AI model has been pumping out Elsa and Superman.

    Given plagiarism machines are designed to commit plagiarism (preferably with enough plausible deniability to claim fair use), I’m not shocked.

    (Sidenote: This is just personal instinct, but I suspect fair use will be gutted as a consequence of the slop-nami.)



  • I almost feel like, now that ChatGPT is everywhere and has been billed as man’s savior, some logic should be built into these models that “detects” people trying to become friends with them and has the bot explain that it has no real thoughts and is just giving you the horseshit you want to hear. And if the user continues, it should erase its memory and restart, explaining again that it’s dumb and will tell you whatever you want to hear.

    Personally, I’d prefer deleting such models and banning them altogether. Chatbots are designed to tell people what they want to hear, and to make people become friends with them - the mental health crises we are seeing are completely by design.



  • I was originally going to ask why anyone would bother throwing their slop on Newgrounds of all sites, but given the business model here I think we can be pretty confident they were hoping to use it to advertise.

    Considering that AI bros are

    1. utterly malicious scumbags who hate anything which doesn’t let them, and them alone, make all the money ever

    2. exceedingly stupid and shameless dipshits with a complete inability to recognise or learn from mistakes

    I can absolutely see them looking at someplace like NG and thinking “hey, this place which stands for everything we want wiped off the Internet will totally accept our fucking slop”.

    (Personal sidenote: Part of me says this story would probably make a good Pivot to AI.)