- cross-posted to:
- technology@beehaw.org
- ghazi
Summary
Fable, a social media app focused on books, faced backlash for its AI-generated 2024 reading summaries containing offensive and biased commentary, like labeling a user a “diversity devotee” or urging another to “surface for the occasional white author.”
The feature, powered by OpenAI’s API, was intended to be playful and fun. However, some of the summaries took on an oddly combative tone, making inappropriate comments on users’ diversity and sexual orientation.
Fable apologized, disabled the feature, and removed other AI tools.
Critics argue the response was insufficient, highlighting broader issues of bias in generative AI and the need for better safeguards.
Woke is one of those funny things. They have effectively utilized it in a negative way, but it is more relevant today than ever. It literally is about waking up and seeing how fucked things are around us.
If these LLMs are trained on the garbage of the internet, why is everyone surprised they keep spitting out vitriol?
Garbage from the Internet for the Internet.
For gamers, by gamers.
It’s like all the other terrible ideas we wrote about in sci-fi. The jokes about a general AI finding the internet and then deciding to nuke us all have been around for decades.
Then they fucking trained the LLMs on that very data.
We will deserve our fate. At least the assholes on the web who trained that shit will.
GIGO
Every time I see a story like this, I’m always pretty sure it’s an AI that was trained on Reddit content.
Why not 4chan?
I actually had that thought as well, and while they certainly might, I think they’re aiming more for the people who add “reddit” to a Google search when looking for answers.
It’s funny that naive AI runs into the same issue as crowd-sourcing or democratic control of content. Namely, a stupid userbase creates stupid content. If it doesn’t have insight, it can’t be insightful.
This seems like it already happened before? Didn’t M$ have some bot that started parroting pro-Hitler things?
Tay? Yeah it did but that was mostly due to a 4chan ‘model poisoning’ campaign at the time.
How interesting will it be if racism is what saves us from the AI slop shovelfest that every company is currently participating in.
Microsoft cut off their chatbot because of this years ago, lol
https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
Yeah I remember that one too. 🤣
In the Trusk era, the racism will be seen as a bonus.
They keep ratcheting and tweaking the guardrails. But the problem’s still there. Probably three nines worth of the responses are working as intended, but once in a while, when you’re asking it for creative output, you get a spicy session.
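One of the safeguards vendors lean on is running generated text back through a moderation endpoint before it reaches users. A minimal sketch of that pattern, assuming the openai Python package; the chat model name and the fallback behavior are illustrative, not anything Fable has described:

```python
# Sketch: screen generated output with a moderation pass before display.
# Assumes the openai package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def safe_summary(prompt: str) -> str | None:
    # Generate the “playful” summary text.
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    text = completion.choices[0].message.content

    # Run the output back through the moderation endpoint before showing it.
    mod = client.moderations.create(input=text)
    if mod.results[0].flagged:
        return None  # suppress and fall back to a human-written template
    return text
```

The catch is that moderation endpoints are tuned for slurs and explicit content; a summary that is merely condescending about a reader’s “diversity” would likely sail straight through, which is exactly the failure mode in the article.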
I’ve been using the commercial product to write stories for me to read to my child. I roughly have it modeling the characters after Bluey but giving them all different names. I give the prompt a very concise set of demands: I set the length, tone, protagonists, antagonists, struggle, ultimate solution, sometimes a few of the things to try that fail, and the target reading level. I’ll give the antagonists traits and details. I might even set up an occasional internal fear or conceptual internal dialogue that the protagonists use to help solve the struggle.
Most of the time I get pretty much what I ask for. I’ve probably made 50 or 60 of them by now. But a few weird things come up now and then. Not infrequently, an unexpected stranger who comes along to help save the day ends up being invited to join the family, which is a little quirky but not horrible. But this one time, the reformed antagonist was immediately invited into the family to join the parental class and get married into the already existing union. When I conversationally mentioned not doing that in future stories, it got really defensive, told me that I should respect that love is love, and started sounding emotionally upset that I didn’t want recently reformed villains to become the parents of the characters in the story. I responded that those things weren’t impossible, but that there needed to be sufficient time to cover safety aspects, and it started getting darker… So I reset the session and started over.
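For the curious, that kind of prompt can be templated so the constraints stay consistent from story to story. A rough sketch, assuming the openai Python package; every field name, example value, and the model are made up for illustration:

```python
# Sketch: a templated story prompt with explicit, repeatable constraints.
from openai import OpenAI

client = OpenAI()

def story_prompt(protagonists, antagonist, struggle, failed_attempts,
                 solution, reading_level, length_words, tone):
    # Assemble the constraints into one consistent prompt string.
    return (
        f"Write a children's story of about {length_words} words "
        f"at a {reading_level} reading level, in a {tone} tone.\n"
        f"Protagonists: {', '.join(protagonists)}.\n"
        f"Antagonist: {antagonist}.\n"
        f"Central struggle: {struggle}.\n"
        f"Failed attempts before the solution: {', '.join(failed_attempts)}.\n"
        f"Resolution: {solution}.\n"
        "Do not add new family members or change the family structure."
    )

prompt = story_prompt(
    protagonists=["Pip", "Wren"],
    antagonist="a grumpy magpie",
    struggle="the magpie keeps stealing their kite",
    failed_attempts=["chasing it", "building a decoy kite"],
    solution="they trade shiny buttons to get the kite back",
    reading_level="early primary",
    length_words=400,
    tone="gentle and playful",
)
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Pinning the “no new family members” rule into the template is the obvious fix for the quirk described above, though as the rest of the thread suggests, the model can still wander off-spec.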
> Fable apologized, disabled the feature, and removed other AI tools.
> Critics argue the response was insufficient, highlighting broader issues of bias in generative AI and the need for better safeguards.
What? I doubt these “critics” exist beyond this article having to have an open-ended closer.
They got rid of it ASAP and then organized a Zoom call with all users.
At least they listened. Have heard generally positive things from people who use Fable.
I won’t say AI doesn’t have its edge-case uses, and I know people sneer at “prompt engineering” - but you really have to put as much if not more effort into the prompt as it would take to build a dumb if-else machine.
Several paragraphs explaining and contextualizing the AI’s role, then the task at hand, then how you want the output formatted, and any additional input. It should be at least ten substantial paragraphs - but even then you’ve probably not got a bunch of checks for edge cases, errors, formatting, malicious intent from the user…
It’s a less secure, less specific, less technical, higher-risk, more variable, untrustworthy “programming language” interface that inveigles and abstracts the interaction with the data and processors. It is not a person.
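To make that concrete, here is roughly what the pattern looks like in code: a long system prompt covering role, task, constraints, and output format, followed by validation of whatever comes back. A sketch only, assuming the openai Python package; the prompt text, JSON shape, and model name are all illustrative:

```python
# Sketch: the “prompt as programming language” pattern described above:
# role, task, constraints, output spec, then validation of the result.
import json

from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = """\
ROLE: You summarize a user's reading history for a books app.
TASK: Produce a short, playful year-in-review blurb.
CONSTRAINTS:
- Never comment on the user's race, gender, sexuality, or politics.
- Never address the user in a combative or judgmental tone.
OUTPUT: JSON only, exactly: {"blurb": "<one short paragraph>"}
"""

def summarize(reading_history: str) -> str | None:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": reading_history},
        ],
    )
    raw = resp.choices[0].message.content
    # The model can and does ignore the format spec, so validate everything.
    try:
        blurb = json.loads(raw)["blurb"]
    except (json.JSONDecodeError, KeyError, TypeError):
        return None  # malformed output: fall back rather than ship it
    return blurb
```

And even with all of that, the constraints are suggestions the model usually honors, not guarantees, which is the next commenter’s point.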
and the bot still tends to ignore some instructions
Reminds me of Tay AI.