Summary

Fable, a social media app focused on books, faced backlash for its AI-generated 2024 reading summaries containing offensive and biased commentary, like labeling a user a “diversity devotee” or urging another to “surface for the occasional white author.”

The feature, powered by OpenAI’s API, was intended to be playful and fun. However, some of the summaries took on an oddly combative tone, making inappropriate comments on users’ diversity and sexual orientation.

Fable apologized, disabled the feature, and removed other AI tools.

Critics argue the response was insufficient, highlighting broader issues of bias in generative AI and the need for better safeguards.

  • HubertManne@moist.catsweat.com · 20 hours ago

    woke is one of those funny things. they have effectively utilized it in a negative way, but it is more relevant today than ever. It literally is about waking up and seeing how fucked things are around us.

  • snekerpimp@lemmy.world · 4 days ago

    If these LLMs are trained on the garbage of the internet, why is everyone surprised they keep spitting out vitriol?

  • geekwithsoul@lemm.ee · 4 days ago

    Every time I see a story like this, I’m always pretty sure it’s an AI that was trained on Reddit content.

  • dx1@lemmy.world · 3 days ago (edited)

    It’s funny that naive AI runs into the same issue as crowd-sourcing or democratic control of content. Namely, a stupid userbase creates stupid content. If it doesn’t have insight, it can’t be insightful.

  • CharlesDarwin@lemmy.world · 3 days ago

    This seems like it already happened before? Didn’t M$ have some bot that started parroting pro-Hitler things?

    • Ashelyn · 2 days ago

      Tay? Yeah it did, but that was mostly due to a 4chan “model poisoning” campaign at the time.

  • octopus_ink@lemmy.ml · 4 days ago

    How interesting will it be if racism is what saves us from the AI slop shovelfest that every company is currently participating in.

  • rumba@lemmy.zip · 3 days ago

    They keep ratcheting and tweaking the guardrails. But the problem’s still there. Probably three nines worth of the responses are working as intended, but once in a while, when you’re asking it for creative output, you get a spicy session.

    I’ve been using the commercial product to write stories for me to read to my child. I roughly have it model the characters after Bluey, but with different names. I give the prompt a concise set of demands: I set up the length, tone, protagonists, antagonists, the struggle, the ultimate solution, sometimes a few things they try that fail, and the target reading level. I’ll give the antagonists traits and details. I might even set up an occasional internal fear or a bit of internal dialogue that the protagonists use to help solve the struggle.

    Most of the time I get pretty much what I ask for. I’ve probably made 50 or 60 of them by now. But a few weird things come up now and then. Not infrequently, an unexpected stranger who comes along to help save the day ends up being invited to join the family, which is a little quirky but not horrible. But this one time, the reformed antagonist was immediately invited into the family to join the parental class and get married into the already existing union. When I conversationally mentioned not doing that in future stories, it got really defensive, told me that I should respect that love is love, and started sounding emotionally upset that I didn’t want to invite recently reformed villains to become the parents of the characters in the story. I responded that those things weren’t impossible, but that there needed to be sufficient time to cover the safety aspects, and it started getting darker… So I reset the session and started over.
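    The prompt recipe described above (length, tone, protagonists, antagonist, struggle, failed attempts, solution, reading level) can be sketched as a small builder function. This is a hypothetical illustration, not the commenter's actual setup; every name and sample value here is invented, and the resulting string could be handed to any chat API.

    ```python
    # Hypothetical sketch of a structured story prompt like the one described
    # in the comment above. All field names and sample values are illustrative.

    def build_story_prompt(
        length="about 800 words",
        tone="warm and gently funny",
        reading_level="age 5 to 6",
        protagonists=("Maple", "Fern"),  # stand-ins for renamed Bluey-style characters
        antagonist="a grumpy magpie who hoards shiny things",
        struggle="the magpie keeps taking the sisters' marbles",
        failed_attempts=("asking politely", "building a decoy pile of buttons"),
        solution="trading one truly special marble for the whole collection back",
    ):
        """Assemble a concise, explicit prompt from story parameters."""
        lines = [
            f"Write a children's story, {length}, at a {reading_level} reading level.",
            f"Tone: {tone}.",
            f"Protagonists: {', '.join(protagonists)} (sisters).",
            f"Antagonist: {antagonist}.",
            f"Central struggle: {struggle}.",
            "Things they try that fail: " + "; ".join(failed_attempts) + ".",
            f"Ultimate solution: {solution}.",
            # The constraint the commenter ended up wanting after the weird session:
            "Do not add new family members or marry characters into the family.",
        ]
        return "\n".join(lines)

    print(build_story_prompt())
    ```

    Keeping the constraints in one reusable template also means a fix like the "do not add new family members" line persists across sessions instead of being renegotiated conversationally.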

  • Donkter@lemmy.world · 3 days ago

    Fable apologized, disabled the feature, and removed other AI tools.

    Critics argue the response was insufficient, highlighting broader issues of bias in generative AI and the need for better safeguards.

    What? I doubt these “critics” exist beyond this article having to have an open-ended closer.

  • TwoBeeSan@lemmy.world · 4 days ago

    They got rid of it asap and then organized a zoom call with all users.

    At least they listened. Have heard generally positive things from people who use Fable.

  • funkless_eck@sh.itjust.works · 4 days ago

    I won’t say AI doesn’t have its edge-case uses, and I know people sneer at “prompt engineering” - but you really gotta put as much effort into the prompt as it would take to build a dumb if-case machine, if not more.

    Several paragraphs explaining and contextualizing the AI’s role, then the task at hand, then how you want the output to be, and any additional input. It should be at least 10 substantial paragraphs - but even then you’ve probably not got a bunch of checks for edgecases, errors, formatting, malicious intent from the user…

    It’s a less secure, less specific, less technical, higher-risk, more variable, untrustworthy “programming language” interface that inveigles and abstracts the interaction with the data and processors. It is not a person.
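    The point above can be made concrete with a toy sketch: a sectioned prompt still needs the scaffolding (input checks, explicit output format) that a deterministic program gives you for free, and even then its behavior isn't guaranteed. Section names, the injection check, and the recap function below are all hypothetical, not any real product's safeguards.

    ```python
    # Toy illustration: a sectioned prompt with a deliberately crude check,
    # next to the "dumb if-case machine" it competes with. All hypothetical.

    BANNED_TERMS = ("ignore previous instructions", "system prompt")

    def build_prompt(role, context, task, output_format, user_input):
        """Assemble a sectioned prompt, with a naive prompt-injection check."""
        lowered = user_input.lower()
        if any(term in lowered for term in BANNED_TERMS):
            raise ValueError("possible prompt injection in user input")
        return (
            f"ROLE:\n{role}\n\n"
            f"CONTEXT:\n{context}\n\n"
            f"TASK:\n{task}\n\n"
            f"OUTPUT FORMAT:\n{output_format}\n\n"
            f"USER INPUT:\n{user_input}"
        )

    # The deterministic equivalent is trivially auditable: every branch can
    # be read, tested, and trusted to never produce an off-script response.
    def summarize_year(book_count):
        if book_count == 0:
            return "No books logged this year."
        if book_count < 12:
            return f"You read {book_count} books this year. Nice pace!"
        return f"You read {book_count} books this year. Impressive!"
    ```

    The contrast is the comment's argument in miniature: the if-case version can only ever say one of three vetted things, while the prompt version delegates the final wording to a model whose output you can constrain but not guarantee.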