• @theherk@lemmy.world
    48 · 6 months ago

    Using AI to notify a poster, before posting, that a post is likely to run afoul of Reddit or community guidelines actually seems like an interesting, albeit fraught, idea.

    If he weren’t so Speztic about everything, I would not feel so confident it was a nefarious plan to hurt people. But he is evil and impulsive, so… fuck Spez.

    • Neato
      22 · 6 months ago

      Yeah, an AI telling you your post will get deleted sounds like a great way to suppress specific information. Political ideas the mods/admins don’t like? Pushing back against right-wing hate? Calling out blatant advertising? I can think of lots of ways this will probably be abused to steer conversations into advertiser-friendly topics.

      • FaceDeer
        10 · 6 months ago

        I mean, that’s what moderators do anyway. It all comes down to what the rules of a sub are, and those rules are set and enforced by human moderators. I think it’ll be interesting to see how it goes with a less capricious AI in the loop.

    • @SkyNTP@lemmy.ml
      8 · 6 months ago

      You guys are twisting yourselves in knots about Spez. This isn’t the doing of one person. This is the inevitable conclusion of every business model where you aren’t the paying customer. You are cattle, kept alive just barely enough to be slaughtered and sold to the highest bidder. The advertisers get the new features. You are catered to only enough to keep you on the platform, and your comfort will be sacrificed the instant there is profit to be made.

      • @helenslunch@feddit.nl
        2 · edit-2 · 6 months ago

        This is the inevitable conclusion of every business model ~~where you aren’t the paying customer~~.

        FTFY

        Doesn’t matter if you pay, or how much; they’re all collecting your data now, because at the end of the day no one gives a shit.

    • admiralteal
      7 · 6 months ago

      That’s not really innovative, though. AutoModerator bots have been sending out warnings like this, based on simple keyword criteria, for years.

      • TimeSquirrel
        6 · 6 months ago

        Yes, to suppress swearing or offensive content, not to suppress ideas. You could still talk about a touchy subject by avoiding the filtered keywords and using substitutions.

        • admiralteal
          1 · edit-2 · 6 months ago

          It could search for all kinds of keywords to enforce rules. For example, scan titles for question phrasing to suggest a user might need to check an FAQ/wiki, or find keywords that flag probable off-topic posts. That sort of stuff.

          At the end of the day, is what the LLM bot is doing really any different? I’d say it’s more sophisticated, but the same fundamental thing.
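          The kind of keyword rules described above fit in a few lines. A purely hypothetical sketch (the patterns and warning messages are made up, not AutoModerator's real config format):

```python
import re

# Hypothetical rules of the kind described above: each pairs a pattern
# with the warning a poster would see before submitting.
RULES = [
    # Titles that read like questions -> point at the FAQ/wiki.
    (re.compile(r"^(how|what|why|can i)\b", re.IGNORECASE),
     "Looks like a question - please check the FAQ/wiki first."),
    # Keywords that usually signal an off-topic post for this sub.
    (re.compile(r"\b(crypto|nft)\b", re.IGNORECASE),
     "This looks off-topic for this community."),
]

def check_post(title):
    """Return the first warning triggered by the title, or None."""
    for pattern, warning in RULES:
        if pattern.search(title):
            return warning
    return None
```

An LLM-based check would slot into the same place as `check_post`, just with a model call instead of regexes.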

      • snooggums
        5 · 6 months ago

        Exactly.

        AI moderation is just word and phrase filtering; phrase filtering wasn’t done earlier because it is really complicated, given the vast number of possible combinations of words and contexts. It also has the same failure modes as word filtering: either it ends up overly restrictive to the point of hilarity, or it soon shows that no matter what you filter, someone will find a way around it.
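        A toy example of both failure modes with plain word filtering (the banned list is hypothetical):

```python
# Hypothetical banned word list for a naive filter.
BANNED = {"scam"}

def naive_filter(text):
    # Plain word filtering: split on whitespace, strip punctuation, compare.
    return any(word.lower().strip(".,!?") in BANNED for word in text.split())

naive_filter("This product is a scam")  # caught
naive_filter("This product is a sc4m")  # trivially evaded with a substitution
naive_filter("A scam? No way!")         # context-blind: flags a denial too
```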

        • admiralteal
          3 · edit-2 · 6 months ago

          I mean, suppose the LLM bot is actually good at avoiding false positives/misunderstandings – doesn’t that simply remove one of the biggest weaknesses of old-fashioned keyword identification? I really just see this as a natural evolution of the technology and not some new, wild thing. It’s just an incremental improvement.

          What it absolutely does NOT do is replace the need for human judgement. You’ll still need an appeals process and a person at the wheel to deal with errors and edge cases. But it’s pretty easy to imagine an LLM bot doing at least as good a job as the average volunteer Reddit/Discord mod.

          Of course, it’s kind of a moot point. Running a full LLM bot as your automoderator, parsing every comment against some custom-designed model, would be expensive. I really cannot see it happening routinely, at least not with current tech costs. Maybe in a few years the prices will have come down enough, but not right now.

          • snooggums
            3 · 6 months ago

            suppose the LLM bot is actually good at avoiding false positives/misunderstandings

            No, I don’t think I will.

    • Nougat
      2 · 6 months ago

      It’ll just be another avenue through which Huffman can put his thumb on the scales and tune the content to his own liking.