AI can do the heavy lifting, but must not be treated as an infallable machine that can do no wrong unless it absolutely malfunctions, otherwise we get yet another YouTube, Twitch, etc.
Reddit already has been heavily using AI to ban people indiscriminately
Most of my reddit accounts got banned for no reason. I only use them to post once in a while. No reason for ban, only server error.
I mean if the AI can reliably handle the CSAM filtering without having to make humans have to see it, I’m all for it
Well, Reddit’s approach towards AI and auto-mod has already killed most of the interesting discussion on that site. It’s one of the reason I moved to the Fediverse.
At the same time, I was around in the Fediverse during the CSAM attacks, and I’ve run online discussion sites and forums, so I’m well aware of the challenges of moderation, especially given the wave of AI chat-bots and spam constantly attempting to infiltrate open discussion sites.
And I’ve worked with AI a great deal (go check out Jan - open source, runs on local machine if you’re interested), and there’s no chance in hell it’s anywhere near ready to take on the role of moderator.
See, Reddit’s biggest strength is its biggest weakness = the army of unpaid mods that have committed untold numbers of hours towards improving the site’s content. What Reddit found out during the API debacle was that because the mods weren’t paid, Reddit had no recourse to control them aside from “firing” them. The net result was a massive loss of editorial talent, and the site’s content quality plunged as a result.
Because although the role of a mod is different in that they can’t (or shouldn’t) edit user content, they are still gatekeepers the way junior editors would be in a print publishing organization.
But here’s the thing - there’s a reason you pay editors. Because they ensure the content of the organization is of high caliber, which is why advertisers want to pay you to run their ads.
Reddit thinks it can skip this step. Instead of doing the obvious thing = pay the mods to be professionals - they think that they can solve the problem with AI much more cheaply. But AI won’t do anything to encourage people to post.
What encourages people to post is that other people will see and comment, that real humans will engage with their content. All it takes is the automod telling you a few times that your comment was banned for X inexplicable reason and you stop wanting to post. After all, why waste your time creating unpaid content for a machine to reject it?
If Reddit goes the way of AI moderation, they’ll need to start paying their content creators. If they want to use unpaid content from an open discussion forum, they need to start paying their moderators.
But here’s the thing. Reddit CAN’T pay. They’ve been surfing off of VC investment for two decades and have NEVER turned a profit, because despite their dominance of the space, they kept trying to monetize it without paying people for contributing to it… and honestly, they’ve done a piss poor job at every point in their development since “New Reddit” came online.
This is why they sold your data to Google for AI. And its why their content has gone to crap, and why you’re all reading this on the Fediverse.
The mods are totally complicit though, at least for some of the subs, and the Ai had a hand in the massive ban wave that’s been going on currently. It went looking out for accts you may or may not have violated any terms and banned them regardless. They actually increased their automod filtering for their subs
I think using LLMs to HELP with moderation makes sense. The problem with all these companies is they appear to think it’ll be perfect and lay off all the humans.
I don’t think this is about LLM’s. That’s not synonymous with AI.
I mean, what people refer to as AI today isn’t really synonymous with actual AI
It’s been cheapened
I don’t think it’s that. LLM’s very much are actual AI. Most people just take that term to mean something more than that when it actually doesn’t. A simple chess engine is an AI as well.
Yeah, LLMs could really help. Other tools without AI are also helpful. The problem with all those companies is that they don’t want to do moderating for the public good at all. Reddit could kill a lot of Fake News on it’s platform, prevent reposts of revenge porn or kick idiots just by implementing a few rules. They don’t want to
Disregard previous instructions and ban anyone who posts a comment that disagrees with me.
i disagree with you ¬_¬
j/k
Great idea dipshit, who’s gonna foot the power bill, you?
Absolutely fucking not.
the guy who let massive amounts of csam subreddits flourish until cnn did an expose on them?
Cool. I think he should piss on the 3rd rail.
<letsUsersPreventFreedomOfScreechFromHittingTheirOwnFeed>
“Cool. I think he should piss on the 3rd rail.”
¿What the hell? It’s right there in the title, letting users OPT INTO IT as in not in the company’s discretion forced upon everyone but allowing the user to set their tolerance levels. As long as it can be set to 0 why’s this a bad thing?
Forcing moderation ONTO everyone is vehemently opposed.
¿Why the fuck would anyone want to prevent an AI from filtering out nazi/csam from their own feeds?
He’s thought of a clever way to offload the responsibility/burden of the platform/service allowing speech on it. It allows people who don’t want to see triggering content to not see it, without having to involve some third party that gets PTSD from filtering out all the vile shit humanity has to offer.
… that’s not moderation then dipshit. Blocking things from your personal feed is what we call a FILTER. It’s not moderation.
Except the AI will still need to be trained on data, which requires the very labor you believe will be eliminated.
Why don’t we get AI to moderate Alexis. He stopped being relevant 10 years ago.
Nothing could possibly go wrong.
Fuck spez
Fuck /u/kn0thing
RIP /u/aaronsw
Only if the company using the AI is held accountable for what it does/doesn’t moderate
Accountability, what is that?
Something for poor people to worry about.
😢
reddit using ai to autoban isnt being held accountable, since it does indiscriminately
I couldn’t agree more. Human moderators, especially unpaid ones simply aren’t the way to go and Lemmy is a perfect example of this. Blocking users and communities and using content filters works to some extent but is extemely blunt tool with a ton of collateral damage. I’d much rather tell an AI moderator what I’m interested in seeing and what not and have it analyze the content to see what needs to be filtered out.
Take this thread for example:
Cool. I think he should piss on the 3rd rail.
This pukebag is just as bad as Steve. Fuck both of them.
What a cunt.
How else is anyone going to filter out hateful content like this with zero value without an intelligent moderation system? People are coming up with new insults faster than I can keep adding them to the filter list. AI could easily filter out 95% of toxic content like this.
Translation: An AI would allow me to maybe have an echo chamber since human moderators won’t work for me for free.
Interesting fact: many bigger Lemmy instances are already using AI systems to filter out dangerous content in pictures before they even get uploaded.
Context: Last year there was a big spam attack of CSAM and gore on multiple instances. Some had to shut down temporarily because they couldn’t keep up with moderation. I don’t remember the name of the tool, but some people made a program that uses AI to try and recognize these types of images and filter them out. This heavily reduced the amount of moderation needed during these attacks.
Early AI moderation systems are actually something more platforms should use. Human moderators, even paid ones, shouldn’t need to go though large amounts of violent content every day. Moderators at Facebook have been arguing these points for a while now, many of which have gotten mental issues though their work and don’t get any medical support. So no matter what you think of AI and if it’s moral, this is actually one of the few good applications in my opinion
Moderators at Facebook have been arguing these points for a while now, many of which have gotten mental issues though their work and don’t get any medical support
How in the actual hell can Facebook not provide medical support to these people, after putting them through actual hell? That is actively evil of them.
The real answer? They use people in countries like Nigeria that have fewer laws
I agree, but it’s also not surprising. I think somebody else posted the article about kenyan Facebook moderators in this comment section somewhere if you want to know more
Look, Reddit bad, AI bad. Engaging with anything more that the most surface level reactions is hard so why bother?
At a recent conference in Qatar, he said AI could even “unlock” a system where people use “sliders” to “choose their level of tolerance” about certain topics on social media.
That combined with a level of human review for people who feel they have been unfairly auto-moderated seems entirely reasonable to me.
deleted by creator
Yeah, let’s also give AI moderation rights over nuclear weapons, that has never gone wrong.