To be fair, though, this experiment was stupid as all fuck. It was run on /r/changemyview to see if users would recognize that the comments were created by bots. The study’s authors conclude that the users didn’t recognize this. [EDIT: To clarify, the study was seeing if it could persuade the OP, but they did this in a subreddit where you aren’t allowed to call out AI. If an LLM bot gets called out as such, its persuasiveness inherently falls off a cliff.]
Except, you know, Rule 3 of commenting in that subreddit is: “Refrain from accusing OP or anyone else of being unwilling to change their view, of using ChatGPT or other AI to generate text, [emphasis not even mine] or of arguing in bad faith.”
It’s like creating a poll to find out if women in Afghanistan are okay with having their rights taken away but making sure participants have to fill it out under the supervision of Hibatullah Akhundzada. “Obviously these are all brainwashed sheep who love the regime”, happily concludes the dumbest pollster in history.
Wow. That’s really fucking stupid.
I don’t think so. Yeah, the researchers broke the rules of the subreddit, but it’s not like every other company that uses AI for advertising, promotional purposes, propaganda, and misinformation will adhere to those rules.
The mods and community should not assume that just because the rules say no AI, people won’t use it for nefarious purposes. While this study doesn’t really add anything new we didn’t already know or assume, it does highlight how we should be vigilant and cautious about what we see on the Internet.
Reread the rule @TheTechnician27@lemmy.world listed; it’s not a rule against posting AI, it’s a rule against accusing people of posting AI, the very thing they were trying to prompt people to do.
So, if nobody accuses them, is it because nobody noticed, or is it because nobody wanted to break the no-accusing rule? It’s impossible to tell, which makes the results of the study worthless.
And even if they did accuse, mods would have removed the comments.
Ah well, it would appear I should have read the original comment with a little more attention.
It’s like creating a poll to find out if women in Afghanistan are okay with having their rights taken away but making sure participants have to fill it out under the supervision of Hibatullah Akhundzada. “Obviously these are all brainwashed sheep who love the regime”, happily concludes the dumbest pollster in history.
I don’t particularly like this analogy, because /r/changemyview isn’t operating in a country where an occupying army was bombing weddings a few years earlier.
But this goes back to the problem at hand. People have their priors (my bots are so sick nasty that nobody can detect them / my liberal government was so woke and cool that nobody could possibly fail to love it) and then build their biases up around them like armor (any coordinated effort to expose my bots is cheating! / anyone who prefers the new government must be brainwashed!).
And the Bayesian reasoning model fixates on the notion that there is only ever a discrete, predefined set of choices and a uniform set of biases for the participant to navigate. No real room for nuance or relativism.
Manipulating users with AI bots to research what, exactly?
Researching what!!!
Deleted by moderator because you upvoted a Luigi meme a decade ago
…don’t mind me, just trying to make the reddit experience complete for you…
That’s funny. I had several of my Luigi posts and comments removed – on Lemmy. Let’s see if it still holds true.
.world is known (largely due to the Luigi Mangione stuff) to have moderation that’s a bit more heavy-handed and closer to the sort of “corporate Internet”.
No real hate for them and they’ve indicated in the past that some of their actions are just to comply with their local laws. But if you’re looking for an older internet experience you’ll wanna move to a different instance.
That’s why I left .world in December. I get why they did it, but it just showed I don’t want to be in the most popular instance since they’re always going to be the first one targeted and are more censorship happy as a result.
Well then, as lemmy’s self-designated High Corvid of Progressivity, I extend to you the traditional Fediversal blessing of:
Remember, kids:
A place in heaven is reserved for those who speak truth to power
Lemmy is a collection of different instances with different administrators, moderators, and rules.
This was Lemmy.world that did it.
Last I knew, any meme containing the word “Luigi” was blocked.
Then move your ass over to a different instance. That’s the entire point of Lemmy.
Last I heard, Lemmy.ml and Lemmy.world are the most toxic, Reddit-like instances, so it might be perfectly in line with their usual way of ruling.
.world may be Reddit-like and toxic, which may be due to its high number of users.
However, lemmy.ml is nothing like Reddit. Nor is it toxic, unless you diss communism.
Err, yeah, I get the meme and it’s quite true in its own way…
BUT… this research team REALLY needs an ethics committee. A heavy-handed one.
You dare suggest that corporations are anything but our nearest and dearest friends? They’d never sell us out. Never!
That story is crazy and very believable. I 100% believe that AI bots are out there astroturfing opinions on reddit and elsewhere.
I’m unsure if that’s better or worse than real people doing it, as has been the case for a while.
Belief doesn’t even have to factor; it’s a plain-as-day truth. The sooner we collectively accept this fact, the sooner we change this shit for the better. Get on board, citizen. It’s better over here.
I worry that it’s only better here right now because we’re small and not a target. The worst we seem to get are the occasional spam bots. How are we realistically going to identify LLMs that have been trained on reddit data?
Honestly? I’m no expert and have no actionable ideas in that direction, but I certainly hope we’re able to work together as a species to overcome the unchecked greed of a few parasites at the top. #LuigiDidNothingWrong
What is likely happening is that bots are manipulating bots.
$0.50 says that the “reveal” was part of the study protocol. I.e. “how people react to being knowingly vs. unknowingly manipulated”.
Seems dangerous; it’s a breach of the ToS, I assume, so they’re opening themselves up to possible liability if Reddit got pissy. I’m actually surprised this kind of research gets IRB and other approval, given that you’re violating the ToS unless granted a variance from it (I used to conduct research on social networks and had to get preapproved accounts for the purpose, and the data I was given was carefully limited).
So they banned the people who successfully registered a bunch of AI bots and had them fly under the mods’ radar. I’m sure they’re devastated and will never be able to get on the site again…
After all, it’s all about con$$ent, eh?
Insert same picture meme
*on a sub that explicitly bans you for pointing it out
Very good research.