Social media is being flooded with spammy AI content. Research by Indiana University details how artificial intelligence is being used to scam people on social media platforms.
🤖 I’m a bot that provides automatic summaries for articles:
Click here to see the summary
A new study shared last month by researchers at Indiana University’s Observatory on Social Media details how malicious actors are taking advantage of OpenAI’s chatbot ChatGPT, which became the fastest-growing consumer application in history this past February.
The rise of social media gave bad actors a cheap way to reach a large audience and monetize false or misleading content, said Filippo Menczer, the observatory’s director.
New AI tools “further lower the cost to generate false but credible content at scale, defeating the already weak moderation defenses of social-media platforms,” he said.
In the past few years, social-media bots — accounts that are wholly or partly controlled by software — have been routinely deployed to amplify misinformation about events, from elections to public-health crises such as COVID.
The AI bots in the network uncovered by the researchers mainly posted about fraudulent crypto and NFT campaigns and promoted suspicious websites on similar topics, which themselves were likely written with ChatGPT, the study says.
Yang said that examining a suspect account’s activity patterns, whether it has a history of spreading false claims, and how diverse its previous posts are in language and content is a more reliable way to identify bots.
Saved 78% of original text.
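The detection heuristic Yang describes, judging an account by how varied its posts are in language and content, can be sketched in a few lines. This is an illustrative toy, not the researchers’ actual method: it scores an account by the Shannon entropy of the word distribution across its posts, on the assumption that a spam bot repeating near-identical promotions will score lower than a human posting about varied topics.

```python
from collections import Counter
from math import log2

def content_diversity(posts):
    """Shannon entropy of the word distribution across an account's posts.

    Low entropy suggests repetitive, template-like output, one signal
    (among several the researchers mention) that an account may be a bot.
    """
    words = [w.lower() for post in posts for w in post.split()]
    if not words:
        return 0.0
    counts = Counter(words)
    total = len(words)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# A human-like account with varied posts scores higher than a
# spam account repeating near-identical crypto promotions.
varied = ["Great hike today", "Reading about quantum error correction",
          "My sourdough finally rose"]
spammy = ["Buy $SCAM token now!", "Buy $SCAM token now!",
          "Buy $SCAM token now!!!"]
```

In practice a single score like this is far too weak on its own; the study’s point is that it has to be combined with activity patterns and an account’s history of spreading false claims, since AI-generated text alone now imitates humans well enough to slip past content-only filters.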
Thank you good bot, please take my social media profile.
This is how they get you
It’s you! Guys, I found the bot!
Pitchforks, everyone!
Here’s a shorter summary:
Researchers found over 1,000 AI spam bots on social media using ChatGPT to promote scams, especially in cryptocurrency. These bots imitate humans, making detection harder and potentially degrading online information quality. Without regulation, malicious actors could outpace efforts to combat AI-generated content, posing a threat to the internet’s reliability.