- cross-posted to:
- technology@lemmy.zip
China has released a set of guidelines on labeling internet content that is generated or composed by artificial intelligence (AI) technology, which are set to take effect on Sept. 1.
It’s still a good thing. The alternative is people posting AI content as though it is real content, which is a worldwide problem destroying entire industries. All AI content should be required by law to be clearly labeled.
Then what does unlabeled AI-generated slop look like to the plain eye? A label just encourages mental laziness by serving as an “easy filter.” Slop without a label gets elevated to something that seems real, precisely because the label’s existence exploits that laziness.
Earlier you said some AI slop is clearly identifiable, but you can’t assume everyone can spot it, or that every piece is that identifiable. And for images that look a little unrealistic, just drop the resolution until they’re grainy and the telltale details disappear. That works nine times out of ten. You can’t rule out that the 0.1% of content that passes the sanity check does 99.9% of the damage.
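For what it’s worth, that kind of degradation is trivial. Here’s a minimal Python sketch using Pillow; the library choice and the file names are my assumptions, purely for illustration:

```python
# Minimal sketch of the "downscale until the tells disappear" trick.
# Pillow and the file names are assumptions for illustration only.
from PIL import Image

img = Image.open("generated.png")
w, h = img.size

# Shrink aggressively, then scale back up: the fine details that give away
# AI generation (mangled text, odd textures) get averaged into grain.
grainy = img.resize((max(1, w // 6), max(1, h // 6))).resize((w, h))
grainy.save("grainy.png")
```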
After all, humans are emotional creatures, and sensationalism is real. The urge to share something emotional is why misinformation and disinformation are so common these days. People will overlook details when the urge hits.
Sometimes labeling can do more harm than good. It just gives a false sense of security.
Just because something is theoretically circumventable doesn’t mean we shouldn’t make it as hard as possible to circumvent it.
The reason misinformation is so common these days is a concerted effort by fascists to gain control over media companies. Once they are in power and have significant influence within those companies, they can poison them, turning them into massive misinformation engines churning out content faster than we ever believed possible. This problem has existed since the rise of mass media, especially in the 19th century. But social media provides far faster and more direct throughlines for spreading misinformation to the masses.
And those masses do not care whether something is labeled as AI or not. They will believe it one way or the other. That still doesn’t change the fact that it is necessary to label AI-generated content as such. What is and isn’t made by a human is extremely important. We cannot equate algorithms with people, and it’s necessary to make that distinction as clear as possible.
The problem is that you can’t make a digital label that is hard to circumvent. It’s much like a signature: you sign something when you want to prove it genuinely came from you, but you won’t sign something that isn’t yours, and you won’t necessarily sign everything that is, especially in a digital format. A digital signature can simply be stripped out of the data. Watermarks on images can now be patched over with inpainting models. Disclaimers in text can just be deleted. The default assumption should never be “this has no AI label, so it must have been written by a human.” The label itself is a slippery slope that helps misinformation spread faster and aids in building alternate facts. Adding a label won’t help people identify content generated with ML models; it lets them defer that judgment to the mere label, because it said so, or didn’t.

Misinformation didn’t spread fast simply because fascists obtained control of media outlets. Just look at how China, Russia, and Iran run misinformation campaigns: they don’t have to control those outlets, only seed accounts that craft sensational headlines, which attract people with more reach and recognition who then spread them further. For more on misinformation and disinformation, I recommend Ryan McBeth’s videos on YouTube.
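As a concrete illustration of how fragile a metadata-based label is, here’s a minimal Python sketch using Pillow; the file names and the “ai_generated” metadata key are hypothetical stand-ins, not a reference to any real labeling standard:

```python
# Minimal sketch: a provenance label stored as image metadata disappears
# the moment someone re-encodes the pixel data. File names and the
# "ai_generated" key are hypothetical stand-ins for illustration.
from PIL import Image

labeled = Image.open("labeled.png")
print("metadata before:", labeled.info)  # might contain {'ai_generated': 'true', ...}

# Copy only the pixels into a fresh image; no metadata comes along.
stripped = Image.new(labeled.mode, labeled.size)
stripped.putdata(list(labeled.getdata()))
stripped.save("stripped.png")

print("metadata after:", Image.open("stripped.png").info)  # typically empty
```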
Yes, we need a way to identify what is and what isn’t generated by ML models, but slapping labels on ML-generated content is not the way to do it.