- cross-posted to:
- geekdom@kbin.social
The AI Deepfakes Problem Is Going to Get Unstoppably Worse::Deepfakes are blurring the lines of reality more than ever before, and they’re likely going to get a lot worse this year.
Me again, re:watermarks.
Frauds, liars, and even pranksters will not watermark their content, or they will simply remove the watermarks. The best you can do is get genAI services to implement one, which they already do; it's an insignificant business expense.
So you end up in a situation where most genAI content is watermarked, except precisely the content you actually want to identify. The net effect of watermarks is to make fraudulent content more credible. They make the situation worse.
My biggest worry is that we reach a situation similar to the war on drugs, where unthinking moral panic causes society to double down on harmful “solutions”. You have to think about how this could possibly be enforced and against whom.
GenAI models are about the size of a movie. That is to say, they can be torrented just as easily. Stopping people from sharing non-watermarked generators would require an unprecedented level of internet surveillance. The people caught would, IMHO, be the same kind of people caught torrenting movies: mainly kids. Fraudsters can already be prosecuted for fraud, anyway, if you catch them. A seriously enforced watermarking law would, IMHO, mainly prosecute kids and other basically harmless people (though some may be using genAI to bully and harass their peers).
Training AI models is not as expensive as one may think. The expensive parts are the custom-made training data and the research; the trial and error. Even something as massive as ChatGPT could be trained for less than $5 million, and an image generator can probably be trained for less than $100k. In light of that report of someone defrauding a company of $25 million, that's a cheap investment; maybe something you could monetize on the dark net. You'd have to crack down on the dark net in unprecedented ways.
You'd need close monitoring of anything happening in cloud computing. You'd need to require licenses for high-end GPUs.
The problem isn't that new in principle. I remember police advice to hang up and call back on a known number if a caller identified themselves as a police officer. I also remember the 1964 movie Fail Safe, a Cold War classic. A squadron of US bombers is accidentally sent on a nuclear raid against Moscow. IDK if the depiction of military practice is in any way accurate. The bombers pass the fail-safe point, after which they can no longer be recalled. In an attempt to stop them, they are radioed by the president and even their own wives. They ignore it all, as trained, because it might be a Soviet trick imitating the voices. So, IDK if bomber crews were really ever trained to expect voice imitators, but even in the early 1960s it must have seemed sufficiently credible to movie audiences.