Google is planning to roll out a technology that will identify whether a photo was taken with a camera, edited by software like Photoshop, or produced by generative AI models.
So they are going to use AI to detect AI. That should not present any problems.
You may be able to prove that a photo with certain metadata was taken by a camera (my understanding is that that’s the method), but you can’t prove that a photo without it wasn’t, because older cameras won’t have the necessary support, and wiping metadata is trivial anyway. So is it better to have more false negatives than false positives? Maybe. My suspicion is that it won’t make much difference to most people.
A fair few sites will also wipe image/EXIF metadata for safety reasons, since photo metadata can include things like the location where the photo was taken.
Even if you assume the images you care about have this metadata, all it takes is a hacked camera (which could be as simple as carefully taking a photo of your AI-generated image) to fake authenticity.
And the vast majority of images you see online are heavily compressed, so you're not getting the 6 MB+ digitally signed originals anyway.
You don’t even need a hacked camera to edit the metadata; you just need exiftool.
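For example, faking or stripping tags with exiftool is a one-liner per tag. Here's a rough sketch driving it from Python; the file name and tag values are made up, but the exiftool flags are real:

```python
# Rough illustration: rewriting or wiping EXIF data with exiftool.
# File name and tag values are invented for the example.
import subprocess

# Rewrite the camera make/model and capture time on an arbitrary image
subprocess.run([
    "exiftool",
    "-overwrite_original",
    "-Make=Canon",
    "-Model=EOS R5",
    "-DateTimeOriginal=2024:06:01 12:00:00",
    "photo.jpg",
], check=True)

# Or strip every metadata tag in one go
subprocess.run(["exiftool", "-overwrite_original", "-all=", "photo.jpg"], check=True)
```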
It’s not that simple. It’s not just a “this is or isn’t AI” boolean in the metadata. Hash the image, then sign the hash with a digital signature key. The signature will be invalid if the image has been tampered with, and you can’t make a new signature without the signing key.
Once the image is signed, you can’t tamper with it and get away with it.
The vulnerability is: how do you ensure an image isn’t faked before it reaches the signing step? On some level, I think this is a fundamentally unsolvable problem. But there may be ways to make it practically impossible to fake, at least for the average user without highly advanced resources.
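For anyone who hasn’t seen the hash-then-sign pattern, here’s a minimal sketch using SHA-256 and Ed25519 via Python’s cryptography package. The file name and key handling are simplified stand-ins; a real provenance scheme like C2PA embeds signed manifests and is considerably more involved.

```python
# Minimal hash-then-sign sketch. This is a toy, not how C2PA actually packages things.
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

signing_key = ed25519.Ed25519PrivateKey.generate()   # would live inside the camera
verify_key = signing_key.public_key()                # published for verification

image_bytes = open("photo.jpg", "rb").read()
digest = hashlib.sha256(image_bytes).digest()
signature = signing_key.sign(digest)                 # shipped alongside the image

# Anyone can later check the (image, signature) pair:
try:
    current_digest = hashlib.sha256(open("photo.jpg", "rb").read()).digest()
    verify_key.verify(signature, current_digest)
    print("signature valid: bytes unchanged since signing")
except InvalidSignature:
    print("signature invalid: image modified or signature forged")
```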
Cameras don’t cryptographically sign the images they take. Even if that were added, there are billions of cameras in use that don’t support signing the images. Also, any sort of editing, resizing, or re-encoding would make that signature invalid. Almost no one is going to post pictures to the web without any sort of editing. Embedding 10+ MB images in a web page is not practical.
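To illustrate why even a harmless re-encode kills a byte-level signature: re-saving the file produces different bytes, so the signed hash no longer matches. A quick sketch (file names are made up):

```python
# Re-saving a JPEG changes its bytes, so any hash-based signature over the
# original file no longer verifies. Purely illustrative.
import hashlib
from PIL import Image

original_hash = hashlib.sha256(open("signed.jpg", "rb").read()).hexdigest()

Image.open("signed.jpg").save("reencoded.jpg", quality=90)   # no visible edit
reencoded_hash = hashlib.sha256(open("reencoded.jpg", "rb").read()).hexdigest()

print(original_hash == reencoded_hash)   # almost certainly False
```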
We aren’t talking about current cameras. We are talking about the proposed plan to make cameras that do cryptographically sign the images they take.
Here’s the link from the start of the thread:
This system is specifically mentioned in the original post: https://www.seroundtable.com/google-search-image-labels-ai-edited-38082.html when they say “C2PA”.
Lol, knowing the post-processing done on your iPhone, this whole thing sounds like an actual joke. Does no one remember the fake moon incident? Your photos have been AI-generated for years and no one noticed; no algorithm on earth could tell the difference between a phone photo and an AI photo because they are the same thing.
Are you saying the moon landing was faked or did I miss something?
You absolutely missed everything; the moon is literally fake. When you take a picture of the moon, your camera uses AI photo manipulation to replace your garbage picture with a completely AI-generated image, because taking pictures of the moon is actually pretty difficult. It makes pictures look much better, and in 99% of cases that’s an improvement, but in edge cases, like trying to capture something flying in front of the moon such as the ISS or a cloud, it is not. It may also cause issues if you try to introduce your photos in court, because everything you take is inherently doctored.
Huh. I thought that was just based on promo “Space zoom” photos from Samsung and it never made it into the wild.
This was definitely just Samsung’s thing, but I had thought it made it out into the wild. Not 100% sure.
All the phone image post-processing was literally what drove me to buy a digital full-frame mirrorless camera. I know the raw photos coming off that are completely unedited, and I can choose to do any color correction or whatever myself. My previous Samsung phone always seemed to output smeary garbage when taking photos in the forest.
It’s of course troubling that AI images will slip through this service unidentified (I am also not at all confident that Google can do this well or consistently).
However, I’m also worried about the opposite side of this problem: real images being mislabeled as AI. I can see a lot of bad actors using that to discredit legitimate news sources or stories that don’t fit their narrative.
I watched a video on methods for detecting AI generation in images. One of the methods was comparing the noise on different color channels: cameras produce different noise in different channels, while AI images don’t. There’s also stuff like JPEG compression artifacts showing up in other image formats.
So there are technical solutions to it but I wouldn’t know how to automate them.
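Here’s a rough sketch of the per-channel noise idea, if anyone wants to play with it. The blur radius and the interpretation are arbitrary choices on my part; this is just the general shape of the comparison, nowhere near a real detector.

```python
# Estimate the high-frequency residual in each colour channel and compare spreads.
# Parameters are illustrative guesses, not a tested detector.
import numpy as np
from PIL import Image, ImageFilter

def channel_noise_stats(path):
    img = Image.open(path).convert("RGB")
    blurred = img.filter(ImageFilter.GaussianBlur(radius=2))
    residual = np.asarray(img, dtype=np.float32) - np.asarray(blurred, dtype=np.float32)
    # Standard deviation of the residual per channel (R, G, B)
    return residual.reshape(-1, 3).std(axis=0)

stats = channel_noise_stats("photo.jpg")
print("per-channel noise (R, G, B):", stats)
# Real sensors tend to show unequal noise across channels (Bayer pattern, gain);
# a suspiciously uniform residual *might* hint at a synthetic origin, but it's
# far from conclusive on its own.
```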
Those would be easy things to add, if you were trying to pass it off as real.
Take a high-quality AI image, add some noise, blur, and compress it a few times.
Or, even better, print it and take a picture of the printout, making sure your photo of the photo is blurry enough to hide the details that would give it away.
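A crude sketch of that “noise, blur, recompress” step, just to show how little effort it takes; the file names, noise level, and quality settings are arbitrary:

```python
# Take a clean AI image, blur it slightly, add sensor-like noise, and
# recompress it a couple of times. Purely illustrative parameters.
import numpy as np
from PIL import Image, ImageFilter

img = Image.open("ai_image.png").convert("RGB")
img = img.filter(ImageFilter.GaussianBlur(radius=1))      # slight blur
arr = np.asarray(img, dtype=np.float32)
arr += np.random.normal(0, 3, arr.shape)                  # mild Gaussian noise
img = Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

for quality in (85, 75):                                  # recompress a few times
    img.save("laundered.jpg", quality=quality)
    img = Image.open("laundered.jpg")
```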
Not sure how to feel about this, but if they are honest about the labels and accurate 100% of the time with the labeling, it’s a nice feature for independent fact-checkers.