A judge in Washington state has blocked video evidence that’s been “AI-enhanced” from being submitted in a triple murder trial. And that’s a good thing, given the fact that too many people seem to think applying an AI filter can give them access to secret visual data.

  • TheBest@midwest.social · 22 points · edited · 7 months ago

    This actually opens an interesting debate.

    Every photo you take with your phone is post-processed. Saturation can be boosted, light levels adjusted, noise removed, night mode applied, all without you being privy to what’s happening.
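    The kind of invisible adjustment described above can be sketched in a few lines. This is a toy illustration, not any real phone pipeline: the boost factor and the luminance proxy are made-up choices for demonstration.

    ```python
    import numpy as np

    def boost_saturation(rgb, factor=1.4):
        """Push each pixel away from its own gray value to exaggerate color.

        rgb: float array in [0, 1], shape (H, W, 3).
        The factor of 1.4 is an arbitrary illustration, not what any
        actual phone post-processing uses.
        """
        gray = rgb.mean(axis=-1, keepdims=True)   # crude per-pixel luminance proxy
        boosted = gray + (rgb - gray) * factor    # amplify deviation from gray
        return np.clip(boosted, 0.0, 1.0)         # keep values displayable

    # A muted reddish pixel comes out visibly redder; a pure gray pixel is unchanged.
    print(boost_saturation(np.array([[[0.6, 0.5, 0.5]]])))
    print(boost_saturation(np.array([[[0.5, 0.5, 0.5]]])))
    ```

    The point is that the output pixel values are no longer what the sensor recorded, even though nothing "generative" happened.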

    Typically people are okay with it because it makes for a better photo - but is it a true representation of the reality it tried to capture? Where do we draw the line on what counts as an AI-enhanced photo/video?

    We can currently make the judgement call that a phone’s camera is still a fair representation of the truth, but what about when the 4K AI-Powered Night Sight Camera does the same?

    My post is only tangentially related to the original article, but I’m still curious what the common consensus is.

    • GamingChairModel@lemmy.world · 13 points · 7 months ago

      Every photo you take with your phone is post processed.

      Years ago, I remember looking at satellite photos of some city, and there was a rainbow-colored airplane trail on one of the photos. It was explained that a lot of satellites just use a black-and-white imaging sensor and take 3 photos while rotating a red/green/blue filter over that sensor, then combine the images digitally into RGB data for a color image. For most things, the process worked pretty seamlessly. But for rapidly moving objects, like white airplanes, the delay between the captures of the red, green, and blue channels created artifacts in the image that weren’t present in the actual scene being recorded. Is that specific satellite method all that different from how modern camera sensors process color, through tiny physical RGB filters over specific subpixels?
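      A toy simulation of that filter-wheel effect (the frame size and the one-pixel-per-exposure motion are invented for illustration): a white object photographed through red, then green, then blue filters, while it keeps moving, recombines into a rainbow trail.

      ```python
      import numpy as np

      def capture_channel(position, width=12):
          """One monochrome exposure of a bright object at integer `position`."""
          frame = np.zeros(width)
          frame[position] = 1.0
          return frame

      # Filter wheel: red, then green, then blue exposures, while the
      # (truly white) object moves one pixel between each exposure.
      r = capture_channel(3)
      g = capture_channel(4)
      b = capture_channel(5)
      color = np.stack([r, g, b], axis=-1)  # combined "color" image, shape (12, 3)

      # A white object should have R=G=B everywhere; instead we get one
      # red pixel, one green pixel, and one blue pixel -- the rainbow trail.
      for x in range(12):
          if color[x].any():
              print(x, color[x])
      ```

      Nothing in that output is "fake" in a generative sense, yet no pixel shows the white object that was actually there.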

      Even with conventional photography, even analog film, there are image artifacts that derive from how the photo is taken rather than from what is true of the subject of the photograph. Bokeh/depth of field, motion blur, rolling shutter, and physical filters all change the resulting image in ways caused by the camera, not the appearance of the subject. Sometimes that makes for interesting artistic effects. But it isn’t truth in itself; it’s evidence of some truth, which needs to be filtered through an understanding of how the image was captured.

      Like the Mitch Hedberg joke:

      I think Bigfoot is blurry, that’s the problem. It’s not the photographer’s fault. Bigfoot is blurry, and that’s extra scary to me.

      So yeah, at a certain point, for evidentiary proof in court, someone will need to prove some kind of chain of custody showing that the image being shown in court is derived from some reliable and truthful method of capturing what actually happened in a particular time and place. For the most part, it’s simple today: I took a picture with a normal camera, and I can testify that it came out of the camera like this, without any further editing. As the chain of image creation starts to include more processing between the photons hitting the sensor and the digital file being displayed on a screen or printed onto paper, we’ll need to remain mindful of the places where that can be tripped up.

      • NoRodent@lemmy.world · 2 points · 7 months ago

        The crazy part is that your brain is doing similar processing all the time too. Ever heard of the blind spot? Your brain has literally zero data there but uses “content-aware fill” to hide it from you. Or the fact that your eyes are constantly scanning across objects and your brain is merging them into a panorama on the fly, because only a small part of your field of vision has high enough fidelity. It will also create fake “frames” (look up the stopped-clock illusion) for the time your eyes are moving, where you should see a blur instead. There’s more stuff like this, a lot of it manifesting in various optical illusions. So not even our own eyes capture the “truth”. And then of course there’s the (in)accuracy of memory when trying to recall what we’ve seen, which is an entirely different can of worms.

      • TheBest@midwest.social · 2 points · 7 months ago

        Fantastic expansion of my thought. This is something that isn’t going to be answered with an exact scientific value but will have to be decided based on our human experiences with the tech. Interesting times ahead.

    • ricecake@sh.itjust.works · 4 points · 7 months ago

      Computational photography in general gets tricky because it relies on your answer to the question “Is a photograph supposed to reflect reality, or should it reflect human perception?”

      We like to think those are the same, but they’re not. Your brain only has a loose interest in reality and is much more focused on utility. Deleting the irrelevant, making important things literally bigger, enhancing contrast and color to make details stand out more.
      You “see” a reconstruction of reality continuously updated by your eyes, which work fundamentally differently than a camera.

      Applying different exposure settings to different parts of an image, or reconstructing a video scene based on optical data captured over the entire video, doesn’t capture what the sensor captured, but it can come much closer to representing what the human holding the camera perceived.
      Low-light photography is a great illustration of this: when we see a person walk from light to dark, our brains will shamelessly remember what color their shirt was and that grass is green, and update our perception accordingly, as well as using a much longer “exposure” time to gather more light data and maintain color perception in low-light conditions, even though we might not have enough actual light to make those determinations without clues.
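      One computational trick behind that longer “exposure” can be sketched numerically. This is a simplified model, not any specific phone’s night mode: the noise level, frame count, and seed are all invented, and the “shirt color” is just a three-number RGB triple.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      true_color = np.array([0.2, 0.6, 0.2])  # a "green shirt" in dim light

      # Each short exposure is the true signal buried in sensor noise.
      frames = true_color + rng.normal(0.0, 0.3, size=(64, 3))

      single = frames[0]             # one frame: color barely distinguishable
      stacked = frames.mean(axis=0)  # 64 frames averaged: noise std shrinks ~8x (sqrt(64))

      print("one frame:", single.round(2))
      print("stacked:  ", stacked.round(2))
      ```

      Averaging many noisy captures recovers a color that no single short exposure reliably contains, which is roughly what night modes trade motion risk for.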

      I think most people want a snapshot of what they perceived at the moment.
      I like the trend of the camera storing the processed image alongside the “plain” one. There’s also capturing the raw image data, which is basically a dump of the camera’s optical sensor readings. It’s what the automatic post-processing is tweaking, and what human photographers use to correct light balance and the like.
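      The light-balance correction mentioned above is, at its simplest, just per-channel scaling of that raw data. A minimal sketch, assuming linear sensor values and a known neutral reference patch (the numbers are made up):

      ```python
      import numpy as np

      def white_balance(raw_rgb, neutral_patch):
          """Scale each channel so a known-neutral patch comes out gray.

          raw_rgb: linear sensor values, shape (H, W, 3).
          neutral_patch: the (r, g, b) the sensor recorded for something
          that should be neutral gray, e.g. sampled from a white card.
          """
          gains = neutral_patch.mean() / neutral_patch  # per-channel correction
          return raw_rgb * gains

      # Under a warm light the sensor sees a white card as reddish;
      # after correction the card itself comes out neutral.
      card = np.array([0.8, 0.6, 0.4])
      corrected = white_balance(card.reshape(1, 1, 3), card)
      print(corrected)
      ```

      Real raw converters do considerably more (demosaicing, tone curves), but this is the kind of deterministic, physics-grounded adjustment photographers apply to raw files.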

      • Natanael@slrpnk.net · 1 point · 7 months ago

        There are different types of computational photography. The ones that ensure enough sensor data is captured to interpolate in a way that accurately simulates a different camera/lighting setup are, in a sense, “more realistic” than the ones that rely heavily on complex algorithms to do stuff like deblurring. My point is essentially that the calculations have to be grounded in physics rather than in just trying to produce something artistic.

      • TheBest@midwest.social · 1 point · 7 months ago

        Great points! Thanks for expanding. I agree with your point that people most often want a recreation of what was perceived. It’s going to make this whole AI-enhanced evidence question even more nuanced when the tech improves.

        • ricecake@sh.itjust.works · 1 point · 7 months ago

          I think the “best” possible outcome is that AI images are essentially treated as witness data, as opposed to direct evidence. (Best is meant in terms of how we treat AI enhanced images, not justice outcomes. I don’t think we should use them for such things until they’re significantly better developed, if ever)

          Because the image at that point is essentially a neural network’s interpretation of what the camera captured, which is functionally similar to a human testifying to what they believe they saw in an image.

          I think it could have a use if presented in conjunction with the original or raw image, and if the network can explain what drove its interpretation, which is a tricky thing for a lot of neural-network-based systems.
          That brings it much closer to how doctors are using them for imaging analysis. It doesn’t supplant the original, but points to part of it with an interpretation, and a synopsis of why it thinks that blob is a tumor/gun.

    • fuzzzerd@programming.dev · 4 points · 7 months ago

      This is what I was wondering about as I read the article. At what point does the post processing on the device become too much?

        • fuzzzerd@programming.dev · 1 point · 7 months ago

          What would you classify Google or Apple portrait mode as? It’s definitely doing something. We can probably agree that, at this point, it’s still a reasonable enhancement of what was really there, while a Snapchat filter that turns you into a dog is obviously too much. The question is where along that spectrum the AI or algorithm becomes too much.

          • Natanael@slrpnk.net · 1 point · edited · 7 months ago

            It varies; there are definitely generative pieces involved, but they try not to make it blatant.

            If we’re talking evidence in court, then practically speaking what matters more is whether the photographer can testify to how accurate they think it is and how well it corresponds to what they saw. Any significantly AI-edited photo effectively becomes only as strong as a diary entry written by a person on the scene: it backs up their testimony to a certain degree by checking the witness’s consistency over time, instead of being trusted directly. The photo can lie just as much as the diary entry can, so it’s a test of credibility instead.

            If you use face swap, then those photos are likely nearly unusable. Editing for colors, contrast, etc.: still usable. Upscaling depends entirely on what the testimony is about. Identifying a person who’s just a pixelated blob? Nope, won’t do. Same with verifying what a scene looked like, such as identifying very pixelated objects: not OK. But upscaling a clear photo that you just wanted to be larger, where the photographer can attest to who the subject is? Still usable.

    • jballs@sh.itjust.works · 3 points · 7 months ago

      I was wondering that exact same thing. If I take a portrait photo on my Android phone, it instantly applies a ton of filters. If I had taken a picture of two people, and then one of those people murders the other shortly afterwards, could my picture be used as evidence to show they were together just before the murder? Or would it be inadmissible because it was an AI-doctored photo?