A judge in Washington state has blocked video evidence that’s been “AI-enhanced” from being submitted in a triple murder trial. And that’s a good thing, given that too many people seem to think applying an AI filter can give them access to secret visual data.

  • abhibeckert@lemmy.world

    It preemptively also includes any other future technology that aims to try the same thing

    No it doesn’t. You can, for example, use compute power to correct for distortions introduced by camera lenses/sensors/etc. and drastically increase image quality. This photo of Pluto was taken from 7,800 miles away - click the link for a version of the image that hasn’t been resized/compressed by Lemmy.

    The unprocessed image would look nothing at all like that. There’s a lot more data in an image than you can see with the naked eye, and algorithms can extract/highlight that data. That’s obviously not what a generative AI algorithm does; those should never be used, but there are other algorithms which are appropriate.
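
    A minimal sketch of the kind of non-generative correction being described, assuming OpenCV; the camera matrix, distortion coefficients, and file names are made-up placeholders, not a real calibration:

    ```python
    # Correcting known lens distortion: every output pixel is a resampling
    # of pixels that were actually captured -- nothing is invented.
    import cv2
    import numpy as np

    img = cv2.imread("photo.png")  # hypothetical input image

    # Intrinsics as produced by a prior calibration step
    # (e.g. cv2.calibrateCamera on checkerboard shots).
    camera_matrix = np.array([[1000.0,    0.0, 640.0],
                              [   0.0, 1000.0, 360.0],
                              [   0.0,    0.0,   1.0]])
    dist_coeffs = np.array([-0.25, 0.08, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

    undistorted = cv2.undistort(img, camera_matrix, dist_coeffs)
    cv2.imwrite("photo_undistorted.png", undistorted)
    ```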

    The reality is that every modern photo is heavily processed - look at this example by a wedding photographer: even with a professional camera and excellent lighting, the raw image on the left (where all the camera’s processing features are disabled) looks like garbage compared to exactly the same photo with software processing.
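
    For a rough idea of what that software processing amounts to, here is a toy sketch, assuming NumPy; the white-balance gains and gamma value are illustrative, not taken from any real camera:

    ```python
    # "Developing" linear sensor data: white balance and gamma are plain,
    # reversible arithmetic on values the sensor actually measured.
    import numpy as np

    rng = np.random.default_rng(0)
    raw = rng.uniform(0.0, 0.5, size=(4, 4, 3))  # stand-in for sensor data

    wb_gains = np.array([1.8, 1.0, 1.4])         # per-channel white balance
    developed = (raw * wb_gains) ** (1 / 2.2)    # gamma curve for display

    # Pure math on existing data: the inverse recovers the original.
    recovered = (developed ** 2.2) / wb_gains
    assert np.allclose(recovered, raw)
    ```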

    • CapeWearingAeroplane@sopuli.xyz

      No computer algorithm can accurately reconstruct data that was never there in the first place.

      What you are showing is (presumably) a modified visualisation of existing data. That is: given a photo with known lighting and lens distortion, we can use math to display the data (lighting, lens distortion, and input registered by the camera) in a plethora of different ways. You can invert all the colours if you like. It’s still the same underlying data. Modifying how strongly certain hues are shown, or correcting for known distortion, are just techniques to visualise the data in a clearer way.
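
      A trivial sketch of that point, assuming NumPy: colour inversion is a bijection on pixel values, so it is the same data in a different view; no information is gained or lost:

      ```python
      # Inverting twice returns the exact original image.
      import numpy as np

      rng = np.random.default_rng(1)
      image = rng.integers(0, 256, size=(8, 8, 3), dtype=np.uint8)

      inverted = 255 - image       # a different view of the same data
      roundtrip = 255 - inverted   # invert again

      assert np.array_equal(roundtrip, image)
      ```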

      “Generative AI” is essentially non-predictive extrapolation, which is a completely different ball game: you’re making a blind guess at what could be there, based on an existing data set.

      • Richard@lemmy.world

        making a blind guess at what could be there, based on an existing data set.

        Here’s your error: you’re contradicting the first part of your sentence with the last. The guess is not “blind”, because the prediction is based on an existing data set. Looking at a half-occluded circle and reconstructing the other half with a model is not a “blind” guess; it is a highly probable extrapolation that can be very useful, because in most situations it really will be the other half of a circle. With a certain probability, you have created new, valuable data for further analysis.
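
        As a sketch of that kind of extrapolation (assuming NumPy; the points and noise level are made up), a least-squares circle fit recovers the hidden half from the visible half alone:

        ```python
        # Kasa fit: solve x^2 + y^2 + D*x + E*y + F = 0 by least squares
        # using only points sampled from the visible half of the circle.
        import numpy as np

        rng = np.random.default_rng(2)
        theta = rng.uniform(0.0, np.pi, 50)   # visible half only
        cx, cy, r = 3.0, -1.0, 2.0            # ground truth, unknown to the fit
        x = cx + r * np.cos(theta) + rng.normal(0, 0.01, 50)
        y = cy + r * np.sin(theta) + rng.normal(0, 0.01, 50)

        A = np.column_stack([x, y, np.ones_like(x)])
        b = -(x**2 + y**2)
        (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
        cx_hat, cy_hat = -D / 2, -E / 2
        r_hat = np.sqrt(cx_hat**2 + cy_hat**2 - F)
        print(cx_hat, cy_hat, r_hat)          # close to (3.0, -1.0, 2.0)

        # "Reconstructing" the occluded half is just evaluating the model.
        hidden = np.linspace(np.pi, 2 * np.pi, 25)
        hidden_xy = np.column_stack([cx_hat + r_hat * np.cos(hidden),
                                     cy_hat + r_hat * np.sin(hidden)])
        ```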

        • UnpluggedFridge@lemmy.world

          But you are not reporting the underlying probability, just the guess. There is no way, then, to distinguish a bad guess from a good guess. Let’s take your example and place a fully occluded shape. Now the most probable guess could still be a full circle, but with a very low probability of being correct. Yet that guess is reported with the same confidence as your example. When you carry out this exercise for all extrapolations with full transparency of the underlying probabilities, you find yourself right back in the position the original commenter has taken. If the original data does not provide you with confidence in a particular result, the added extrapolations will not either.
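
          A toy illustration (shape labels and probabilities invented): both models report “circle”, and only the discarded probability tells you which answer deserves any trust:

          ```python
          # Two guesses that look identical once probability is dropped.
          half_occluded = {"circle": 0.95, "ellipse": 0.03, "square": 0.02}
          fully_occluded = {"circle": 0.12, "square": 0.11, "triangle": 0.11,
                            "star": 0.11, "hexagon": 0.11, "cross": 0.11,
                            "arrow": 0.11, "heart": 0.11, "blob": 0.11}

          for name, dist in [("half occluded", half_occluded),
                             ("fully occluded", fully_occluded)]:
              guess = max(dist, key=dist.get)
              print(f"{name}: guess={guess}, p={dist[guess]:.2f}")
          # half occluded: guess=circle, p=0.95
          # fully occluded: guess=circle, p=0.12
          ```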

          • CheeseNoodle@lemmy.world

            And then circles get convictions. So even if the model somehow started off completely unbiased, people are going to feed it data weighted towards finding more circles, because every prosecution will be counted as a ‘success’ and fed back into the model to ‘improve’ it.
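
            A toy model of that feedback loop (all numbers invented): each conviction is logged as a training “success”, so the prior drifts toward circles without any new evidence:

            ```python
            # Confirmation-bias feedback in miniature.
            prior_circle = 0.5
            for _ in range(10):
                if prior_circle > 0.4:  # model calls it a circle
                    # conviction fed back as a "success"
                    prior_circle = min(1.0, prior_circle + 0.05)
            print(f"prior after 10 rounds: {prior_circle:.2f}")  # 1.00
            ```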

        • CapeWearingAeroplane@sopuli.xyz

          Looking at a half circle and guessing that the “missing part” is a full circle is as much of a blind guess as you can get. You have exactly zero evidence that there is another half circle present. The missing part could be anything, from nothing to any shape that incorporates a half circle. And you would be guessing without any evidence whatsoever as to which of those things it is. That’s blind guessing.

          Extrapolating into regions without prior data with a non-predictive model is blind guessing. If it wasn’t, the model would be predictive, which generative AI is not, is not intended to be, and has not been claimed to be.

    • dual_sport_dork 🐧🗡️@lemmy.world

      None of your examples are creating new legitimate data out of whole cloth. They’re just making details that were already there visible to the naked eye. We’re not talking about taking a giant image that’s got too many pixels to fit on your display device in one go, and just focusing on a specific portion of it. That’s not the same thing as attempting to interpolate missing image data; in that case the data was there to begin with, it just wasn’t visible due to limitations of the display or the viewer’s retinas.

      The original grid of pixels is all of the meaningful data that will ever be extracted from any image (or video, for that matter).

      Your wedding photographer’s picture actually throws away color data in the interest of contrast and to make it more appealing to the viewer. When you fiddle with the color channels like that and see all those troughs in the histogram that make it look like a comb? Yeah, all those gaps and spikes are original color/contrast data being lost. There is technically less data in the touched-up image than in the original, and if you are perverse and own a high-bit-depth display device (I do! I am typing this on a machine with a true 32-bit-per-pixel professional graphics workstation monitor.) you can actually stare at it and see the entirety of the detail captured in the raw image before the touch-ups. A viewer might not think it looks great, but how it looks is irrelevant from the standpoint of data capture.
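
      A quick way to see that comb effect, assuming NumPy (the black point and gain are arbitrary): a contrast stretch on 8-bit data leaves fewer distinct levels than went in, with gaps between them:

      ```python
      import numpy as np

      levels = np.arange(256, dtype=np.uint8)  # every possible input value
      # Contrast stretch: subtract a black point, multiply, clip to 8 bits.
      stretched = np.clip((levels.astype(float) - 40) * 1.6,
                          0, 255).astype(np.uint8)

      print(len(np.unique(levels)))     # 256 distinct levels in
      print(len(np.unique(stretched)))  # fewer distinct levels out
      ```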

      • Richard@lemmy.world

        They talked about algorithms used for correcting lens distortions in their first example. That is absolutely a valid use case, and it extracts new data by making certain assumptions with certain probabilities. Your newly created law of nature is just your own imagination and is not the prevalent understanding in the scientific community. No, quite the opposite: scientific practice runs exactly counter to your statements.

    • Natanael@slrpnk.net

      This is just smarter post-processing, like better noise cancellation, error correction, interpolation, etc.

      But ML tools extrapolate rather than interpolate, which adds things that weren’t there.
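
      A bare-bones illustration of the difference, assuming NumPy (the sample values are made up):

      ```python
      # Interpolation stays bounded by measured samples; extrapolation
      # produces values no sensor ever recorded.
      import numpy as np

      xs = np.array([0.0, 1.0, 2.0, 3.0])
      ys = np.array([0.0, 1.0, 4.0, 9.0])  # measured samples

      print(np.interp(1.5, xs, ys))        # 2.5, between known samples

      # Naive linear extrapolation past the last sample "invents" a value:
      slope = (ys[-1] - ys[-2]) / (xs[-1] - xs[-2])
      print(ys[-1] + slope * (5.0 - xs[-1]))  # 19.0 -- plausible, not data
      ```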