A judge in Washington state has blocked “AI-enhanced” video evidence from being submitted in a triple murder trial. And that’s a good thing, given how many people seem to think applying an AI filter can give them access to secret visual data.

  • GenderNeutralBro@lemmy.sdf.org · 18 points · 8 months ago

    AI-based video codecs are on the way. This isn’t necessarily a bad thing because it could be designed to be lossless or at least less lossy than modern codecs. But compression artifacts will likely be harder to identify as such. That’s a good thing for film and TV, but a bad thing for, say, security cameras.

    The devil’s in the details and “AI” is way too broad a term. There are a lot of ways this could be implemented.
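The lossless/lossy distinction above can be shown with a toy sketch (not a real codec; the "frame" data is made up for illustration). A lossless round trip recovers the source exactly, while a lossy scheme discards information before compressing, here simulated with crude quantization:

```python
import zlib

# A toy "frame": raw 8-bit pixel values (illustrative data only).
frame = bytes(range(256)) * 4

# Lossless path: compress and decompress; the original is recovered exactly.
lossless = zlib.decompress(zlib.compress(frame))
assert lossless == frame

# Lossy path: quantize to 16 levels before compressing (a stand-in for what
# a lossy codec does). The payload shrinks, but the source can't be recovered.
quantized = bytes((b // 16) * 16 for b in frame)
lossy_payload = zlib.compress(quantized)
assert len(lossy_payload) < len(zlib.compress(frame))
assert quantized != frame  # information was discarded
```

An AI-based codec could sit anywhere on this spectrum, which is why "AI codec" alone says little about how trustworthy the output is.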

    • jeeva@lemmy.world · 11 points · 8 months ago

      I don’t think loss is what people are worried about, really - more injecting details that fit the training data but don’t exist in the source.

      Given the hoopla Hollywood and directors made about frame-interpolation, do you think generated frames will be any better/more popular?

      • GenderNeutralBro@lemmy.sdf.org · 1 point · 8 months ago

        In the context of video encoding, any manufactured/hallucinated detail would count as “loss”. Loss is anything that’s not in the original source. The loss you see in e.g. MPEG4 video usually looks like squiggly lines, blocky noise, or smearing. But if an AI encoder inserts a bear on a tricycle in the background, that would also be a lossy compression artifact in context.
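This framing is easy to make concrete: from an encoder's perspective, an invented object and ordinary blocky noise are both just pixels that differ from the source. A toy sketch (illustrative numbers only, not a real quality metric implementation):

```python
# Toy 2x4 grayscale "frames" (made-up values for illustration).
source  = [[10, 10, 10, 10],
           [10, 10, 10, 10]]

# Decoded frame: mild noise in row 0, a "hallucinated" bright blob in row 1.
decoded = [[11,  9, 10, 10],
           [10, 200, 200, 10]]

# Mean absolute error treats both kinds of deviation the same way:
# anything not in the source counts as loss.
errors = [abs(s - d)
          for srow, drow in zip(source, decoded)
          for s, d in zip(srow, drow)]
mae = sum(errors) / len(errors)
```

The catch is perceptual: the hallucinated blob scores as "loss" numerically, but unlike squiggly lines or smearing, it may look perfectly plausible to a viewer.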

        As for frame interpolation, it could definitely be better, because the current algorithms out there are not good. It will likely not be more popular, though, since this is generally viewed as an artistic matter rather than a technical one. For example, a lot of people hated the high frame rate in the Hobbit films even though it was native: shot with high-frame-rate cameras, not produced by a kind-of-shitty algorithm applied after the fact.

    • DarkenLM@kbin.social · 6 points · 8 months ago

      I don’t think AI codecs will be anything revolutionary. There are plenty of lossless codecs already, but if you want more detail, you’ll need a better physical sensor, and I doubt there’s anything that can be done to get around that (at least, anything that actually represents what exists rather than a hallucination).

      • foggenbooty@lemmy.world · 5 points · 8 months ago

        It’s an interesting thought experiment, but we don’t actually see what really exists: our brains essentially run AI-style vision, filling in things we don’t actually perceive. Examples include movement while we’re blinking, objects and colors in our peripheral vision, and the state of objects while our eyes dart around.

        The difference is we can’t go back frame by frame and analyze these “hallucinations” since they’re not recorded. I think AI enhanced video will actually bring us closer to what humans see even if some of the data doesn’t “exist”, but the article is correct that it should never be used as evidence.

      • GenderNeutralBro@lemmy.sdf.org · 1 point · 8 months ago

        There are plenty of lossless codecs already

        It remains to be seen, of course, but I expect to be able to get lossless (or nearly-lossless) video at a much lower bitrate, at the expense of a much larger and more compute/memory-intensive codec.

        The way I see it working is that the codec would include a general-purpose model, and video files would be encoded for that model + a file-level plugin model (like a LoRA) that’s fitted for that specific video.
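The proposed split can be sketched as a data layout (all names and numbers here are hypothetical, invented to illustrate the idea, not any real codec's format): the decoder ships one large shared model, while each video file carries only a small fitted adapter plus the encoded latent stream.

```python
from dataclasses import dataclass

@dataclass
class SharedBaseModel:
    """Installed once with the codec; never shipped per file."""
    weights_gb: float

@dataclass
class VideoFile:
    """What actually travels with each video."""
    adapter_mb: float        # per-video "plugin" model (LoRA-style)
    latent_stream_mb: float  # the encoded frames themselves

def file_size_mb(video: VideoFile) -> float:
    # Only the adapter and latents count toward file size;
    # the multi-gigabyte base model is amortized across all videos.
    return video.adapter_mb + video.latent_stream_mb

base = SharedBaseModel(weights_gb=4.0)
clip = VideoFile(adapter_mb=25.0, latent_stream_mb=300.0)
```

The trade-off is exactly as described: small files and low bitrates, bought with a large, compute- and memory-hungry decoder.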

      • Hexarei@programming.dev · 1 point · 8 months ago

        Nvidia’s RTX Video upscaling is trying to be just that: DLSS, but run on a video stream instead of a game on your own hardware. They’ve posited the idea of game streaming dropping to a lower bitrate just so you can upscale it locally, which to me sounds like complete garbage.

      • Natanael@slrpnk.net · 1 point · 8 months ago

        I think there’s a possibility for long-format video of stable scenes to use ML for higher compression ratios: derive a video-specific model of the objects in the frame, then describe their movements (essentially reducing the actual frames to wireframe models instead of image frames, then painting them in from the model).

        But that’s a very specific thing that probably only works well for certain types of video content (think animated stuff).
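The model-plus-motion idea above can be sketched in a few lines (everything here is hypothetical and illustrative; a real system would fit and rasterize learned models, not dictionaries): the object description is stored once, and each frame shrinks to a handful of motion parameters that the decoder "paints in".

```python
# Scene model: sent once per video instead of once per frame.
scene_model = {"ball": {"radius": 12, "color": "red"}}

# Per-frame data is just each object's position: a few numbers per frame
# instead of a full image. Here the ball slides right across 50 frames.
motion = [{"ball": (x, 50)} for x in range(0, 100, 2)]

def decode_frame(model, frame_motion):
    # A real decoder would rasterize/paint here; this sketch just returns
    # the draw commands (object name, its model, its position).
    return [(name, model[name], pos) for name, pos in frame_motion.items()]

frame0 = decode_frame(scene_model, motion[0])
```

The compression win comes from the fact that `motion` grows by only a few numbers per frame, while the expensive part (`scene_model`) is paid once, which is also why it would suit stable, animated-style scenes far better than noisy live footage.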