• Anafabula@discuss.tchncs.de

    JPEG XL is pretty good, and I’m still pissed Google removed it from Chromium before it could even try to get any market share.

    JPEG XL can do everything JPEG and PNG can do, but more efficiently.

  • Max-P@lemmy.max-p.me

    There’s not much to improve on PNG. It’s essentially a zipped-up BMP with optional per-scanline filters that rewrite pixel bytes (mostly as differences from their neighbours) so they hopefully compress better at the zip stage.

    Last time I tested this, if you used xz or zstd to compress a BMP with max settings, it made smaller files than PNG.
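
    If you want to reproduce that kind of comparison, here’s a rough Python sketch using Pillow and the stdlib’s xz module (the input path is a placeholder, raw pixel bytes stand in for the BMP, and Pillow’s PNG encoder isn’t the most aggressive one around):

    ```python
    import io
    import lzma

    from PIL import Image  # pip install Pillow

    img = Image.open("test_image.png").convert("RGB")  # placeholder input

    # PNG via Pillow's encoder, with the zlib effort turned up.
    png_buf = io.BytesIO()
    img.save(png_buf, format="PNG", optimize=True)

    # "BMP + xz" stand-in: the raw pixel bytes through xz at a high preset.
    raw = img.tobytes()
    xz_size = len(lzma.compress(raw, preset=9 | lzma.PRESET_EXTREME))

    print(f"PNG:      {png_buf.getbuffer().nbytes} bytes")
    print(f"raw + xz: {xz_size} bytes")
    ```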

    WebP and JXL are where it’s at.

    • pewgar_seemsimandroid (OP)

      My PNG-elitist brain interprets this as PNG being so good that JPEG had to be upgraded and a new format (WebP) created.

      • Max-P@lemmy.max-p.me

        WebP covers the use cases of both PNG and JPEG, since the same format can be lossless or lossy, while getting the benefits of a much more powerful codec.

        PNG is good for certain types of graphics, but take a full-size 48 MP picture, encode it as PNG, and it’s going to be massive compared to JPEG.

        PNG is a pretty simple and effective format, but it’s not especially good nowadays; there’s a reason WebP is popular: much smaller files for the same quality. Same for JPEG XL.
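
        With Pillow, one codec handles both sides of that - file names are placeholders and the quality setting is just an example:

        ```python
        from PIL import Image  # pip install Pillow

        img = Image.open("photo.png").convert("RGB")  # placeholder input

        # Lossless WebP covers the PNG-style use case...
        img.save("out_lossless.webp", format="WEBP", lossless=True)

        # ...and lossy WebP covers the JPEG-style one.
        img.save("out_lossy.webp", format="WEBP", quality=80)
        ```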

  • mindbleach@sh.itjust.works

    Quite Okay Imaging. One-page spec, comically fast, compression ratios similar to PNG’s.
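
    The whole format is basically a 64-entry running index, run-length encoding, and small deltas against the previous pixel. Here’s a toy Python sketch of just the index/run/raw ops - not the real thing, no DIFF/LUMA ops, no header, no end marker:

    ```python
    def toy_qoi_encode(pixels):
        """pixels: iterable of (r, g, b, a) tuples. Returns encoded bytes."""
        index = [(0, 0, 0, 0)] * 64    # running index, as in the QOI spec
        prev = (0, 0, 0, 255)          # spec's initial "previous pixel"
        run = 0
        out = bytearray()

        for px in pixels:
            if px == prev:
                run += 1
                if run == 62:                            # spec's max run length
                    out.append(0b11000000 | (run - 1))   # QOI_OP_RUN
                    run = 0
            else:
                if run:
                    out.append(0b11000000 | (run - 1))   # flush pending run
                    run = 0
                r, g, b, a = px
                h = (r * 3 + g * 5 + b * 7 + a * 11) % 64   # spec's index hash
                if index[h] == px:
                    out.append(h)                        # QOI_OP_INDEX
                else:
                    index[h] = px
                    out += bytes((0xFF, r, g, b, a))     # QOI_OP_RGBA, no delta ops
                prev = px
        if run:
            out.append(0b11000000 | (run - 1))
        return bytes(out)
    ```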

    I got thoroughly nerd-sniped by the same guy’s Quite Okay Audio format, because he did a much less mic-drop job of that one. The target bitrate is high, since it’s meant as a drop-in replacement for MP3 or Vorbis, and the complexity level is… weird. QOI is shockingly simple; QOA involves brute force and magic numbers. I thought I could do better.

    I embraced the 64-bit “frame” concept, wrote some JavaScript to encode and decode as a real-time audio filter, and got one-bit samples sounding pretty good… in some contexts. Basically I implemented delta coding. Each one-bit sample goes up or down by a value specified in that frame - with separate up and down values, defined on a log scale, using very few bits. Searching for good up/down values invites obsession, but guess-and-check works fine because each dataset is tiny. I settled on simulated annealing.
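
    Roughly, in Python rather than the actual JavaScript - the names and the [-1, 1] clamp are illustrative, not the real frame layout:

    ```python
    def decode_delta(bits, up, down, start=0.0):
        """One-bit delta decoding: each bit nudges the sample up or down
        by a step carried in the frame."""
        sample, out = start, []
        for bit in bits:
            sample += up if bit else -down
            sample = max(-1.0, min(1.0, sample))   # float samples in [-1, 1]
            out.append(sample)
        return out

    def encode_delta(samples, up, down, start=0.0):
        """Greedy encoder: pick whichever step lands closer to the target."""
        sample, bits = start, []
        for target in samples:
            bit = 1 if abs(sample + up - target) < abs(sample - down - target) else 0
            bits.append(bit)
            sample = max(-1.0, min(1.0, sample + (up if bit else -down)))
        return bits
    ```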

    I ditched this shortly after moving to double-delta coding. So instead of a 0 making sample N+1 be sample N plus the Down value, sample N+1 is always sample N plus the Change value, and a 0 makes Change equal Change plus the Down value. This turns out to be really good at encoding a wiggly line, one millisecond at a time. If it’s low-frequency. Low frequencies sound fantastic. Old-timey music? Gorgeous, slight hiss. Speech? Crystal clear. Techno? Complete honking garbage. Hilariously bad. Throw a high sine wave at delta coding and you get noise. With double-delta coding you get pleasant noise, but it’s still nonsense bearing little resemblance to the input. It’s not even a low-pass filter; the encoding method just chokes.
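
    Same sketch, double-delta flavour - whether the slope updates before or after it’s applied is a detail I’m glossing over:

    ```python
    def decode_double_delta(bits, up, down, start=0.0, change=0.0):
        """Double-delta: each bit steers a running slope; the sample always moves by it."""
        sample, out = start, []
        for bit in bits:
            change += up if bit else -down    # the bit adjusts the slope, not the sample
            sample += change                  # the sample always moves by the current slope
            sample = max(-1.0, min(1.0, sample))
            out.append(sample)
        return out
    ```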

    The clear fix would be to re-implement an earlier test, where you specify high and low absolute values and each one-bit sample just picks between them. It’s naive carried-error quantization, and it sounds like a child’s toy that’s never getting new batteries. But I’d run it alongside the delta options, selecting whichever approach produces the least error per millisecond. You’d get occasional artifacting instead of mangled output or constant buzzing. I just ran out of steam and couldn’t be arsed.
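
    The selection step would look something like this - the two-level quantizer is the “child’s toy” mode, and the delta variants would just be more entries in the dict with their up/down steps baked in:

    ```python
    def encode_two_level(samples, high, low):
        """Naive carried-error 1-bit quantizer: pick the closer of two fixed levels,
        carrying the quantization error into the next sample."""
        carry, bits = 0.0, []
        for s in samples:
            target = s + carry
            bit = 1 if abs(high - target) <= abs(low - target) else 0
            carry = target - (high if bit else low)
            bits.append(bit)
        return bits

    def decode_two_level(bits, high, low):
        return [high if b else low for b in bits]

    def squared_error(samples, decoded):
        return sum((s - d) ** 2 for s, d in zip(samples, decoded))

    def pick_mode(chunk, modes):
        """modes: {name: (encode, decode)} callables for one ~1 ms chunk, e.g.
        {"two_level": (lambda c: encode_two_level(c, 0.3, -0.3),
                       lambda b: decode_two_level(b, 0.3, -0.3))}.
        Returns (name, bits) for whichever mode reconstructs the chunk with least error."""
        best_err, best = float("inf"), None
        for name, (enc, dec) in modes.items():
            bits = enc(chunk)
            err = squared_error(chunk, dec(bits))
            if err < best_err:
                best_err, best = err, (name, bits)
        return best
    ```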