I should’ve switched to it sooner, rather than waiting until last year when they announced AI integration for Windows. Every peripheral I tried just worked without needing to install drivers, and it works better and faster than on Windows. Just today I tried to use my brother’s 3D printer expecting disappointment, but no, it just connected and was ready to print right away (I use Ultimaker Cura), whereas on my brother’s Windows computer I have to wait around 20 seconds, and sometimes I have to disconnect and reconnect it before Windows sees it and it’s ready to use. Lastly, for those who are wondering, I use Vanilla Arch (btw), and sorry for bad English.
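
For anyone curious, here’s a minimal way to confirm that kind of plug-and-play detection. This sketch assumes the pyserial package is installed, and the device names are just examples of where a printer usually shows up:

```python
# List the serial devices the kernel exposed without any extra drivers.
# Most 3D printers appear as /dev/ttyACM* or /dev/ttyUSB* as soon as they're plugged in.
# Assumes pyserial (pip install pyserial); output shown is illustrative.
from serial.tools import list_ports

for port in list_ports.comports():
    # port.device is e.g. /dev/ttyACM0; vid/pid help identify which entry is the printer
    print(f"{port.device}: {port.description} (vid:pid {port.vid}:{port.pid})")
```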

  • AdaA · 22 hours ago

    I take pride in capturing the image, not relying on software to recreate it the way I wish it had been shot

    Unless you’re shooting flat JPGs with no photo modes enabled and doing no post processing, you’re not getting that result. And even if you do that, two cameras shooting the same scene will produce different images, because converting RAW sensor data to the reduced colour palette and bit depth of a JPG involves an algorithm deciding how best to recreate (not capture) what you saw with your eye; no two cameras do it the same way, and neither produces a “true” capture of what you saw.
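
    As a rough illustration of that development step, here is a sketch using the rawpy and imageio libraries (an assumption on my part, as is the “frame.ORF” file name); every parameter is a decision about how to recreate, rather than capture, the scene:

    ```python
    # Rough sketch of a RAW -> JPG development step; rawpy/imageio and the file
    # names are assumptions, not any specific camera's pipeline.
    import rawpy
    import imageio.v3 as iio

    with rawpy.imread("frame.ORF") as raw:
        rgb = raw.postprocess(
            use_camera_wb=True,    # trust the camera's white-balance estimate
            no_auto_bright=False,  # let the developer choose a brightness mapping
            output_bps=8,          # squeeze 12/14-bit sensor data down to 8 bits per channel
        )

    iio.imwrite("frame.jpg", rgb, quality=90)  # lossy JPEG encode, yet another approximation
    ```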

    Ultimately, it’s a meaningless distinction. My camera does in-body image compositing, using firmware to stack multiple frames into a single exposure, giving you light trails without overexposed static light sources. It uses AI subject recognition to drive its autofocus. It has a 120-frame buffer that records directly to the buffer while the shutter button is held half down, then writes everything to the card when you press fully, effectively letting you capture moments you would normally have missed, because human reflexes are imperfect. And the RAW software that comes with the camera literally uses AI noise reduction.
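
    The compositing part in particular is easy to reason about. A minimal numpy sketch of that “lighten” stacking idea (the file names are placeholders, not what the firmware actually does) looks like this:

    ```python
    # "Lighten" stacking: keep the brightest value each pixel ever reaches across a burst.
    # Moving lights leave trails; a static lamp never exceeds its single-frame brightness,
    # so it doesn't blow out the way a single long exposure would.
    import glob
    import numpy as np
    import imageio.v3 as iio

    frames = sorted(glob.glob("burst/*.jpg"))          # placeholder burst of frames
    composite = iio.imread(frames[0])

    for path in frames[1:]:
        composite = np.maximum(composite, iio.imread(path))   # per-pixel lighten blend

    iio.imwrite("light_trails.jpg", composite)
    ```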

    So for me to draw the line and say that AI-driven noise reduction (non-generative AI, at that) is a problem would be a bit hypocritical of me.

    As it is, the camera hardware itself does solid noise reduction on the JPGs it produces (using algorithms built into the firmware), giving really nice results even at high ISOs. But the only way to replicate that with a RAW file is the camera-supplied RAW software (which doesn’t work on Linux) or third-party AI noise reduction apps (which don’t work on Linux either). If I don’t use them, I’m in the strange situation where my high-ISO JPG preview photos look better than an end-to-end post-processed RAW file.
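
    For what it’s worth, the conventional (non-AI) denoising that is available on Linux looks roughly like the sketch below, using OpenCV’s non-local-means filter; the file name and parameter values are illustrative guesses, and it’s exactly this kind of result that falls short of the camera’s own high-ISO output:

    ```python
    # Conventional (non-AI) noise reduction via OpenCV non-local means.
    # File names and filter strengths are illustrative, not a recommended workflow.
    import cv2

    img = cv2.imread("high_iso_shot.png")
    denoised = cv2.fastNlMeansDenoisingColored(
        img, None,
        h=6,                    # luminance filter strength
        hColor=10,              # chroma filter strength (colour noise is usually the ugly part)
        templateWindowSize=7,
        searchWindowSize=21,
    )
    cv2.imwrite("high_iso_shot_denoised.png", denoised)
    ```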

    If I were “embracing the flaws that my camera creates”, I would be shooting in JPEG mode, using images mostly straight out of the camera, and they would be less noisy than what I can achieve with current Linux tools.

    I’ve been doing this for 20 years, and using m43 (or Four Thirds before it) for most of that time. I know what I want from my photography, and I know the tools that give it to me. What I want is for the image to look like the scene that I saw. I don’t care if it’s a pixel-perfect match for it. I don’t care about embracing the flaws that a camera introduces, flaws that don’t exist when the scene is viewed through the human eye (reduced dynamic range, sensor noise, etc.), out of some sense of “purity”, purity that was lost the moment I pressed the shutter on a digital camera that has to encode the image in software to make it visible.

    • pizzaboi@lemm.ee · 2 days ago

      Fair enough! Thanks for sharing that. I think there’s a beauty in photography that we can each create in our own way, and that the process is part of the photographer’s expression, despite the viewer knowing none of that.