thank you for scheduling an hour-long lecture on filesystems

  • Yote.zip@pawb.socialOP · 1 year ago

    ZFS will do the same thing as BTRFS with regard to wear and tear, so swapping to BTRFS is almost certainly not the answer in your case. Since you’re using it as a cache pool, I’m not quite sure what the performance implications of these features are, but the two main things related to wear and tear are:

    • Transparent compression: This basically means that when you write a file to the drive, ZFS will first compress it using your CPU, then write the compressed file to the drive instead. The compressed file is smaller than your original, so it takes less space on the drive; because it takes less space, fewer bits get written, which limits wear and tear. When you read the file again, ZFS decompresses it back to its normal state, which likewise limits the number of bits read from the drive. The standard compression algorithm for this task is usually ZSTD, which compresses well while still being very fast. One of the most interesting properties of ZSTD is that no matter how much effort you spend compressing, decompression speed stays the same. I tend to refer to this benchmark as an approximation for ZSTD effort levels with regard to transfer speeds - the benchmark is done with BTRFS, but the results will translate when enabling this feature on ZFS. Personally I usually run ZSTD level 1 as a default, though level 2 is also enticing. If you’re not picking 1 or 2, you probably have a strong use case for going somewhere closer to ~7. This is probably not enabled by default, so check whether it’s enabled for you and enable it if you want. Keep in mind that it will decrease transfer speed if the drive itself is not the bottleneck - HDDs love this feature because compressing/decompressing with your CPU is much faster than reading/writing to spinning rust. Also keep in mind that ZSTD is not magic, and it won’t do much for already-compressed files - photos, music, video, archives, etc.
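    Checking and enabling this looks roughly like the following - a sketch assuming OpenZFS 2.0+ (which added ZSTD support) and a hypothetical dataset named `tank/cache`; substitute your own pool/dataset:

    ```shell
    # Check the current compression setting (commonly "off", "on", or "lz4")
    zfs get compression tank/cache

    # Enable ZSTD at level 1; only data written after this point is compressed
    zfs set compression=zstd-1 tank/cache

    # Later, see how much space compression is actually saving
    zfs get compressratio tank/cache
    ```

    Note that changing the property never rewrites existing data - old files stay uncompressed until they are rewritten.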

    • Copy on write: Among other things, this means that copying a file within the same drive can be instant - new logical file markers are created that point at the existing physical blocks, so the full file never needs to be read or re-written. In your use case of a cache pool I don’t think this will affect much, since any duplicate hits are probably just getting redirected to the previous copy anyway? This should be the default behavior of ZFS to my knowledge, so I don’t think you need to check that it’s enabled or anything.
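    You can see the instant-copy behavior yourself - a sketch assuming an OpenZFS 2.2+ pool with block cloning enabled (it’s opt-in on some versions) and a hypothetical dataset mounted at `/tank/cache`:

    ```shell
    # Write a 1 GiB file of random (incompressible) data
    dd if=/dev/urandom of=/tank/cache/big.bin bs=1M count=1024

    # Clone it: returns near-instantly because only block pointers are created,
    # no data blocks are read or rewritten
    cp --reflink=always /tank/cache/big.bin /tank/cache/big-clone.bin

    # Pool usage barely grows, since both files share the same physical blocks
    zfs get used tank
    ```

    Without block cloning, `cp` on ZFS falls back to a normal read-and-rewrite copy, though copy-on-write still applies to all modifications either way.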

    • kopper [they/them] · 1 year ago

      Personally I usually run ZSTD level 1 as a default, though level 2 is also enticing. If you’re not picking 1 or 2, you probably have a strong use case for going somewhere closer to ~7.

      i like level 3 because you enable it by putting :3 in the mount options :3

      • Yote.zip@pawb.socialOP · 1 year ago

        Apologies, this is the only correct answer from now on. Thank you for helping me to see the light.