TLDR: I am running some Docker containers on a homelab server, and the containers’ volumes are mapped to NFS shares on my NAS. Is that bad practice?

  • I have a Linux PC that acts as my homelab server, and a Synology NAS.
  • The server is fast but only has a 100 GB SSD.
  • The NAS is slow(er) but has oodles of storage.
  • Both devices are wired to their own little gigabit switch, using priority ports.

Of course it’s slower to run off HDDs than SSDs, but I do not have a large SSD. The question is: (why) would it be “bad practice” to separate compute and storage this way? Isn’t that pretty much what a data center does as well?
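
For reference, a volume mapped to an NFS share typically looks something like the sketch below in a compose file; the NAS address and export path are placeholders, not my actual setup:

```yaml
# Sketch only: a named volume backed by an NFS export on the NAS.
# 192.168.1.20 and /volume1/docker/appdata are placeholders.
services:
  app:
    image: nginx:alpine                # stand-in for whatever container you run
    volumes:
      - appdata:/usr/share/nginx/html  # container path served from the NAS

volumes:
  appdata:
    driver: local                      # the local driver can mount NFS via driver_opts
    driver_opts:
      type: nfs
      o: addr=192.168.1.20,rw,nfsvers=4
      device: ":/volume1/docker/appdata"
```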

  • Molecular0079@lemmy.world · 1 year ago

    Mmm, not quite. I am not familiar with how picoshare works exactly, but according to the picoshare Docker README, it uses the data volume to store its application SQLite database. My original suggestion is that the Docker application and its application data (configs, cache, local databases, and other files critical to the functioning of the application) should be on the same machine, but the images and other files that picoshare shares can be remote.
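
    To make that concrete, here is a rough compose sketch of what I mean for picoshare: keep the /data volume (and therefore the SQLite database) on the server’s local SSD. The image name, port, and environment variable are what I remember from the picoshare Docker README, so double-check them against the upstream docs.

    ```yaml
    # Sketch: picoshare with its application data on the local SSD.
    # mtlynch/picoshare, port 4001, /data, and PS_SHARED_SECRET are recalled from
    # the picoshare Docker README - verify before using.
    services:
      picoshare:
        image: mtlynch/picoshare
        environment:
          - PS_SHARED_SECRET=changeme    # placeholder secret
        ports:
          - "4001:4001"
        volumes:
          - ./picoshare-data:/data       # local disk: the SQLite DB lives here
    ```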

    Basically, my rule is to never assume that anything hosted on another machine is guaranteed to be available. If you think picoshare can still work properly when its SQLite database gets ripped out without warning, then by all means go for it. However, I don’t think this is the case here. You’ll risk the SQLite database getting corrupted or the application itself erroring out if there’s ever a network outage.

    For example, with the Jellyfin Docker image, I would say that the cache and config volumes have to be local, while the media can be on a remote NAS. My reasoning is that Jellyfin is built to handle media files being added, changed, or removed. It is, however, not built to gracefully handle its config files and caches disappearing in the middle of operation.
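
    A rough sketch of that split (the NAS address and export path are placeholders):

    ```yaml
    # Sketch: Jellyfin with config/cache on the local SSD and media on the NAS.
    services:
      jellyfin:
        image: jellyfin/jellyfin
        ports:
          - "8096:8096"
        volumes:
          - ./jellyfin/config:/config    # local: must not vanish mid-operation
          - ./jellyfin/cache:/cache      # local
          - media:/media:ro              # remote: Jellyfin copes with media coming and going

    volumes:
      media:
        driver: local
        driver_opts:
          type: nfs
          o: addr=192.168.1.20,ro
          device: ":/volume1/media"
    ```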