I’m in the process of setting up backups for my home server, and I feel like I’m swimming upstream. It makes me think I’m just taking the wrong approach.
I’m on a shoestring budget at the moment, so I won’t really be able to implement a 3-2-1 strategy just yet. I figure the most bang for my buck right now is to set up off-site backups to a cloud provider. I first decided to do a full-system backup in the hopes I could just restore it and immediately be up and running again. I’ve seen a lot of comments saying this is the wrong approach, although I haven’t seen anyone outline exactly why.
I then decided to cherry-pick my backup locations instead. Then I started reading about backing up databases, and it seems you can't just back up the data directory (or the file, in the case of SQLite) and call it good. You need to dump each database first and back up the dumps.
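From what I've read, the idea is roughly this (just a sketch of my understanding, with made-up container names and paths, so don't take it as the "right" way):

```python
#!/usr/bin/env python3
"""Rough sketch: dump databases instead of copying their live files.
The container name, DB name, and paths are placeholders from my setup."""
import sqlite3
import subprocess

# Postgres: run pg_dump inside the container and capture the SQL to a file.
with open("/mnt/raid/dumps/appdb.sql", "wb") as out:
    subprocess.run(
        ["docker", "exec", "postgres", "pg_dump", "-U", "app", "appdb"],
        stdout=out,
        check=True,
    )

# SQLite: use the stdlib backup API so the copy is consistent
# even if the app happens to be writing at the time.
src = sqlite3.connect("/srv/app/data.db")
dst = sqlite3.connect("/mnt/raid/dumps/data.db")
with dst:
    src.backup(dst)
src.close()
dst.close()
```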
So now I'm configuring a docker-db-backup container for each database: hunting down every database container and SQLite file and setting up a backup job for each one. Then I plan to drop all of those dumps into a single location and back that up to the cloud. This means that, if I need to rebuild, I'll have to restore the containers' volumes, restore the dump files, bring up new containers, and then load each dump into its new database. It's pretty far from my initial hope of being able to restore all the files and immediately start using the newly restored system.
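For the rebuild step, I'm picturing something like this for each database (again just a sketch, and the container names, users, and dump paths are placeholders):

```python
#!/usr/bin/env python3
"""Sketch of the restore step: feed each SQL dump back into the
freshly created database container. All names are placeholders."""
import subprocess

# One entry per service I'm backing up: (container, user, database, dump file).
RESTORES = [
    ("nextcloud-db", "nextcloud", "nextcloud", "/mnt/raid/dumps/nextcloud.sql"),
    ("gitea-db", "gitea", "gitea", "/mnt/raid/dumps/gitea.sql"),
]

for container, user, db, dump in RESTORES:
    # Pipe the dump into psql running inside the new container.
    with open(dump, "rb") as f:
        subprocess.run(
            ["docker", "exec", "-i", container, "psql", "-U", user, "-d", db],
            stdin=f,
            check=True,
        )
```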
Am I going down the wrong path here, or is this just the best way to do it?
Yeah, I keep everything as simple as possible. Everything is containerized, all the configs live in one directory, and the containers store their data on my RAID. I don't need to track down configs scattered across the system, and adding a new service doesn't require any extra backup config, so there's no risk of forgetting something.
Docker is simple. You map directories in the container to directories on your host, so you put the important data where it’ll get backed up and the less important data (e.g. logs) where it won’t.
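If it helps, here's the same idea expressed with the Docker SDK for Python (purely illustrative; the image, paths, and names are made up, and a plain compose file does exactly the same thing):

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

# Hypothetical service: data goes on the backed-up RAID path,
# logs go on a scratch path that the backup job ignores.
client.containers.run(
    "ghcr.io/example/myapp:latest",  # placeholder image
    name="myapp",
    detach=True,
    volumes={
        "/mnt/raid/appdata/myapp": {"bind": "/app/data", "mode": "rw"},  # backed up
        "/var/scratch/myapp-logs": {"bind": "/app/logs", "mode": "rw"},  # not backed up
    },
)
```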