I was hoping the distros would just do the scrub/balance work for you - makes it no effort then! Good to know openSUSE does it for ya. From searching, it looks like Fedora doesn’t have anything built in sadly, but the posts are over a year old, so maaaybe they’ve added something since.
It’s great for single drive, RAID 0, and RAID 1 (RAID 10 is obviously fine too). Don’t use it for anything beyond that: BTRFS RAID 5/6 can still lose data.
I’m not sure which tools Fedora includes to manage BTRFS, but these scripts are great: https://github.com/kdave/btrfsmaintenance - you use them to scrub and balance. Balance redistributes blocks, and scrub checks whether bits have unexpectedly changed due to bit rot (hardware issue or cosmic ray). Scrub weekly for essential photos, important docs, and the like; monthly for everything else. Balance monthly, or on demand if free drive space is tight and you want to claw some back.
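If you’d rather run these by hand first, the underlying commands are simple - a sketch, assuming your filesystem is mounted at /mnt/pool (swap in your own mount point):

```
# start a scrub, then check on it later
sudo btrfs scrub start /mnt/pool
sudo btrfs scrub status /mnt/pool

# balance only block groups under 75% full - usually enough to reclaim slack
sudo btrfs balance start -dusage=75 -musage=75 /mnt/pool
```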
RAID 1 gives you bit rot detection with scrub and self-recovery from said bit rot (assuming both drives don’t mystically have the same bit flip, which is very unlikely). A single drive will only detect it.
BTRFS snapshot then send/receive is excellent for a quick backup.
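A sketch of that flow, assuming a subvolume at /mnt/pool/data and a second BTRFS filesystem mounted at /mnt/backup (names are hypothetical):

```
# snapshots must be read-only (-r) to be sendable
sudo btrfs subvolume snapshot -r /mnt/pool/data /mnt/pool/data-snap1

# full send the first time
sudo btrfs send /mnt/pool/data-snap1 | sudo btrfs receive /mnt/backup

# later snapshots can be sent incrementally against a parent
sudo btrfs subvolume snapshot -r /mnt/pool/data /mnt/pool/data-snap2
sudo btrfs send -p /mnt/pool/data-snap1 /mnt/pool/data-snap2 | sudo btrfs receive /mnt/backup
```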
Remember that a BTRFS snapshot keeps every file in it, even ones you delete off the live drive. Deleted 500 GB of stuff but the free space didn’t go up? A snapshot is probably still holding that 500 GB. Delete the snapshot and your space is back.
You can make subvolumes inside a BTRFS volume, which are basically folders except you can snapshot just them. Useful for snapshotting your essential docs folder more often than everything else (note that scrub runs on the whole filesystem, not per subvolume).
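Making one and snapshotting just it, as a sketch (paths are hypothetical):

```
sudo btrfs subvolume create /mnt/pool/docs
sudo mkdir -p /mnt/pool/.snapshots
sudo btrfs subvolume snapshot -r /mnt/pool/docs /mnt/pool/.snapshots/docs-weekly
```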
Lastly, you can disable copy-on-write (CoW) for specific files or directories. It reduces their safety (no-CoW files also lose checksumming) but increases write speed - good for caches, and I’ve read VM disk images need it for performance.
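Disabling CoW is the `C` file attribute, and it only takes effect on new or empty files, so set it on the directory before data lands in it. A sketch with a hypothetical VM images path:

```
sudo mkdir -p /var/lib/libvirt/images
sudo chattr +C /var/lib/libvirt/images   # new files inherit no-CoW
lsattr -d /var/lib/libvirt/images        # should show the C flag
```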
Overall, great. Built in, and no need to muck with ZFS’s extra install steps, but you get the benefits ZFS has (as long as you’re OK with being limited to RAID 1).
glizzyguzzler to Selfhosted@lemmy.world • GitHub - SinTan1729/chhoto-url: A simple, blazingly fast, selfhosted URL shortener with no unnecessary features; written in Rust. · English · 2 · 2 months ago

Odd, I’ll try to deploy this when I can and see!
I’ve never had a problem with a volume being on the host system, except when user permissions were messed up. But if you haven’t given it a user parameter, it’s running as root and shouldn’t have a problem. So I’ll try it sometime and get back to you!
glizzyguzzler to Selfhosted@lemmy.world • GitHub - SinTan1729/chhoto-url: A simple, blazingly fast, selfhosted URL shortener with no unnecessary features; written in Rust. · English · 2 · 2 months ago

That’s pretty damn clever
glizzyguzzler to Selfhosted@lemmy.world • GitHub - SinTan1729/chhoto-url: A simple, blazingly fast, selfhosted URL shortener with no unnecessary features; written in Rust. · English · 4 · 2 months ago

I try to slap `read_only` onto anything I’d face the Internet with, to further restrict exploit possibilities - it would be abs great if you could make it work! I just follow all the recommendations on the security cheat sheet, with `read_only` being one of them: https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html

With how simple it is, I guessed that running as a `user` and restricting with `cap_drop: all` wouldn’t be a problem.

For `read_only`, many containers just need `tmpfs: /tmp` in addition to the volume for the db. I think many containers try to keep temporary file writing to one directory to make applying `read_only` easier.

So again, I’d abs use it with `read_only` when you get the time to tune it!!
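For reference, a minimal compose sketch of that whole hardening combo - the image tag and volume paths are hypothetical, and the db path depends on where the container actually keeps it:

```
services:
  chhoto-url:
    image: sintan1729/chhoto-url:latest   # hypothetical tag
    user: "1000:1000"
    read_only: true
    cap_drop:
      - ALL
    security_opt:
      - no-new-privileges:true
    tmpfs:
      - /tmp
    volumes:
      - ./data:/db   # wherever the container keeps its db
```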
glizzyguzzler to Selfhosted@lemmy.world • GitHub - SinTan1729/chhoto-url: A simple, blazingly fast, selfhosted URL shortener with no unnecessary features; written in Rust. · English · 5 · 2 months ago

Looks awesome and very efficient - does it also run with `read_only: true` (with a db volume provided, of course!)? Many containers just need a `/tmp`, but not always.
I trust the check: `restic -r '/path/to/repo' --cache-dir '/path/to/cache' check --read-data-subset=2000M --password-file '/path/to/passfile' --verbose`. The `--read-data-subset` does the structural-integrity check while also verifying an amount of actual data. If I had more bandwidth, I’d check more.

When I set up a new repo, I restore some stuff to make sure it’s there with `restic -r '/path/to/repo' --cache-dir '/path/to/cache' --password-file '/path/to/passfile' restore latest --target /tmp/restored --include '/some/folder/with/stuff'`.

You could automate that and regularly make sure some essential-but-not-often-changing files match, by restoring them and comparing them. I would do that if I wasn’t lazy, I guess, just to make sure I’m not missing some key-but-slowly-changing files. Slowly/not-often changing because a diff would fail if the file changes hourly and you back up daily, etc.
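A sketch of that automation - the repo, password file, and essential folder are hypothetical paths:

```
#!/usr/bin/env bash
set -euo pipefail

REPO='/path/to/repo'
PASS='/path/to/passfile'
CHECK_DIR='/home/me/essential-docs'
TMP="$(mktemp -d)"

# restore just the essential folder into a scratch directory
restic -r "$REPO" --password-file "$PASS" \
  restore latest --target "$TMP" --include "$CHECK_DIR"

# restic recreates the full path under the target, so compare that
if diff -r "$TMP$CHECK_DIR" "$CHECK_DIR"; then
  echo "backup matches"
else
  echo "MISMATCH - investigate!"
fi

rm -rf "$TMP"
```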
Or you could do as others have suggested and mount it locally, then just traverse it to make sure some key stuff works and is there: `sudo mkdir -p '/mnt/restic'; sudo restic -r '/path/to/repo' --cache-dir '/path/to/cache' --password-file '/path/to/passfile' mount '/mnt/restic'`.
I have my router (OPNsense) redirect all DNS requests to Pi-hole/AdGuard Home. AdGuard Home is easier for this since it can redirect the wildcard *.local.domain, while Pi-hole wants every single name individually (uptime.local.domain, dockage.local.domain, …). With the router not letting DNS out to upstream servers, and my local DNS servers set up to redirect *.local.domain to the correct location(s), DNS requests inside my local network never get out to where an upstream DNS could tell you to kick rocks.

I combined the above with a (hella cheap for 10 yr) paid domain, got a wildcard cert for it without any exposure to the WAN (a DNS challenge, so no IP recorded, but the cert is accepted by devices), and have all *.local.domain requests redirect to a single Caddy instance on one server that does the final routing to the specific services.
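A sketch of what that single Caddy instance can look like - the domain, backend addresses, and DNS provider (Cloudflare here, which needs the caddy-dns/cloudflare plugin) are all hypothetical:

```
*.local.example.com {
	tls {
		dns cloudflare {env.CF_API_TOKEN}
	}

	@uptime host uptime.local.example.com
	handle @uptime {
		reverse_proxy 192.168.1.20:3001
	}

	@dockage host dockage.local.example.com
	handle @dockage {
		reverse_proxy 192.168.1.20:5001
	}

	handle {
		abort
	}
}
```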
I’m not fully sure what you’ve got cooking, but I hope typing out what works for me helps you figure it out on your end! Basically, the router doesn’t let any DNS get by to be fucked with by the ISP.
glizzyguzzler to Selfhosted@lemmy.world • Proxmox vs. Debian: Running media server on older hardware · English · 4 · 2 months ago

I’m surprised no one’s mentioned Incus - it’s a hypervisor like Proxmox, but it’s designed to install onto Debian no problem. It does VMs and containers just like Proxmox, and snapshots too. The web UI is essential; you add a repo to get it.
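For reference, a sketch of getting it going on Debian, assuming the Zabbly package repo from https://github.com/zabbly/incus (Debian 13 also carries Incus in its own repos; the web UI package name below is Zabbly’s):

```
# after adding the repo per the zabbly/incus instructions
sudo apt install incus incus-ui-canonical
sudo adduser "$USER" incus-admin

incus admin init                              # walk through storage/network setup
incus config set core.https_address :8443     # serve the web UI
incus launch images:debian/12 testct          # a container
incus launch images:debian/12 testvm --vm     # a VM
```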
Proxmox isn’t reliable if you’re not paying them - the free users are the test users. A while back they pushed a bad update that broke things; if I’d updated before they pulled it, I’d have been hosed.

Basically you want a device you don’t have to worry about updating, because updates are good for security. And Proxmox ain’t that.

On top of their custom kernel and stuff, it just has fewer eyes on it than, say, the kernel Debian ships. Proxmox isn’t worth the lock-in and brittleness just for making VMs.
So to summarize: Debian with Incus installed. BTRFS if you’re happy with one drive or two RAID 1 drives - BTRFS gets you scrubbing and bit rot detection (and protection with RAID 1). ZFS for more drives. Toss Cockpit on too.

If you want less hands-on, go with OpenMediaVault. No room for Proxmox in my view, esp. with no clustering needed.

Also, the iGPU on the 6600K is likely good enough for whatever transcoding you’d do (esp. if it’s rare and 1080p; it’ll do 4K no problem, and multiple streams at once). The Nvidia card is just wasting power.
Hell yeah bröther
I do this but with root Docker: every service gets a `user: ####:####`, and that #### is tied to a useradd I made. Chown the data directory the container is given and it just works. In Docker this does not work for LinuxServer images, but Podman has way more `user:` support, so I have a feeling LinuxServer images will work there with the user restrictions.
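The pattern, as a sketch with a hypothetical service name and UID (the #### above is whatever uid:gid you picked):

```
# one locked-down host user per service
sudo useradd --system --no-create-home --shell /usr/sbin/nologin svc_paperless
id svc_paperless                      # note the uid:gid, e.g. 993:991

# hand that user the data directory the container is given
sudo chown -R 993:991 /srv/paperless/data
```

Then the service’s compose entry gets `user: "993:991"` (matching the uid:gid above).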
For something like Gotenberg, which is part of Paperless-ngx, I gave Gotenberg its own user too - it runs Chromium and might ingest a malicious PDF somehow. Might as well keep Gotenberg from being able to hose the rest of Paperless!

I do plan to move to Podman with 5.0+ in Debian 13, which will remove the Docker daemon attack surface and the occasional firewall issues that come with Docker. So I’m not advocating for Docker over Podman here.
I too wish for an in-depth blog post, but the GitHub answer is at least succinct enough.

This answers all of your questions: https://github.com/containers/podman/discussions/13728 (link was edited - I’d accidentally linked a Red Hat blog post that didn’t answer your question directly, though it does make clear that specifying a user in rootless Podman matters for the security of the user running the container, if that user does more than just run it).
So the best defense plus ease of use is root Podman assigning non-root UIDs to the containers. You can do the same with Docker, but Docker with non-root UIDs assigned still carries the risk of the root-level Docker daemon being hacked and exploited. Podman has no daemon to hack, meaning root Podman with non-root UIDs assigned has no downsides!
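A sketch of that root-Podman-with-non-root-UIDs setup; `--userns=auto` is the hands-off version, where Podman picks a unique unprivileged UID range per container (it may need a `containers` entry in /etc/subuid and /etc/subgid, and the image is just an example):

```
# root podman, but the container lands in its own auto-allocated user namespace
sudo podman run --rm --userns=auto docker.io/library/alpine id
# uid 0 inside the container maps to an unprivileged host range

# or pin an explicit non-root uid:gid yourself
sudo podman run --rm --user 2000:2000 docker.io/library/alpine id
```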
I would trust my life to this genius math dog’s calculations
Look, I’m not perverted, I’m just Italian
glizzyguzzler to News@lemmy.world • Girl Scouts sued over alleged heavy metals, pesticides in cookies · English · 15 · 2 months ago

This is shit. I checked the cadmium/lead numbers in the lab reports https://gmoscience.org/wp-content/uploads/2025/01/GSC-HeavyMetalsReports.pdf against the EU limits https://eur-lex.europa.eu/legal-content/en/TXT/?uri=CELEX%3A32023R0915 (mg/kg == ppm, ug/kg == ppb), and their heavy metal amounts are very low.

For the aluminum, the EU recommends at most 1 mg per kg of body weight per week on average - but this EU report makes clear that ~10 mg/kg of aluminum in baked goods is the norm: https://efsa.onlinelibrary.wiley.com/doi/epdf/10.2903/j.efsa.2008.754 . So that’s fine too.

I don’t care to go into the pesticides, but since the metal levels are good-to-fine yet presented as horrendous, I’d suspect the pesticide levels are overinflated as well.
I got my parents to get a NAS box and stuck it in their basement - they need to back up their stuff anyway. I put in two 18 TB drives (mirrored BTRFS RAID 1) from Server Part Deals (peeps have said that site has jacked their prices up, so look for alternatives). They only need like 4 TB at most. I made a backup samba share for myself. It’s the cheapest Synology box possible; their software makes a samba share with a quota easy.
I then set up a WireGuard connection on an RPi, taped that to the NAS, and I WireGuard into their local network with a script, mount the samba share, and then use restic to back up my data. It works great. Restic is encrypted, I don’t have to pay for storage monthly, their electricity is cheap af, they get backups, I keep tabs on it - everyone wins.
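That script, as a sketch - the WireGuard config name, share address, and paths are all hypothetical:

```
#!/usr/bin/env bash
set -euo pipefail

wg-quick up wg-parents                 # tunnel into their LAN

mount -t cifs //192.168.1.50/backup /mnt/parents-nas \
  -o credentials=/root/.smb-creds

restic -r /mnt/parents-nas/restic-repo \
  --password-file /root/.restic-pass \
  backup /home/me/important-stuff

umount /mnt/parents-nas
wg-quick down wg-parents
```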
Next step is to go the opposite way for them, but no rush on that goal - I don’t think their basement would get totaled in a fire, and I don’t think their house (other than the basement) would get totaled in a flood.

If you don’t have a friend or relative to host a box-at-their-house (peeps might be enticed by reciprocal backups), restic still fits the bill: the destination is encrypted, and it has simple commands to check the data for validity.

Rclone crypt is not good enough. Too many issues (path length limits, the password merely “obscured” in the config rather than protected, file structure preserved even if names are encrypted). On a VPS I use rclone as a pass-through for restic to back up a small amount of data to a Google Drive. Works great. Just don’t fuck with rclone crypt for major stuff.

Lastly, I do use rclone crypt to upload a copy of the restic binary to the destination, since the crypt means the binary can’t be fucked with, and having the binary there means it’s all you need to recover the data (in addition to the restic password you stored safely!).
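The restic-through-rclone and binary-upload bits, as a sketch - the remote names (`gdrive` plain, `gdrive-crypt` the crypt wrapper) and paths are hypothetical:

```
# restic talking to Google Drive through rclone (restic's rclone backend)
restic -r rclone:gdrive:backups/restic-repo init
restic -r rclone:gdrive:backups/restic-repo --password-file /root/.restic-pass backup /home/me/docs

# stash the restic binary itself via the crypt remote
rclone copy /usr/bin/restic gdrive-crypt:tools/
```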