Sad to think he went from this to endorsing arguably the largest purveyor of conspiracy theories in American history: https://www.space.com/space-exploration/apollo/apollo-11-moonwalker-buzz-aldrin-endorses-trump-for-president
My only regret is that I have boneitis
Did you get that thing I sent ya?
I don’t know about a guide, but I believe it’s still possible to rip 4K HDR using an HDCP downconverter. HDR/DV data is included over HDMI; the problem is that it’s all encrypted (along with the 4K stream itself) with HDCP 2, which isn’t publicly broken yet. This box (and others like HDfury) does some tricks to force a fallback to HDCP 1, which has been broken for a long time, so you should then just need a capture card that supports it.
Scene releases may have better/faster techniques depending on the streaming platform, but they probably wouldn’t talk about them if they did.
Reminds me of the Kanda Myojin Shrine in Tokyo, where you can get charms to keep the evil spirits out of your computers
There used to be one called Subgraph, but the project is dead, I think. I hope someone will pick it back up, because there are certainly people who’d prefer to sacrifice the security benefits for performance while keeping the isolation/networking parts of it.
The Qubes team, for their part, are pretty clear this won’t happen, but that seems reasonable given their project goals.
Well, I certainly made some assumptions about your workload that may not be true. What do you use it for?
As bulk storage (backups, media streaming, etc.), random 4k reads usually won’t be the limiting factor, except maybe for the occasional indexing by Plex and the like. If you’ve instead got lots of small files you’re accessing, or are hosting something like a busy database/web server on it, then you could see a significant boost, but nowhere near as significant as just co-locating the service and the data. If your workload involves a lot of writes, I would stay away: the MTTF on “cacheless” (DRAM-less) SSDs is pretty garbage, which seems like the biggest issue to me.
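If you’re not sure which bucket your workload falls into, a quick fio run against the array is the easiest way to check. A minimal sketch (the test file path and size are placeholders; point the file at the array you’re measuring):

```python
import json
import subprocess

# Rough random 4k read check with fio. --filename is a placeholder:
# put the test file on the array/drive you actually care about.
cmd = [
    "fio",
    "--name=randread-test",
    "--filename=/mnt/nas/fio-test.bin",  # hypothetical path on the array
    "--size=4G",
    "--rw=randread",
    "--bs=4k",
    "--iodepth=32",
    "--ioengine=libaio",
    "--direct=1",          # bypass the page cache so we measure the disk
    "--runtime=30",
    "--time_based",
    "--output-format=json",
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
read = json.loads(result.stdout)["jobs"][0]["read"]
# fio reports bw in KiB/s in its JSON output.
print(f"random 4k read: {read['iops']:.0f} IOPS, {read['bw'] / 1024:.0f} MiB/s")
```

If the IOPS number is nowhere near what your clients actually request, the fancier drive won’t buy you anything.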
Also, I didn’t mean to suggest buying nicer drives, just using an older one I was familiar with as a reference. I recently bought 2TB 970 Evo Plus drives on sale for $80 each, which was in your price range, but I’m not sure if that pricing made it to the UK.
You probably don’t need that kind of read/write performance in your average NAS because you’re almost certainly going to be network limited. I’m not sure what the specs on these cheap ones are, but something like a Samsung 970 Evo from a few years ago would more than saturate a 10GbE link, so doubling that wouldn’t really help.
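Back-of-the-envelope, using rated spec-sheet numbers rather than benchmarks:

```python
# Link vs. drive bandwidth, ballpark figures only.
link_gbps = 10                            # 10GbE
link_bytes_per_s = link_gbps * 1e9 / 8    # ~1.25 GB/s, before protocol overhead
drive_bytes_per_s = 3.5e9                 # ~3.5 GB/s rated seq. read, 970 Evo class

print(f"10GbE ceiling: {link_bytes_per_s / 1e9:.2f} GB/s")
print(f"drive vs link: {drive_bytes_per_s / link_bytes_per_s:.1f}x")
# One drive is already ~2.8x the network ceiling, so a faster drive
# (or two striped) changes nothing for clients on the network.
```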
That said, I recently built a four-drive M.2 raid0 on my homelab server for some read-heavy workloads, and things scaled close to how you’d expect with just mdadm+ext4 (about 80% of the drives’ combined theoretical maximum bandwidth in an fio test). If you can actually use the extra IOPS or disk bandwidth, it works pretty well and was easy to do.
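In case anyone wants to replicate it, the setup was roughly the following. Device names and the mountpoint are placeholders for your own hardware, it needs root, and the mdadm step wipes the member drives:

```python
import os
import subprocess

# Sketch of the raid0 setup above. Device names are placeholders, and
# mdadm --create DESTROYS whatever is on those drives. Run as root.
devices = ["/dev/nvme0n1", "/dev/nvme1n1", "/dev/nvme2n1", "/dev/nvme3n1"]

# Stripe the four NVMe drives into one md array.
subprocess.run(
    ["mdadm", "--create", "/dev/md0", "--level=0",
     f"--raid-devices={len(devices)}", *devices],
    check=True,
)

# Plain ext4 on top, mounted at a placeholder mountpoint.
subprocess.run(["mkfs.ext4", "/dev/md0"], check=True)
os.makedirs("/mnt/raid0", exist_ok=True)
subprocess.run(["mount", "/dev/md0", "/mnt/raid0"], check=True)

# The sequential-read test behind the ~80%-of-theoretical figure.
subprocess.run(
    ["fio", "--name=seqread", "--filename=/mnt/raid0/fio-test.bin",
     "--size=16G", "--rw=read", "--bs=1M", "--iodepth=32",
     "--numjobs=4", "--ioengine=libaio", "--direct=1",
     "--runtime=30", "--time_based", "--group_reporting"],
    check=True,
)
```

Nothing exotic: no tuned chunk sizes or filesystem flags, just the defaults, and it still got close to linear scaling.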
That and Sprint LTE in 2024