Hey guys,
I want to shred/sanitize my SSDs. If it were a normal hard drive I would stick to ShredOS / nwipe, but since SSDs seem to be a little more complicated, I need your advice.
When reading through some posts on the internet, many people recommend using the manufacturer's software for sanitizing. Currently I am using the SN850X SSD from Western Digital, but I also have a 990 PRO SSD from Samsung. Neither manufacturer seems to have a specialized Linux-compatible tool to perform this kind of action.
What would be your approach to shredding your SSD (without physically destroying it)?
~sp3ctre
OK, there are two completely opposite schools of thought on shredding SSDs:

- All SSDs have TRIM functionality, so any unused data gets set to 0 automatically by the OS or, in some cases, by the SSD controller.
- Even if TRIM sets it to zero, there is always some deviation from the original zero, and a very, very sophisticated attacker can find the actual data. Simply using shred or /dev/zero doesn't help, because the SSD controller always writes to a different physical location, even for the same file. The only real way to ensure data can't be recovered is to smash the drive.

Pick and choose depending on your threat model. If you're just selling it to someone, or you know that no nation-state actors are after your data, then just do a normal delete and then TRIM. If you think someone with capabilities is after your data, and that they are willing to spend a few hundred thousand dollars, or even a few million, for whatever is on your SSD, then just microwave it and smash it with a hammer. No need to shred or zero.
So much bad advice in here relating to NVMe drives.

Any NVMe drive worth its salt these days is an OPAL-adhering, self-encrypting-capable drive for data storage.

This means that on Linux you simply install nvme-cli, then do a mode 2 crypto erase: the crypto key is dropped and all data on the drive becomes unreadable.

Y'all could stand to get with the times a bit more and learn about what NVMe drives actually bring to the table:
https://tinyapps.org/docs/nvme-secure-erase.html
For drives that have self-encryption disabled, a mode 1 wipe will have the controller fill all regions with meaningless data instead.
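A minimal sketch of what that looks like with nvme-cli (device names here are examples; check what your controller actually reports first):

```
# See which erase capabilities the controller advertises
sudo nvme id-ctrl /dev/nvme0 -H | grep -i erase

# Mode 2 (crypto erase): drops the internal encryption key, data instantly unreadable
sudo nvme format /dev/nvme0n1 --ses=2

# Mode 1 (user data erase): the controller erases all user data regions
sudo nvme format /dev/nvme0n1 --ses=1
```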
If you ever need a really stupid way to sanitize deleted data without special privileges, just fill the disk up with some files, then delete them. On Linux this is easy with cat and /dev/zero or /dev/urandom. Can't be sure it gets everything, but it's better than doing nothing.
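A minimal sketch of that, assuming the filesystem is mounted at /mnt/disk (hypothetical path):

```
# Fill all remaining free space with zeros; cat stops when the filesystem is full
cat /dev/zero > /mnt/disk/filler.bin
sync                      # flush so the data actually reaches the flash
rm /mnt/disk/filler.bin   # free the space again
```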
Thankfully it is largely just a few commands with built-in tools to tell the drive firmware to secure erase:
SATA SSD: https://acceptdefaults.com/2023/01/06/secure-erase-an-ssd/
NVME SSD: https://acceptdefaults.com/2022/08/11/secure-erase-an-nvme-drive/
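For the SATA case, guides like that usually boil down to roughly this hdparm sequence (device and password are placeholders; drives often come up "frozen" and need a suspend/resume cycle before the erase is accepted):

```
# Check the drive's security state: it must support the feature and not be "frozen"
sudo hdparm -I /dev/sdX | grep -A8 Security

# Set a temporary user password, then issue the erase
sudo hdparm --user-master u --security-set-pass p4ss /dev/sdX
sudo hdparm --user-master u --security-erase p4ss /dev/sdX
```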
This. And then when it’s done, use a hex editor and look at the raw disk to make sure it actually worked. Some manufacturers don’t implement it properly.
I just shove them into a grinder…
Sorry, but can you explain a little how this is done, exactly? What should I see when everything has worked correctly?
Preferably all zeroes; possibly random data or a fixed string. Certainly not anything readable.
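You don't even need a full hex editor; a quick spot check from the shell does it (device name is an example):

```
# Dump the first few MiB of the raw device; hexdump collapses repeated lines,
# so a fully zeroed region shows up as a single row of zeros followed by "*"
sudo head -c 4M /dev/nvme0n1 | hexdump -C | head

# Sample a region deeper into the disk as well
sudo dd if=/dev/nvme0n1 bs=1M skip=5000 count=1 status=none | hexdump -C | head
```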
According to the upvotes, this seems to be the way. I will try that, thank you!
for future reference, encrypt your drives from the get-go. even if it’s not a mobile device, you can use on-device keys to unlock it without a pass-phrase.
source: used `shred` on a couple of 3.5" 4 TB drives before selling them, took ages…

I will take that into consideration. I already encrypted my older laptop (hard drive) with LUKS. Is there anything special about encrypting SSDs? Do you experience speed losses on the SSD after doing so?
every mobile device I ever owned is encrypted and protected with a reasonably secure pass-phrase so losing it is no big deal. it is conceivable someone could forensic the shit out of my setup but that is highly unlikely; it’s far more likely it’ll get wiped and sold or parted out.
I’ve done no benchmarks but I haven’t experienced any issues ever. the oldest linux device I own is a 2011 MBP (i7-2635qm, so quadcore) and I don’t perceive any speed degradation; it’s possible 1st gen Core i5/i7 could have issues as those don’t have AES-NI in hardware or sumsuch plus they’re SATA2 only, but those would be 15+ years old at this point.
with btrfs that has on-the-fly compression, copy-on-write, and deduping, everything works seamlessly, even when I have database-spanking applications in local development.
so the only thing I’ve changed recently is encrypting every device I have, not just the mobile ones. the standalone devices get unlocked with a key-file from the local filesystem so they boot without the prompt. selling/giving away any of those drives, mechanical or SSD, is now a non-issue.
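for reference, roughly what the key-file part looks like with LUKS (paths and names here are just examples):

```
# generate a random key file and enroll it in the LUKS header
sudo mkdir -p /etc/luks-keys
sudo dd if=/dev/urandom of=/etc/luks-keys/data.key bs=512 count=8
sudo chmod 0400 /etc/luks-keys/data.key
sudo cryptsetup luksAddKey /dev/sdX /etc/luks-keys/data.key

# /etc/crypttab entry so it unlocks at boot without a prompt:
#   data  UUID=<uuid-of-the-luks-partition>  /etc/luks-keys/data.key  luks
```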
Everyone has given Linux answers, but it's also worth knowing that quite a lot of UEFI firmwares have the ability to secure erase as well. There are also a number of USB-bootable disk management tools that can do a secure erase.
Good to know. Turns out that linux users are not lost when it comes to this topic!
Don't ever write any really private data to the SSD in cleartext. Use an encrypted filesystem and "erase" by throwing away the key. That said, for modern fast SSDs the performance overhead of the encryption might be a problem. For the old SATA SSD in my laptop, I don't notice it.
> That said, for modern fast SSDs the performance overhead of the encryption might be a problem.
How so? I've been running LUKS on modern NVMe drives for years and there's the same, maybe at worst 10%, hit in write/read speeds.
That's also my experience. There isn't really any noticeable performance hit, even on modern SSDs. It should be the same amount of data coming from the SSD anyway; the SSD isn't even the part doing the cryptography (with LUKS), so it shouldn't have any effect on the drive side. And the CPU handles the decryption just fine.
There is no discernible performance hit
i was just in the same situation and stumbled upon an issue, but more on that later.
nvme cli might be what you are looking for, the arch wiki has got a good tutorial https://wiki.archlinux.org/title/Solid_state_drive/NVMe
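for reference, the sanitize route in nvme-cli looks roughly like this (a sketch; the numeric actions follow the NVMe spec, 2 = block erase, 4 = crypto erase, and the device names are examples):

```
# check whether the drive supports sanitize at all
sudo nvme id-ctrl /dev/nvme0 -H | grep -i sanitize

# kick off a block-erase sanitize (use --sanact=4 for a crypto erase instead)
sudo nvme sanitize /dev/nvme0 --sanact=2

# poll the sanitize log until it reports completion
sudo nvme sanitize-log /dev/nvme0
```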
but: it depends on how you connect the nvme ssd to your system. if you use an external enclosure/adapter, the nvme shows up as /dev/sda because it is not attached to the proper interface. i did not manage to get nvme-cli to work in that case.
but, and that is the much easier way to solve that issue: the bios is your friend. most modern bios provide a secure erase nvme option. so just stick the nvme ssd in your computer and try to do it that way. only took a couple of seconds due to the way it works on nvme ssds as far as i understand.
that said: i've got two older WD nvme ssds that i couldn't format using four (!) different PCs and their bioses, none of the manufacturer's software for windows or bootable usb sticks would work either (tried the latter on three different PCs), and the external enclosure solution provided no help either. only the bios of an older asrock board easily accepted the nvme and managed to secure erase it. no clue what went wrong the first times, but at least they are clean now.
I use nwipe at work: https://github.com/martijnvanbrummelen/nwipe. I have no idea if it's better or worse than the methods already being discussed here, but it's been my preference for years.
Simple solution is to use `cryptsetup` to encrypt it, forget the key, and optionally overwrite the first megabyte or so of the disk (where the LUKS header is).
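A sketch of that approach, assuming /dev/sdX is the disk to be shredded (this destroys everything on it):

```
# Put a LUKS container on the raw disk with a throwaway passphrase
sudo cryptsetup luksFormat --type luks2 /dev/sdX
sudo cryptsetup open /dev/sdX wipeme

# Writing zeros through the mapping overwrites the old plaintext with ciphertext
sudo dd if=/dev/zero of=/dev/mapper/wipeme bs=1M status=progress conv=fsync
sudo cryptsetup close wipeme

# "Forget the key": clobber the LUKS header so the data is gone for good
sudo dd if=/dev/urandom of=/dev/sdX bs=1M count=16 conv=fsync
```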
Just use `blkdiscard`.

Educate me. My response would normally be `dd if=/dev/random of=/dev/sdX bs=1024M`, followed by a `sync`. Lowest common denominator nearly always wins in my book over specialty programs that aren't part of a minimal core; tools that also happen to be in BusyBox are the best.

What makes this situation special enough for something more complex than `dd`? Do SSDs not actually save the data you tell them to? I'm trying to guess at how writing a disk's worth of garbage directly to the device would fail. I'm imagining some holographic effect, where you can actually store more data than the drive holds. A persistent, on-disk cache that you have no way of directly addressing, but which can somehow be read later and can hold latent data?

If I were really paranoid, I'd dd, read the entire disk to /dev/null, then dd again. How would this not be sufficient?

I'm honestly trying to figure out what the catch is here, and why this was even a question - OP doesn't sound like a novice.
Well, first I need to note that `blkdiscard` is not more secure, but it is much faster. It does not actually wipe the flash memory; it just tells the controller to mark it as unused. The controller will then drop the stored data whenever it decides is best: maybe immediately, maybe just before writing new data. Either way, it won't provide the ability to read it through the controller. Reading it would still be possible if you can get direct access to the flash memory, bypassing the controller.

Second, you forgot that SSDs are not HDDs, and data is not stored exactly at the offset you write it to. The controller remaps memory blocks as needed, and it has more blocks than are actually available to the user. So when you use `dd` (or `cp`, or any other program writing directly to the block device) you only overwrite blocks that are currently mapped; some blocks can still keep old data. So using `dd` is also not secure if someone can get direct access to the flash memory, and it takes much longer and reduces the flash lifetime.

Several people here mentioned the secure erase feature of SSDs. I didn't know about it. It should be more secure than both methods if implemented correctly by the manufacturer (i.e. it clears all memory cells immediately). In the worst case it could be the same as `blkdiscard`, I guess.

SSDs don't store data like HDDs, where you'd overwrite the same spot on a magnetic platter. The controller on an SSD will instead handle it, do some magic, and decide what to do. So if you use `dd` to replace some part with zeros, it might instead invalidate the old data, allocate new memory to you, and not really overwrite anything. That's why SSDs have separate commands for wiping content. I'd say google for "ssd secure erase".
I agree dd isn't useful for individual files. I contend that if I have an SSD of size X and I write X amount of random bytes to it, there's nothing magic about the SSD's construction that will preserve any previous information on the drive. Wear leveling cannot magically make the drive store more data than it can hold.
Well, in fact it can. That's "overprovisioning". The SSD has some amount of reserved space as replacement for bad cells, and maybe to speed things up. So if you overwrite 100% of what you have access to on the SSD, you'd still have some amount of data you didn't catch. But loosely speaking you're right: if you overwrite the entire SSD, and not just files or one partition or something like that, you force it to replace most of the content.

I wouldn't recommend it, though. There are secure erase, blkdiscard and the nvme format commands, which do it the right way, and `dd` is just a method that gets it about right (though not 100%) in one specific case.

Hum. I read that `blkdiscard` only marks the blocks (cells?) as empty and doesn't change the contents, and that a sophisticated enough lab can still read the bits. In particular, the disk has to claim to support "Deterministic read ZEROs after TRIM"; if it doesn't, you have no guarantee of erasure. Without knowing anything about the make and model, `blkdiscard` would be categorically less secure.

Right?
Yes, thanks. Just invalidating or trimming the memory doesn't cut it. OP wants it erased, so it needs to be one of the proper erase commands. I think blkdiscard also has flags for that (zero, secure), so you could do it with that command as well, if it's supported by the device and you append the correct options. I think other commands are easier to use (if supported).
I did read (on the Arch wiki) that `blkdiscard -z` is identical to `dd if=/dev/zero`, so that tracks. It (`blkdiscard`) is easier to use. However, given my memory and how infrequently I'll ever use it, I'll have forgotten the name of the command by next week. I'll never forget `dd`, though, mainly because it's more general purpose and I use it occasionally.

OP probably wants `blkdiscard -z`, though.
But it can store more data than it tells you it can. All drives actually lie about their capacity; they all have extra sectors to replace bad ones.
Not that much extra.
Enough to not consider it securely erased.
TIL! Or should I say TILL! (Today I learned (more about) Linux)
Use the secure erase function, which is built into SATA and other specs; it applies a voltage spike to clear the cells of all held charge, thus wiping them. This happens near-instantly: it'll signal that it's finished within a minute, and usually takes much less time than that.
If you want to be extra paranoid, I suppose you could follow that up by encrypting the entire (empty) drive and then doing it again, though I'm not sure this has any benefit; it's the closest you can get to forcing the cells to be used again and then cleared again. Even this does not guarantee that exhausted, worn-out areas of flash are not spared. It's unlikely for large amounts of data to be recovered from those unless your drive is failing or completely worn out, but it's also why, if you ever store sensitive data on an SSD, it's preferable to do so in encrypted form (such as encrypting the whole disk or partition).
I did some light reading. I see claims that wear leveling only ever writes to zeroed sectors. Let me get this straight:

If I have a 1TB SSD, and I write 1TB of SecretData, and then I delete it and write 1TB of garbage to the disk, it's not actually holding 2TB of data with the SecretData hidden underneath the wear leveling? That's the claim? And if I overwrite that with another 1TB of garbage, it's holding, what now, 3TB of data? Each data sequence hidden somehow by the magic of wear leveling?

Skeptical Ruaraidh is skeptical. Wear leveling ensures data on an SSD is written to the free sectors with the lowest write count. It can't possibly be retaining data if data the maximum size of the device is written to it.
I see a popular comment on SO saying you can't trust `dd` on SSDs, and I challenge that: in this case, wiping an entire disk by dumping /dev/random must clean the SSD of all other data. Otherwise, someone's invented the storage version of a perpetual motion machine. To be safe, sync and read it back, and maybe dump again, but I really can't see how an SSD would hold more data than it can.

`dd if=/dev/random of=/dev/sdX bs=2048 count=524288`
If you’re clever enough to be using zsh as your shell:
`repeat 3 (dd if=/dev/random of=/dev/sdX bs=2048 count=524288 ; sync ; dd if=/dev/sdX of=/dev/null bs=2048)`
You reduce every single cell's write lifespan by 2 writes; with modern lifespans of 3,000-100,000 writes per cell, that's not significant.
Someone mentioned `blkdiscard`. If you really aren't concerned about forensic analysis, this is probably the fastest and least impactful answer: it won't affect cell lifetimes by even a measly 2 writes. But it also doesn't actually remove the data; it just tells the SSD that those cells are free and empty. Probably really hard to reconstruct data from that, but also probably not impossible. `dd` is a shredding option: safer, slower, and with a tiny impact on drive lifespan.

> in this case, wiping an entire disk by dumping /dev/random must clean the SSD of all other data.
Your conclusion is incorrect because you made the assumption that the SSD has exactly the advertised storage or infinite storage. What if it’s over-provisioned by a small margin, though?
Then over-writing the size by a few gigs, reading the entire disk, and writing it again - as I put in my example - should work. In any case, `blkdiscard` is not guaranteed to zero data unless the disk specifically supports that capability, and data can be forensically extracted from a `blkdiscard`ed disk. The Arch wiki says `blkdiscard -z` is equivalent to running `dd if=/dev/zero`.

I don't see how attempting to over-write would help. The additional blocks are not addressable on the OS side. `dd` will exit because it reached the end of the visible device space, but those blocks will remain untouched internally.

> The Arch wiki says `blkdiscard -z` is equivalent to running `dd if=/dev/zero`.

Where does it say that? Here it seems to support the opposite. The linked paper says that two passes worked "in most cases", but the results are unreliable. On one drive they found 1GB of data to have survived 20 passes.
Sorry, it wasn’t the Arch wiki. It was this page.
I hate using Stack Exchange as a source of truth, but the Arch wiki references this discussion, which points out that not all SSDs support "Deterministic read ZEROs after TRIM", meaning a pure blkdiscard is not guaranteed to clear data (unless the device advertises that feature), leaving it available for forensics. Which means having to use `--secure`, which is (also) not supported by all devices, which means having to use `-z`, which the previous source claims is equivalent to `dd if=/dev/zero`.

So the SSD is hiding extra, inaccessible cells. How does `blkdiscard` help? Either the blocks are accessible, or they aren't. How are you getting at the hidden cells with `blkdiscard`? The paper you referenced does not mention `blkdiscard` directly, as that's a Linux-specific command, but other references imply or state that it's just calling TRIM. That same paper, in a footnote below section 3.3, claims TRIM adds no reliable data security.

It looks like - especially from that security paper - that the cells are inaccessible and not reliably clearable by any mechanism. `blkdiscard` then adds no security over `dd`, and I'd be interested to see whether, with `-z`, it's any faster than `dd`, since it would perforce have to write zeros to all blocks just the same, rather than just marking them "discarded".

I feel that, unless you know the SSD supports secure trim, or you always use `-z`, `dd` is safer, since `blkdiscard` can give you a false sense of security, and TRIM adds no assurances about wiping those hidden cells.

> So the SSD is hiding extra, inaccessible cells. How does `blkdiscard` help? Either the blocks are accessible, or they aren't. How are you getting at the hidden cells with `blkdiscard`?

The idea is that `blkdiscard` will tell the SSD's own controller to zero out everything. The controller can actually access all blocks, regardless of what it exposes to your OS. But will it do it? Who knows?

> I feel that, unless you know the SSD supports secure trim, or you always use `-z`, `dd` is safer, since `blkdiscard` can give you a false sense of security, and TRIM adds no assurances about wiping those hidden cells.

After reading all of this I would just do both… Each method fails in different ways, so their sum might be better than either in isolation.
But the actual solution is to always encrypt all of your storage. Then you don’t have to worry about this mess.
> The idea is that blkdiscard will tell the SSD's own controller to zero out everything

Just to be clear, `blkdiscard` alone does not zero out anything; it just marks blocks as empty. `--secure` tells compatible drives to additionally wipe the blocks; `-z` actually zeros out the contents of the blocks, like `dd` does. The difference is that - without the secure or z options - the data is still in the cells.
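Roughly, in command form (these flags are from util-linux's `blkdiscard`; the device name is an example):

```
sudo blkdiscard /dev/sdX           # plain discard: everything marked unused, contents may linger
sudo blkdiscard --secure /dev/sdX  # secure discard, only on drives that support it
sudo blkdiscard -z /dev/sdX        # zero-fill through the block layer, like dd if=/dev/zero
```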
> always encrypt all of your storage

Yes! Although, I don't think hindsight is helpful for OP.
This is the way. Use urandom, though. Then after that you can just blkdiscard to wipe. I would add a sync between the commands.
`/dev/random`, seriously? That will take ages and has no advantage over `/dev/zero`. Even when you really need to fill your drive with random data, use `/dev/urandom`; there's a chance that will finish within a couple of days, at least. And no, there's no guarantee that it will wipe all blocks, because there are reserved blocks that only the device firmware can access and rotate. Some data on rotated blocks can still be accessible to forensic analysis, if you care about that.

I think most modern distros use urandom for random too. These days, the PRNG is strong enough that it doesn't need to stop and wait for more entropy.
Would ‘overwrite with zeroes’ in GNOME Disks work?
No, SSDs have a ton of wear leveling, where data is shifted around rather than deleted. Erasing cells wears out the SSD, so the controller holds off on it as much as possible. SSDs are something like 10% bigger than advertised just to prolong their life.

Even if you write the whole thing with random data and then zeros, it will still have blocks in places inaccessible to normal users that contain old data.

It's always best to use disk encryption, or to keep any sensitive data in filesystem-level encryption like Plasma Vaults or fscrypt.
That’s good to know.
All of my own drives are encrypted except for a USB stick that I use for transferring files to a windows machine.