And additionally, isn’t there a way to exploit this so we can store more stuff on PCs?

Edit: can’t thank you all individually but thanks to everyone, I learnt something today, appreciate all of your replies!

  • zeppo@lemmy.world · 36 points · 17 days ago

    Because of how filesystems work. There’s basically an index that tells the OS what files are stored where on the disk. The quickest way of deletion simply removes the entry in that table. The data is still there, though. So a data recovery program would read the entire disk and try to rebuild the file allocation table or whatever by detecting the beginning and ends of files. This worked better on mechanical drives than SSDs.
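
    A tiny toy sketch of that idea in Python (hypothetical names, nothing like a real filesystem): the “index” maps filenames to where their bytes live, a quick delete only drops the index entry, and a recovery pass ignores the index and scans the raw bytes instead:

    ```python
    # Toy "disk": a flat bytearray plus an index of {name: (offset, length)}.
    disk = bytearray(64)
    index = {}

    def write_file(name, data):
        offset = max((o + l for o, l in index.values()), default=0)
        disk[offset:offset + len(data)] = data
        index[name] = (offset, len(data))

    def quick_delete(name):
        # "Deleting" only removes the index entry; the bytes stay on the disk.
        del index[name]

    def recover():
        # A recovery tool skips the index and reads the raw bytes directly.
        return bytes(disk).rstrip(b"\x00")

    write_file("secret.txt", b"hello world")
    quick_delete("secret.txt")
    print(index)      # {} -> as far as the OS is concerned, the file is gone
    print(recover())  # b'hello world' -> the data is still there
    ```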

    • pearsaltchocolatebar@discuss.online · 21 points · 17 days ago

      Yup, and many security suites will include a tool that writes all 0s or garbage to those sectors so the data can’t be recovered as easily (you really need multiple passes for it to be gone for good).
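
      As a rough illustration of what such a wiping tool does (a minimal sketch, not a real secure-erase utility; the filename and pass count are made up), in Python:

      ```python
      import os

      def overwrite_file(path, passes=3):
          # Overwrite the file's bytes in place with random data, then zeros,
          # before unlinking it. A toy sketch of the idea; on SSDs, wear
          # levelling means this is no guarantee the old cells were touched.
          size = os.path.getsize(path)
          with open(path, "r+b") as f:
              for _ in range(passes):
                  f.seek(0)
                  f.write(os.urandom(size))   # garbage pass
                  f.flush()
                  os.fsync(f.fileno())        # force it out to the device
              f.seek(0)
              f.write(b"\x00" * size)         # final zero pass
              f.flush()
              os.fsync(f.fileno())
          os.remove(path)

      # overwrite_file("old_taxes.pdf")  # hypothetical file
      ```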

      • zeppo@lemmy.world · 2 points · 17 days ago

        right, i’m super out of date, but you’d want to use shred or something like dd if=/dev/urandom of=/dev/sdX (writing random data over the whole device) to securely erase them.

  • CameronDev@programming.dev · 24 points · 17 days ago

    You have a notebook. On the first page, you put a table of contents. As you fill in pages, you note them down in the table of contents at the start.

    When you want to delete a page, instead of erasing the whole page now (there are hundreds free still, why waste the effort), you erase the entry in the table of contents.

    Now if someone finds your notebook, according to the table of contents there is no file at page X. But if they were to look through every single page, they would be able to find the page eventually.

    This is loosely how file systems work. You can’t really use it to boost storage: the number of pages is finite, and if you need to write a new page, anything not listed in the contents is fair game to be overwritten.

  • AstralPath@lemmy.ca · 23 points · 17 days ago

    If you remember the VCR days, imagine your hard drive is a copy of Bambi. You, in preparation for a family event, need a tape to store footage of the event on. You decide that you haven’t watched, or wanted to watch, Bambi in a long time, so you designate that tape as the one you’re gonna use when the party day comes.

    At this point your hard drive (the copy of Bambi) has been designated as usable space for new data to be written in the future.

    Bambi is not lost yet and won’t be until you record over that tape, so if you wanted to, you could still watch Bambi between now and the party even though you plan to overwrite it. Once Bambi is overwritten, it’s no longer recoverable, but in the interim between designating the space as usable and actually using it, the data persists.

  • lurch (he/him)@sh.itjust.works · 13 points · 17 days ago

    it’s inefficient to really erase the data, so what usually happens is that it just gets marked as deleted. the data only gets overwritten when another file is written to the same area, which often doesn’t happen immediately. even if a drive gets formatted, the empty metadata structures of the new partitions and file systems are just written on top; since they have no file entries yet, the previous data just sits there, invisible and inaccessible, until new files are created and maybe overwrite a bit of the old data.
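
    To make that concrete, a minimal made-up sketch (not a real filesystem or format tool): a quick format just writes fresh, empty metadata at the front, and the old payload bytes sit there untouched until something new lands on top of them:

    ```python
    # Toy model: "formatting" rewrites only the metadata region at the front.
    disk = bytearray(b"OLDFS   secret payload".ljust(32, b"\x00"))

    def quick_format(disk):
        # Fresh, empty metadata overwrites the old metadata; nothing else moves.
        disk[0:8] = b"NEWFS   "

    quick_format(disk)
    print(bytes(disk))
    # b'NEWFS   secret payload\x00...' -> the old payload is still there,
    # invisible to the new filesystem until new files overwrite it
    ```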

  • pixeltree · 11 points · 17 days ago

    If I tell you all the boxes in a warehouse are empty, that doesn’t mean they are. It just means I think they are. You can go and check them manually to see if they’re actually empty or if I was lying or forgot there was stuff in them. The metaphor breaks down a little bit here but if you look at the boxes closely, the ones with dust on top were probably empty for a long time and the ones without were probably emptied recently.

  • _____@lemm.ee · 9 points · 17 days ago

    It’s because hard drives don’t turn every written bit into a 0 when you delete something. Instead, the filesystem just tells the operating system that the region you deleted is free for writing again.

    At some point in the future, through normal usage, that region will either read as corrupted or hold something completely different (even though it may read as corrupt from our perspective, it will still work as expected when new data is written into it).

  • Transient Punk@sh.itjust.works · 8 points · 17 days ago

    Oftentimes when you delete something off a computer, the computer simply deletes the address of the data but doesn’t overwrite the data itself.

    Think of a map of a city. If you delete a house from the map, you may not be able to find it anymore, but the house is still there. It’s the same for computer storage.

  • hddsx@lemmy.ca · 6 points · 17 days ago

    No, there is no way to store more stuff on PCs.

    Hard drives are devices that store 1s and 0s. It’s a bit more complicated than that, but the short answer is: you can wipe a file system and the files are still there.

  • Snot Flickerman · 6 points · 17 days ago

    The only way to truly securely delete data is physical destruction of the disk. Remove the drive and drill through the hard disk platters or the SSD memory chips.

    • FuglyDuck@lemmy.world · 7 points · 17 days ago

      Even a single overwrite pass is sufficient to stop most attempts at recovery; the only people who might be able to reconstruct that data are top forensic labs (FBI-level) and the like.

      Even then, most of the data would come back corrupted and mostly useless.

      2 or 3 overwrites are sufficient to prevent that as well.

      For SSDs, a single overwrite renders it impossible, simply because of how the data is physically stored; there’s no residual “footprint” or “ghost”. The NAND flash memory uses floating-gate transistors to store the data: either the gate is flipped or it’s not, and there’s no way to know if it was previously flipped, only what its current state is.

      Physical destruction is usually only recommended for extreme cases where the drive held extremely sensitive data and the consequences of any amount of that data being recovered would be catastrophic; even then, the process begins with overwriting the data. (Also keep in mind that just breaking the platters isn’t enough; they have to be shattered into itty-bitty pieces.)

  • orcrist@lemm.ee · 6 points · 17 days ago

    Generally speaking, writing new data is what actually erases old data. So no, you can’t exploit it for extra storage space.

  • TESTNET@lemmy.world · 5 points · 17 days ago

    Because as long as it isn’t overwritten, the data can still reside, in a residual way, in the storage sectors on the drive. HDD-scanning programs check through the sectors for data hiding in them, some more successfully than others, which is why different tools will find more or less data.

    This is also why data disappears on drives when a physical issue causes sectors to stop working (“bad sectors”): the data starts to seemingly vanish or corrupt. If the drive is still operating and booting into Windows, you can sometimes see folders or files present one moment and missing from the OS the next; that’s often an indicator of imminent drive failure due to bad sectors. In this scenario it gets less likely you’ll recover the data the longer the drive is in use, because more of the sectors will probably die, so you want to be doing the recovery rather than keeping the drive in use in Windows. I say Windows, but it really applies to any HDD with any OS installed.

  • TheBananaKing@lemmy.world · 4 points · 17 days ago

    A file comes in two parts: the actual blocks of data that hold the file itself, and a directory entry with the name of the file and the location of its first block.

    When you delete a file, it only scrubs out the directory entry, and re-lists the data blocks as available for use.
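
    A minimal sketch of that two-part layout (made-up names, not a real on-disk format): data blocks hold the contents, a directory entry points at them, and deleting scrubs the entry and returns the block numbers to the free list without touching the blocks themselves:

    ```python
    # Data blocks, a free list, and a directory of {name: [block numbers]}.
    blocks = {0: b"chapter one", 1: b"chapter two"}
    free_blocks = []
    directory = {"novel.txt": [0, 1]}

    def delete(name):
        # Scrub the directory entry and re-list the blocks as available;
        # the block contents are left exactly as they were.
        free_blocks.extend(directory.pop(name))

    delete("novel.txt")
    print(directory)    # {} -> no file, according to the directory
    print(blocks[0])    # b'chapter one' -> still readable by a raw scan
    ```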

  • InFerNo@lemmy.ml · 3 points · 17 days ago

    On ext4 drives, 5% is reserved for the system for emergencies. Since disks have been getting larger over the years, 5% is a pretty big chunk. It’s possible to tell the system to use a lower reserve; it’s the only instance I know of where you can seemingly gain more storage out of thin air. I’ve used it in moments of emergency when a server’s disk was too full to function.
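
    For a sense of scale, a quick back-of-the-envelope (the 4 TB drive is just a hypothetical example); on ext4 the reserve is usually lowered with tune2fs -m and a smaller percentage:

    ```python
    # How much space the ext4 reserved-blocks percentage takes on a big drive.
    disk_gb = 4 * 1000  # hypothetical 4 TB drive, in GB

    for reserve_pct in (5, 1):
        reserved_gb = disk_gb * reserve_pct / 100
        print(f"{reserve_pct}% reserve on a 4 TB drive ~ {reserved_gb:.0f} GB")

    # 5% reserve on a 4 TB drive ~ 200 GB
    # 1% reserve on a 4 TB drive ~ 40 GB
    ```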

  • 1rre@discuss.tchncs.de · 3 points · 17 days ago

    Storage forensics can look at variations in charge to suggest “this used to be a 1” or “this used to be a 0”.

    To store more data that way, it’d have to be analog data in reality; otherwise data loss due to charge decay would be immense, so you’d need so much error checking that you’d lose most of the storage savings.

  • dodgy_bagel · 1 point · 17 days ago

    Follow-up question: if I reformat my drive and fill it with 0s, how reliable are the mechanisms to recover previously stored data on:

    • an HDD
    • an SSD

    Asking as a hypothetical for a hypothetical friend.

    • TheBananaKing@lemmy.world · 4 points · 17 days ago

      If you actually fill the drive with zeroes, the chances of anyone getting anything back are somewhere between fuck and all.

      Old MFM drives (tech likely as old as your parents) had a theoretical exploit for recovering erased data.

      With modern tech, that loophole was firmly closed; even state-level actors would be shit outta luck.

    • pixeltree · 2 points · 17 days ago

      dban is kind of the standard for wiping data; iirc it does 3 cycles of overwriting everything with 1s, then 0s.