So I have this external 2.5" drive salvaged from an old laptop of mine. I was trying to use it to back up/store data, but the transfer to the drive fails repeatedly at the ~290GB mark, leading me to believe that maybe there is a bad sector on the drive. I tried to inspect the drive using smartmontools and smartctl, but since it is an external drive, I was not allowed to do so. Is there any way for me to inspect and fix this drive? I am on Fedora ublue-main. The HDD is a 1TB Seagate drive.
Edit: I am a Linux noob, so some hand-holding will be appreciated. Also, I am looking to use this drive only for low-priority media files which I don't mind losing, so please help even though it is not the greatest idea to use a failing drive.
Edit 2: It seems my post is not clear about what I am doing. I don't want to recover data from the drive. I want to try to keep using as much of the drive as possible for storing data.
I recommend throwing away this drive, because blocks that are readable and writable now may fail soon. But if you want to use it anyway, it is possible to collect a list of inaccessible blocks using badblocks and pass it to mkfs to create a filesystem that ignores those blocks. IIRC this is described in man badblocks.
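For concreteness, a minimal sketch of that flow. The partition name sdX1 is a placeholder, and the block size passed to both commands has to match (which is why 4096 appears twice); double-check the device before running anything, since mkfs is destructive.

sudo badblocks -b 4096 -s -o badblocks.txt /dev/sdX1   # read-only surface scan, 4096-byte blocks, with progress
sudo mkfs.ext4 -b 4096 -l badblocks.txt /dev/sdX1      # create ext4 with the same block size, skipping the listed blocks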
That’s not looking good. Usually the drive will transparently remap a bad sector to a spare sector and mark the bad one internally; if errors are reaching you, the spares have probably all already been used up.
smartctl should work just fine over USB, unless your USB adapter for the drive is really bad. Make sure you’re using sudo as well. If worst comes to worst, try it in a different computer.
Your next goal would be to get it to do a full self-test with smartctl. A low-level format might help clear some bad state, and it might be okay afterwards with a fresh format accounting for whatever defects it built up over time.
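If it helps, the usual incantations look roughly like this, assuming the drive shows up as /dev/sdX (replace with your actual device; lsblk will tell you which one it is):

sudo smartctl -a /dev/sdX            # dump identity, SMART attributes and the error log
sudo smartctl -a -d sat /dev/sdX     # same, but forcing the SAT transport if the USB bridge needs it
sudo smartctl -t long /dev/sdX       # kick off the extended (full-surface) self-test in the background
sudo smartctl -l selftest /dev/sdX   # check the self-test results once it has finished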
I wouldn’t recommend it. It might work for a bit and then just die completely.
Use ddrescue to copy to a working disk. If I remember correctly, it will retry a number of times and eventually skip the broken sectors, so that at least you have a working filesystem on the copy.
ddrescue to the rescue! This is the best advice to get the data out. Don’t muck about too much, because more things can fail. Use ddrescue to rescue the data and write a disk image somewhere else. Then make another copy of that image and try the filesystem rescue magic on the copy. Really make sure the bad disk is marked as unusable.
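A rough example of that, assuming the failing drive is /dev/sdX and you have a healthy disk with enough free space mounted for the image (names are placeholders):

sudo ddrescue -d -r3 /dev/sdX rescue.img rescue.map   # -d: direct disc access, -r3: retry bad areas three times

The map file is what lets you stop, resume, or run extra passes later without re-reading the areas that already came back clean.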
I think there is a misunderstanding because of my phrasing. I don’t want to recover data from the drive. Instead I want to repair the drive to use it for low-priority external storage.
Too far gone at this point. It sucks, but every drive has an end of life. After that point, no matter how many hoops you jump through, they’re read-only until they quit entirely. I think that’s where you are now, so even if something in this thread gets it writable again, you should treat whatever you put on there as read-only, and maybe even write-once.
That’s not how failing hardware works. Recycle it and use another piece of non-failing hardware.
You can’t my dude
You should be able to use smartctl on a USB drive. I’ve never had an issue anyway. You may need to specify the transport type, though. I had a drive that it couldn’t figure out on its own, but since it was a SATA drive in an external enclosure, ata was the transport type to use:
sudo smartctl -a -d ata /dev/<devid>
Using the same switch you can run a long test. It’s sort of a pain, as the test aborts when it finds a bad sector, but you can take that sector number and plug it into hdparm to rewrite the sector, hoping the drive will remap it. You won’t be able to recover the data in a bad sector, but there are spare sectors on the drive that the firmware can replace the bad one with, and it does this on a forced write. Something along the lines of
hdparm --repair-sector <sector number from smartctl> --yes-i-know-what-i-am-doing /dev/<driveid>
Again, you have data loss; you can’t go back to no loss. All you can do is rescue anything important. You may (probably will) need to run a long smartctl test again and fix another sector. I have saved data off of drives with 100+ bad sectors this way… It’s tedious, and eventually I scripted it, but it does work.
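As a rough sketch of one round of that loop (device and sector number are placeholders, and --repair-sector overwrites the sector with zeros, so only do it once you’ve given up on the data in that sector; some USB bridges also refuse to pass hdparm commands through):

sudo smartctl -t long /dev/sdX                  # extended self-test; it aborts at the first unreadable sector
sudo smartctl -l selftest /dev/sdX              # the log lists the LBA_of_first_error for the failed test
sudo hdparm --read-sector 123456789 /dev/sdX    # optionally confirm that this LBA really is unreadable
sudo hdparm --repair-sector 123456789 --yes-i-know-what-i-am-doing /dev/sdX

Then start another long test and repeat until one finally runs to completion.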
Older/shittier enclosures don’t support SMART pass-through, but this is an issue that has largely been fixed for 10+ years at this point, so I’m surprised OP is hitting it.
The funny thing is that the cheaper throwaway enclosure I bought supports SMART pass-through, but the newer SATA 3 capable connector does not. I am as surprised as you that this turned out to be the issue.
Thanks for the help, but I am unfortunately getting a
Read Device Identity failed: scsi error unsupported field in scsi command
error for that particular drive. A different external drive, also in an enclosure, returns the appropriate information. I used the smartctl --scan command to find out the device types in both cases (both are sat).
Can you plug the drive in directly and test it? You might also just have a dead drive. Either way, if you were planning on using it as a backup medium, I would tell you it’s probably not a good idea. If you are trying to recover data from it, good luck. Is it making any sound? You could try buying the same old but good hard drive and swapping the control board on it. You may also have to swap the NVRAM chip to make sure you have the same sector mappings. Either way there is a lot of stuff you can try, but hopefully this is an educational experience for you (as in learning how to recover a dead drive, not as in learning about the need for proper backup methods) as opposed to a desperate attempt to recover data that is most likely unrecoverable.
If you can’t check SMART data over USB, plug it into something internal.
Use the command badblocks -o sus_blocks.txt /dev/your_drive to make a file of your bad blocks. Be 100% sure you’re running badblocks on the correct drive. Then partition with fdisk or whatever and use mkfs.ext4 -l sus_blocks.txt /dev/your_device to make a filesystem on there that knows about the bad blocks you found.
Make 100% sure you’re doing those operations on the target drive.
I checked that this still works using a drive with bad blocks last night. I did not check if mkfs.exfat supports that list though.
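One caveat with that approach: the block numbers badblocks reports depend on the block size it was run with, and they have to match the filesystem’s block size, so the badblocks man page suggests letting mkfs run the scan itself. A sketch, with the partition name as a placeholder:

sudo mkfs.ext4 -c /dev/sdX1    # read-only bad block scan while creating the filesystem
sudo mkfs.ext4 -cc /dev/sdX1   # or a slower, destructive read-write scan that catches more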
You can’t fix a bad drive. Well you can, but it requires a few million dollars
I would get a new drive. I’m sorry but it is a lost cause. I would recommend taking it apart before you throw it away as it looks pretty cool inside.
If you need the storage badly, I remember Hiren’s BootCD used to come with a tool to scan for and quarantine bad sectors. However, this is just a band-aid on top of an infected wound.
The wound will keep spreading, eating up precious backup files. I’ve only ever used quarantining once on my mother in law’s laptop because she had to wait weeks to get a new drive, due to the Philippines flooding back then.
Also, this was an old copy of BootCD that ran through a terminal prompt, not a built-in Windows PE, and I believe the tool I used has been removed. However, it seems to have been replaced with a few alternatives.
What error do you get in the system log when the transfer fails?
How do I search for the relevant log output?
If you’re on systemd, the command journalctl will barf up everything. journalctl | grep "your_device" | more will look in that mess for any line mentioning your device and send it to the more pager so you can scroll around and see stuff. Press q to get out.
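A slightly more targeted version, assuming the external drive shows up as sdb (check with lsblk first; kernel messages usually name the device without the /dev/ prefix):

lsblk                                              # find which sdX the external drive is
sudo journalctl -k -b | grep -iE 'sdb|i/o error'   # kernel messages from this boot mentioning the drive or I/O errors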
You ought to be able to use those tools on an external drive.
You can’t “fix” bad sectors. A long time ago you could run badblocks on the drive, pipe the output to a file and feed that file to your mkfs to map around those blocks. Idk if that still works. If you do it on a drive with data, it’ll destroy the data, I think.
You can look at your logs to see what’s failing at the ~290GB mark.
Tell the drive to do a secure erase. If there are still bad blocks after that, it is absolutely garbage
Frankly you should never see bad blocks, but sometimes minor bad things happen and the drive has to tell you that this data is gone forever. If you write over those bad blocks at some point, the drive is supposed to remap them to spare blocks and carry on as if everything is okay. If it has run out of spare blocks, then the bad blocks stay forever. A secure erase might give the drive more wiggle room to re-allocate around a larger bad spot, IDK.
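For reference, a rough sketch of an ATA secure erase with hdparm, assuming the drive is attached directly over SATA (many USB bridges block the security commands) and shows up as /dev/sdX; the password is a throwaway and is cleared when the erase completes:

sudo hdparm -I /dev/sdX                                     # confirm security is supported and the drive is "not frozen"
sudo hdparm --user-master u --security-set-pass p /dev/sdX  # set a temporary security password
sudo hdparm --user-master u --security-erase p /dev/sdX     # issue the secure erase; this can take hours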
Bad blocks are not fixable. However, you can create a bad block map to make your filesystem skip them. If you currently have data on the disk, I would suggest using ddrescue to dump an image of the disk and recovering files from that.