So I finally got most of the data off the failing drive, and I backed it up to a second new drive. But when I attempt to fix the filesystem I get this:
[root@localhost-live home]# e2fsck /dev/sdd
e2fsck 1.45.6 (20-Mar-2020)
ext2fs_open2: Bad magic number in super-block
e2fsck: Superblock invalid, trying backup blocks…
e2fsck: Bad magic number in super-block while trying to open /dev/sdd
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193
or
e2fsck -b 32768
Found a dos partition table in /dev/sdd
**********************************************
Are those the actual superblocks it wants me to try, or just potential examples?
[Update a while later]
OK, when I run it on the partition, instead of the drive, it says the file system is clean.
New problem: When I mount it to /home (where it normally lives on my system), I see nothing in it except lost+found. That’s not encouraging.
[Tuesday-morning update]
For anyone curious, who wants to go through the entrails, I’ve posted the entire session in comments. It remains a mystery to me why none of the drives seem to have data, or how I could have done anything to my source drive that I was trying to rescue.
[Update a few minutes later]
Yes, I clearly screwed the pooch. I accidentally formatted the drive I was trying to rescue.
Those are potential examples, Rand.
Well, neither of them worked. How would I know what to actually try?
Found a dos partition table? How many partitions are on this disk? If there is only one, go back into fdisk and change the partition type to 83 (Linux), be sure to write the result, and then try e2fsck again, this time including the partition number, i.e. /dev/sdd1 if that is what you have.
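Roughly this, assuming there really is just the one partition on /dev/sdd (a sketch only; adjust the device name to whatever lsblk shows):
fdisk /dev/sdd      # then, at the fdisk prompt:
                    #   t   change the partition type
                    #   83  Linux
                    #   w   write the table and exit
e2fsck /dev/sdd1    # check the partition, not the whole disk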
OK, that was the problem. It says it’s clean when I run it on sdd1.
Congrats!
The problem is, when I mount it to /home, ls shows nothing except lost+found…
Here is the output from e2fsck:
[root@localhost-live home]# e2fsck /dev/sdd1
e2fsck 1.45.6 (20-Mar-2020)
/dev/sdd1: recovering journal
/dev/sdd1: clean, 11/122101760 files, 7947223/488378390 blocks
So why does ls show nothing when I mount it?
That is strange. What does $ df /dev/sdd1 say? Does it roughly match what a df of the broken drive says?
[root@localhost-live ~]# df /dev/sdb1
Filesystem 1K-blocks Used Available Use% Mounted on
devtmpfs 16380800 0 16380800 0% /dev
[root@localhost-live ~]# df /dev/sdd1
Filesystem 1K-blocks Used Available Use% Mounted on
devtmpfs 16380800 0 16380800 0% /dev
This may sound stupid, but are you sure you are using the correct disk, now that you have TWO new ones? Sounds like an empty disk. df should tell what’s up.
The original drive is sdb1. The first replacement is sdd1. The second is sde1.
df output is identical for all three:
[root@localhost-live ~]# df /dev/sde1
Filesystem 1K-blocks Used Available Use% Mounted on
devtmpfs 16380800 0 16380800 0% /dev
Egads, that’s an empty disk!! Worse, that output makes no sense. It should say /dev/sdd1 in the Filesystem column, and the Mounted On column should be /home. What you pasted looks like an empty disk. ???
This is getting complicated. Wish I could see your screen. Try unmounting everything, mount only the bad drive, make sure it is mounted on /home, and try the df again. It had better not show up at 0% Used, or it looks like it got reformatted. Did you bungle a mke2fs command? When you poke around on the bad drive with ls, what do you see?
I see nothing except lost+found. I didn’t do anything with it other than use it as the source drive for ddrescue. It’s kind of terrifying.
That is really bad news. It looks like your source drive got reformatted. Cripes, I thought about recommending you pull the write-enable jumper on the drive; now I wish I had. Of course that varies by vendor: some may not even offer it as an option, and you’d need the tech manual for the drive to find it. Not sure I can help you past the fact that you now have three clean drives. I suspect you may have accidentally swapped the drive order if you issued multiple ddrescue commands. Check your command-line history for the ddrescue commands you issued.
Here is the command history:
[root@localhost-live home]# ddrescue -f -n /dev/sdb /dev/sdd /root/recovery.log
GNU ddrescue 1.25
Press Ctrl-C to interrupt
ipos: 1772 GB, non-trimmed: 57184 kB, current rate: 180 kB/s
ipos: 1986 GB, non-trimmed: 0 B, current rate: 40448 B/s
opos: 1986 GB, non-scraped: 16249 kB, average rate: 18850 kB/s
non-tried: 0 B, bad-sector: 1029 kB, error rate: 0 B/s
rescued: 2000 GB, bad areas: 2010, run time: 1d 5h 28m
pct rescued: 99.99%, read errors: 3885, remaining time: 35m
time since last successful read: n/a
Finished
[root@localhost-live home]# ddrescue -f -n /dev/sdd /dev/sde /root/recovery2.log
GNU ddrescue 1.25
Press Ctrl-C to interrupt
ipos: 2000 GB, non-trimmed: 0 B, current rate: 75522 kB/s
opos: 2000 GB, non-scraped: 0 B, average rate: 93302 kB/s
non-tried: 0 B, bad-sector: 0 B, error rate: 0 B/s
rescued: 2000 GB, bad areas: 0, run time: 5h 57m 19s
pct rescued: 100.00%, read errors: 0, remaining time: n/a
time since last successful read: n/a
Finished
I never touched sdb.
No reboots between issuance of ddrescue commands?
No.
If e2fsck still fails: assuming you went with default parameters when you did mke2fs -t ext4 (you made an ext4 fs on the bad disk, correct? If not, use whichever fs you created on the original bad disk. But if not ext4, why not? You should be using ext4!), you can find the alternate superblocks by running this command on your partition: mke2fs -t ext4 -n /dev/sdd1. That is a LOWERCASE n as the switch, and it assumes the fs resides in partition 1. If you didn’t use defaults, you’ll have to remember all the arguments you passed to mke2fs when you built the bad disk’s fs. If it asks whether you want to initialize a fs, SAY NO; it missed the -n switch. You can then use one of the listed alternate superblock numbers with e2fsck.
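In other words, something like this (a sketch, assuming default mke2fs parameters and that the filesystem is on partition 1; the -n flag makes mke2fs a dry run that writes nothing):
mke2fs -t ext4 -n /dev/sdd1    # prints "Superblock backups stored on blocks: 32768, 98304, 163840, ..."
e2fsck -b 32768 /dev/sdd1      # then hand one of those block numbers to e2fsck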
Do you have ‘df’ aliased in your shell?
Try this: /bin/df /dev/sdb1 and /bin/df /dev/sdd1 and let me know what you see….
It is a REALLY BAD idea to alias Linux commands. Use a prefix like mydf or myls when you do that.
Also, you shouldn’t mount two disks on the same mount point. Try something like /home1 and /home2 if you want them mounted simultaneously.
I’ve never mounted more than one at a time.
I’m sure I’ve never aliased a command, and I can’t imagine why it would be aliased on this Fedora live stick.
But this does not look good:
[root@localhost-live ~]# /bin/df /dev/sdb1
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdb1 1921802520 77852 1824032608 1% /mnt/home
Nope, it sure doesn’t. And that looks like the correct output from df. Yes, your ‘df’ is aliased to something weird on the stick: do an # alias df to see if it says anything. The /bin/df /dev/sdb1 is giving you the correct info. Sorry. But my heart sank when you told me ‘ls’ gave you only lost+found. Sorry I didn’t recommend the write-enable jumper removal right up front.
[root@localhost-live ~]# alias df
-bash: alias: df: not found
Hm. Maybe it’s a path thing. I use Ubuntu, not Fedora, and df works as I expect there. Your /bin/df is doing the expected thing.
Fortunately, it’s not the end of the world. I did a backup of my documents to my notebook in October before a trip. But I’m afraid I’ve lost my mail.
Good, copy to a USB stick and then copy back on this system. Better than nothing. If you kept copies of your email on an IMAP server they might still be there. Good luck.
I’ll just copy over the network. My recent email (as in the past couple of years) is on the server, but I’d archived the older stuff locally. So unless I can somehow recover it, it’s like having amnesia.
If you are using bash as a shell on the stick:
$ echo $SHELL
/bin/bash
You can grep through your stick’s ~/.bash_history file for all occurrences of /dev/sdb1 to see if you can find an offending command. Otherwise just say $ history > history.file and grep that result. Forensics after the death of a drive. You may or may not find the offender.
I’ll look, but it’s such a mystery. I was hypercautious to not touch that drive with anything that would write to it.
Or better yet, grep for /dev/sdb and leave out the 1; that substring will pick up the partition commands anyway, along with any command that operated directly on the device, outside the file system.
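Something along these lines, assuming bash and the default history location on the live stick:
echo $SHELL                           # confirm it's /bin/bash
grep -n '/dev/sdb' ~/.bash_history    # every saved command that touched the source drive or its partitions
history > /root/history.file          # the current session isn't written to .bash_history until exit, so dump it too
grep -n '/dev/sdb' /root/history.file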
You might want to look at dfsee (dfsee.com). It saved my butt one time when I had mangled a disk with OS/2 partitions (hpfs and jfs) and recovered all of the data. It is not for the faint of heart, but it works on all operating systems.
On second thought, ddrescue or safecopy might be easier to use. Dfsee is powerful but it’s not free and it’s scary to use. I used it because it was one of the few programs that supported hpfs and jfs.
When you were initializing the newly-purchased empty drives with new filesystems, did mke2fs ever warn that you were about to overwrite an existing ext4 filesystem (as mentioned by David Spain above in a different context)? If not, then it’s possible you specified the failing disk as the raw-device destination for one of the ddrescue runs. It’s also possible there’s still some naming uncertainty among the various drives.
The /dev/sd{a,b,c} device nodes can get quite confusing, especially after hot-swapping the drives (not sure if you did any hot swaps). It can be helpful to use the /dev/disk/by-id/ata-* symbolic links to make absolutely certain you’re referring to the intended disk. Might want to go through all three disks, mounting by the /dev/disk/by-id/ partitions, and see if there’s any nonempty filesystem.
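For example (the model/serial strings below are made up for illustration; yours will match the models in your lsblk output):
ls -l /dev/disk/by-id/ata-*
# lrwxrwxrwx ... ata-WDC_WD20EARX-00PASB0_WD-XXXXXXXX       -> ../../sdb    (illustrative)
# lrwxrwxrwx ... ata-WDC_WD20EARX-00PASB0_WD-XXXXXXXX-part1 -> ../../sdb1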
Something is strange here. You had one ddrescue run that was clearly reading from the failing disk, given the 1029 kB of bad sectors reported. So now there are two copies of the data. Even if you nuked one of the copies later with an errant ddrescue, you should still have one copy left.
This assumes the failing disk wasn’t overwritten by a dd command or a mke2fs command prior to the ddrescue runs.
Totally agree. There should be at least *two* disks with the data on them, given the successful ddrescue run. Unless somehow the source disk, /dev/sdb1 in this case, got accidentally reformatted before the first ddrescue run. That seems unlikely, but it’s why I asked Rand to check his history, just to double-check. Otherwise I don’t have an explanation for why /dev/sdb1 appears empty. There is a remote possibility that the drives reconfigured to different /dev/sdX points, but only if there were reboots between the time the source drive was identified and a subsequent disk was installed. Rand said there were no reboots, so it’s unlikely the configuration changed.
I think some of your df commands were returning nonsense results.
If “df /dev/sdb1” gives this result:
devtmpfs 16380800 0 16380800 0% /dev
then that does not say anything about /dev/sdb1, which is probably not mounted at the moment. Note the first field, “devtmpfs”: that’s the filesystem /dev itself lives on, so df is just reporting on the directory containing the device node. With an actual mounted filesystem, the first field would be “/dev/sdb1”, and the last field would be “/mnt” or wherever it’s mounted.
Try “df” with no arguments. That should list every mounted filesystem. If the various 1.8 TB filesystems aren’t listed, then “mount -o ro /dev/disk/by-id/ata-xyz /mnt”, then “df” again.
I think his Fedora recovery stick is picking something weird for df. Possibly in another path. /bin/df is doing the right thing for him.
Just for clarity, “mount -o ro /dev/disk/by-id/ata-xyz-part1 /mnt”, since you probably want to mount a partition, not the entire drive. Also, of course, nothing should be mounted on /mnt before issuing these trial mounts.
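Concretely, something along these lines should be safe, since everything stays read-only (ata-xyz is still a placeholder for whatever /dev/disk/by-id/ shows):
df                                              # no arguments: lists every mounted filesystem
umount /mnt                                     # free the trial mount point first (ignore "not mounted")
mount -o ro /dev/disk/by-id/ata-xyz-part1 /mnt  # read-only trial mount of one partition
df /mnt && ls /mnt                              # a real 1.8T filesystem should show up here
umount /mnt                                     # then repeat for the next drive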
One more idiotic thought. Were any of these drives set up with lvm? I’ve had a horrid time cloning lvm drives because the lv manager gets in the way. If not using lvm, then please ignore.
The drive I was trying to rescue was lvm.
Yikes. That may be a problem here. The original lsblk that Rand had in the other post made me think lvm wasn’t an issue, but it looks like I may have been wrong. I’m not a user of Fedora lvm, sorry.
As a quick sanity check, can you remove the two new drives, reboot, and see if your old disk snaps back to its old self?
Thanks for all the advice, and I’ll try all these ideas in the morning…
Here is the history:
1 parted /dev/sde -a opt mkpart primary 2048s 2G
2 lsblk
3 mount /dev/sde1 /mnt
4 mkfs.ext4 -L BackupDrive /dev/sde1
5 mount /dev/sde1 /mnt
6 cd /mnt
7 ls
8 mkdir image
9 ls
10 man ddrescue
11 dnf install ddrescue
12 ddrescue -d /dev/sdb1 /mnt/image/test.img /mnt/image/test.logfile
13 parted /dev/sde -a opt mkpart primary 2048s 1.9T
14 parted
15 umount /mnt/image
16 ls /mnt
17 umount /dev/sde1
18 parted
19 lsblk
20 umount /mnt
21 parted
22 lsblk -f
23 parted “print free”
24 parted
25 umount /dev/sdb1
26 parted
27 mount /dev/sde1 /mnt
28 umount /dev/sde1
29 cd ..
30 umount /dev/sde1
31 parted
32 mount /dev/mnt
33 mount /dev/sde1
34 mount /dev/sde1 /mnt
35 ddrescue -d /dev/sdb1 /mnt/image/test.img /mnt/image/test.logfile
36 ddrescue -d /dev/sdb1 /mnt/test.img /mnt/test.logfile
37 lsblk
38 umount /mnt
39 lsblk
40 mount /dev/sde1 /mnt
41 ddrescue -d /dev/sdb1 /mnt/test.img /mnt/test.logfile
42 umount /mnt
43 mkfs.ext4 /dev/sdb1
44 mkfs.ext4 /dev/sde1
45 mount /dev/sde1 /mnt
46 ddrescue -d /dev/sdb1 /mnt/test.img /mnt/test.logfile
47 lsblk
48 ls /mnt
49 dd if=/mnt/test.img of=/dev/sdd1
50 mkdir /mnt/home
51 umount /mnt
52 mkdir /mnt/home
53 cd /mnt/home
54 ls
55 mount /dev/sdd1 /mnt/home
56 ls /mnt/home
57 ls /mnt/home/lost+found/
58 umount /mnt/home
59 mount /dev/sdb1 /mnt/home
60 ls /mnt/home
61 ddrescue -f -n /dev/sdb /dev/sdd /root/recovery.log
62 adastA
63 ddrescue -f -n /dev/sdd /dev/sde /root/recovery2.log
64 e2fsck /dev/sdd
65 e2fsck -b 8193 /dev/sdd
66 e2fsck -b 32768 /dev/sdd
67 fsck /dev/sdd
68 umount /dev/sdb1
69 mount /dev/sdb1 /home
70 ls /home
71 ls /home/lost+found/
72 lsblk
73 fdisk /dev/sdd
74 e2fsck /dev/sdd1
75 umount /dev/sdb1
76 mount /dev/sdd1
77 mount /dev/sdd1 /home
78 ls /home
79 cd
80 umount /dev/sdd1
81 mount /dev/sdd1 /home
82 ls /home
83 umount /dev/sdd1
84 ls /home
85 df /dev/sdb1
86 df /dev/sdd1
87 df /dev/sde1
88 mkdir /mnt/home
89 mount /dev/sdb1 /mnt/home
90 ls /mnt/home
91 /bin/df /dev/sdb1
92 alias df
93 df
94 history > /root/history.file
It doesn’t show what I did with parted, but I’m sure I wouldn’t have selected sdb.
What I found was that if two disks had the same LVM ID, they could not coexist on the system. So I had to remove one, boot to a live USB, copy to a different disk, swap disks, and then do the final copying. Supposedly there are LVM utilities that allow one to do magical things, but I could never get them to do anything useful, so I’ve moved away from LVM and life is a bit easier. The funny thing is, with OS/2 and ECS, LVM is great. In Linux, my experience is “not so much”. But I’m no expert.
Here is the entire session, including what I did with parted:
[liveuser@localhost-live ~]$ ls /dev/sd*
/dev/sda /dev/sda2 /dev/sdb /dev/sdc /dev/sdd1 /dev/sdf /dev/sdf2
/dev/sda1 /dev/sda3 /dev/sdb1 /dev/sdd /dev/sde /dev/sdf1 /dev/sdf3
[liveuser@localhost-live ~]$ lsblk -o name,label,size,fstype,model
NAME LABEL SIZE FSTYPE MODEL
loop0 1.8G squashfs
loop1 Anaconda 7.5G ext4
├─live-rw Anaconda 7.5G ext4
└─live-base Anaconda 7.5G ext4
loop2 32G
└─live-rw Anaconda 7.5G ext4
sda 232.9G Samsung_SSD_850_EVO_250GB
├─sda1 600M vfat
├─sda2 1G ext4
└─sda3 230G LVM2_member
├─fedora_localhost–live-home00
│ 10G ext4
└─fedora_localhost–live-root00
220G ext4
sdb 1.8T WDC_WD20EARX-00PASB0
└─sdb1 1.8T ext4
sdc CCCOMA_X64FRE_EN-US_DV9 55.9G udf Patriot_Blaze
sdd 1.8T WDC_WD20EZAZ-00L9GB0
└─sdd1 1.8T ext4
sde 1.8T WDC_WD20EZAZ-00L9GB0
sdf Fedora-WS-Live-33-1-2 14.9G iso9660 USB_Flash_Drive
├─sdf1 Fedora-WS-Live-33-1-2 1.9G iso9660
├─sdf2 ANACONDA 10.9M vfat
└─sdf3 ANACONDA 22.9M hfsplus
zram0 4G
[liveuser@localhost-live ~]$ parted /dev/sde --align opt mklabel gpt 0 4G
WARNING: You are not superuser. Watch out for permissions.
Error: Error opening /dev/sde: Permission denied
Retry/Cancel? C
[liveuser@localhost-live ~]$ sudo parted /dev/sde --align opt mklabel gpt 0 4G
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.
Usage: parted [OPTION]… [DEVICE [COMMAND [PARAMETERS]…]…]
Apply COMMANDs with PARAMETERS to DEVICE. If no COMMAND(s) are given, run in
interactive mode.
OPTIONs:
-h, --help displays this help message
-l, --list lists partition layout on all block devices
-m, --machine displays machine parseable output
-s, --script never prompts for user intervention
-v, --version displays the version
-a, --align=[none|cyl|min|opt] alignment for new partitions
COMMANDs:
align-check TYPE N check partition N for TYPE(min|opt)
alignment
help [COMMAND] print general help, or help on
COMMAND
mklabel,mktable LABEL-TYPE create a new disklabel (partition
table)
mkpart PART-TYPE [FS-TYPE] START END make a partition
name NUMBER NAME name partition NUMBER as NAME
print [devices|free|list,all|NUMBER] display the partition table,
available devices, free space, all found partitions, or a particular
partition
quit exit program
rescue START END rescue a lost partition near START
and END
resizepart NUMBER END resize partition NUMBER
rm NUMBER delete partition NUMBER
select DEVICE choose the device to edit
disk_set FLAG STATE change the FLAG on selected device
disk_toggle [FLAG] toggle the state of FLAG on selected
device
set NUMBER FLAG STATE change the FLAG on partition NUMBER
toggle [NUMBER [FLAG]] toggle the state of FLAG on partition
NUMBER
unit UNIT set the default unit to UNIT
version display the version number and
copyright information of GNU Parted
Report bugs to bug-parted@gnu.org
[liveuser@localhost-live ~]$ lsblk -o name,label,size,fstype,model
NAME LABEL SIZE FSTYPE MODEL
loop0 1.8G squashfs
loop1 Anaconda 7.5G ext4
├─live-rw Anaconda 7.5G ext4
└─live-base Anaconda 7.5G ext4
loop2 32G
└─live-rw Anaconda 7.5G ext4
sda 232.9G Samsung_SSD_850_EVO_250GB
├─sda1 600M vfat
├─sda2 1G ext4
└─sda3 230G LVM2_member
├─fedora_localhost–live-home00
│ 10G ext4
└─fedora_localhost–live-root00
220G ext4
sdb 1.8T WDC_WD20EARX-00PASB0
└─sdb1 1.8T ext4
sdc CCCOMA_X64FRE_EN-US_DV9 55.9G udf Patriot_Blaze
sdd 1.8T WDC_WD20EZAZ-00L9GB0
└─sdd1 1.8T ext4
sde 1.8T WDC_WD20EZAZ-00L9GB0
sdf Fedora-WS-Live-33-1-2 14.9G iso9660 USB_Flash_Drive
├─sdf1 Fedora-WS-Live-33-1-2 1.9G iso9660
├─sdf2 ANACONDA 10.9M vfat
└─sdf3 ANACONDA 22.9M hfsplus
zram0 4G
[liveuser@localhost-live ~]$ su –
[root@localhost-live ~]# parted /dev/sde -a opt mkpart primary 2048s 2G
Information: You may need to update /etc/fstab.
[root@localhost-live ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 1.8G 1 loop /run/media/liveuser/disk
loop1 7:1 0 7.5G 1 loop
├─live-rw 253:0 0 7.5G 0 dm /
└─live-base 253:1 0 7.5G 1 dm
loop2 7:2 0 32G 0 loop
└─live-rw 253:0 0 7.5G 0 dm /
sda 8:0 0 232.9G 0 disk
├─sda1 8:1 0 600M 0 part
├─sda2 8:2 0 1G 0 part
└─sda3 8:3 0 230G 0 part
├─fedora_localhost–live-home00
│ 253:2 0 10G 0 lvm
└─fedora_localhost–live-root00
253:3 0 220G 0 lvm
sdb 8:16 0 1.8T 0 disk
└─sdb1 8:17 0 1.8T 0 part
sdc 8:32 0 55.9G 0 disk
sdd 8:48 0 1.8T 0 disk
└─sdd1 8:49 0 1.8T 0 part
sde 8:64 0 1.8T 0 disk
└─sde1 8:65 0 1.9G 0 part
sdf 8:80 1 14.9G 0 disk
├─sdf1 8:81 1 1.9G 0 part /run/initramfs/live
├─sdf2 8:82 1 10.9M 0 part
└─sdf3 8:83 1 22.9M 0 part
zram0 252:0 0 4G 0 disk [SWAP]
[root@localhost-live ~]# mount /dev/sde1 /mnt
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/sde1, missing codepage or helper program, or other error.
[root@localhost-live ~]# mkfs.ext4 -L BackupDrive /dev/sde1
mke2fs 1.45.6 (20-Mar-2020)
Discarding device blocks: done
Creating filesystem with 487936 4k blocks and 122160 inodes
Filesystem UUID: 36b9211f-3e28-4528-ad90-c2e6523bc5e7
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
[root@localhost-live ~]# mount /dev/sde1 /mnt
[root@localhost-live ~]# cd /mnt
[root@localhost-live mnt]# ls
lost+found
[root@localhost-live mnt]# mkdir image
[root@localhost-live mnt]# ls
image lost+found
[root@localhost-live mnt]# man ddrescue
No manual entry for ddrescue
[root@localhost-live mnt]# dnf install ddrescue
Fedora 33 openh264 (From Cisco) – x86_64 2.0 kB/s | 2.5 kB 00:01
Fedora Modular 33 – x86_64 1.8 MB/s | 3.3 MB 00:01
Fedora Modular 33 – x86_64 – Updates 3.8 MB/s | 3.1 MB 00:00
Fedora 33 – x86_64 – Updates 4.2 MB/s | 24 MB 00:05
Fedora 33 – x86_64 6.2 MB/s | 72 MB 00:11
Dependencies resolved.
================================================================================
Package Architecture Version Repository Size
================================================================================
Installing:
ddrescue x86_64 1.25-2.fc33 fedora 135 k
Transaction Summary
================================================================================
Install 1 Package
Total download size: 135 k
Installed size: 279 k
Is this ok [y/N]: y
Downloading Packages:
ddrescue-1.25-2.fc33.x86_64.rpm 311 kB/s | 135 kB 00:00
——————————————————————————–
Total 154 kB/s | 135 kB 00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Installing : ddrescue-1.25-2.fc33.x86_64 1/1
Running scriptlet: ddrescue-1.25-2.fc33.x86_64 1/1
Verifying : ddrescue-1.25-2.fc33.x86_64 1/1
Installed:
ddrescue-1.25-2.fc33.x86_64
Complete!
[root@localhost-live mnt]# ddrescue -d /dev/sdb1 /mnt/image/test.img /mnt/image/test.logfile
GNU ddrescue 1.25
Press Ctrl-C to interrupt
ipos: 1910 MB, non-trimmed: 0 B, current rate: 6291 kB/s
opos: 1910 MB, non-scraped: 0 B, average rate: 73496 kB/s
non-tried: 1998 GB, bad-sector: 0 B, error rate: 0 B/s
rescued: 1910 MB, bad areas: 0, run time: 25s
pct rescued: 0.09%, read errors: 0, remaining time: 6h 32m
time since last successful read: n/a
Copying non-tried blocks… Pass 1 (forwards)
ddrescue: Write error: No space left on device
[root@localhost-live mnt]# parted /dev/sde -a opt mkpart primary 2048s 1.9T
Warning: You requested a partition from 1049kB to 1900GB (sectors
2048..3710937500).
The closest location we can manage is 1048kB to 1048kB (sectors 2047..2047).
Is this still acceptable to you?
Yes/No? N
[root@localhost-live mnt]# parted
GNU Parted 3.3
Using /dev/sda
Welcome to GNU Parted! Type ‘help’ to view a list of commands.
(parted) select /dev/sde
Using /dev/sde
(parted) print
Model: ATA WDC WD20EZAZ-00L (scsi)
Disk /dev/sde: 2000GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 2000MB 1999MB ext4 primary
(parted) resizepart
Partition number? 1
Warning: Partition /dev/sde1 is being used. Are you sure you want to continue?
Yes/No? N
(parted) q
[root@localhost-live mnt]# umount /mnt/image
umount: /mnt/image: not mounted.
[root@localhost-live mnt]# ls /mnt
image lost+found
[root@localhost-live mnt]# umount /dev/sde1
umount: /mnt: target is busy.
[root@localhost-live mnt]# parted
GNU Parted 3.3
Using /dev/sda
Welcome to GNU Parted! Type ‘help’ to view a list of commands.
(parted) select /dev/sde1
Using /dev/sde1
(parted) resizepart
Partition number? 1
Warning: Partition /dev/sde1 is being used. Are you sure you want to continue?
Yes/No? Y
End? [1999MB]?
(parted) resizepart
Partition number? 1
Warning: Partition /dev/sde1 is being used. Are you sure you want to continue?
Yes/No? Y
End? [1999MB]? 1.9T
Error: The location 1.9T is outside of the device /dev/sde1.
(parted) resizepart
Partition number? 1
Warning: Partition /dev/sde1 is being used. Are you sure you want to continue?
Yes/No? y
End? [1999MB]? 1.8T
Error: The location 1.8T is outside of the device /dev/sde1.
(parted) rm 1
Warning: Partition /dev/sde1 is being used. Are you sure you want to continue?
Yes/No? y
(parted) print
Model: Unknown (unknown)
Disk /dev/sde1: 1999MB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Disk Flags:
Number Start End Size File system Flags
(parted) mkpart
File system type? [ext2]? ext4
Start? 1
End? 1800000
Error: The location 1800000 is outside of the device /dev/sde1.
(parted) q
Information: You may need to update /etc/fstab.
[root@localhost-live mnt]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 1.8G 1 loop /run/media/liveuser/disk
loop1 7:1 0 7.5G 1 loop
├─live-rw 253:0 0 7.5G 0 dm /
└─live-base 253:1 0 7.5G 1 dm
loop2 7:2 0 32G 0 loop
└─live-rw 253:0 0 7.5G 0 dm /
sda 8:0 0 232.9G 0 disk
├─sda1 8:1 0 600M 0 part
├─sda2 8:2 0 1G 0 part
└─sda3 8:3 0 230G 0 part
├─fedora_localhost–live-home00
│ 253:2 0 10G 0 lvm
└─fedora_localhost–live-root00
253:3 0 220G 0 lvm
sdb 8:16 0 1.8T 0 disk
└─sdb1 8:17 0 1.8T 0 part
sdc 8:32 0 55.9G 0 disk
sdd 8:48 0 1.8T 0 disk
└─sdd1 8:49 0 1.8T 0 part
sde 8:64 0 1.8T 0 disk
└─sde1 8:65 0 1.9G 0 part /mnt
sdf 8:80 1 14.9G 0 disk
├─sdf1 8:81 1 1.9G 0 part /run/initramfs/live
├─sdf2 8:82 1 10.9M 0 part
└─sdf3 8:83 1 22.9M 0 part
zram0 252:0 0 4G 0 disk [SWAP]
[root@localhost-live mnt]# umount /mnt
umount: /mnt: target is busy.
[root@localhost-live mnt]# parted
GNU Parted 3.3
Using /dev/sda
Welcome to GNU Parted! Type ‘help’ to view a list of commands.
(parted) mkpart
Partition name? []? gpt
File system type? [ext2]? ext4
Start? 1
End? 1800000
Error: The location 1800000 is outside of the device /dev/sda.
(parted) mkpart
Partition name? []? gpt
File system type? [ext2]? ext4
Start? 1
End? 1700000
Error: The location 1700000 is outside of the device /dev/sda.
(parted) print
Model: ATA Samsung SSD 850 (scsi)
Disk /dev/sda: 250GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 630MB 629MB fat32 EFI System Partition boot, esp
2 630MB 1704MB 1074MB ext4
3 1704MB 249GB 247GB lvm
(parted) select /dev/sde
Using /dev/sde
(parted) mkpart
Partition name? []? gpt
File system type? [ext2]? ext4
Start? 1
End? 1800000
Warning: You requested a partition from 1000kB to 1800GB (sectors
1953..3515625000).
The closest location we can manage is 1048kB to 1048kB (sectors 2047..2047).
Is this still acceptable to you?
Yes/No? n
(parted) mkpart
Partition name? []? gpt
File system type? [ext2]? ext4
Start? 1.048
End? 1800000
Warning: You requested a partition from 1048kB to 1800GB (sectors
2046..3515625000).
The closest location we can manage is 1048kB to 1048kB (sectors 2047..2047).
Is this still acceptable to you?
Yes/No? ^C
(parted) q
[root@localhost-live mnt]# lsblk -f
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINT
loop0
squash 4.0 0 100% /run/media
loop1
│ ext4 1.0 Anaconda
│ 2b82edc2-4eb2-44a0-8b5b-c71da0de9b3a
├─live-rw
│ ext4 1.0 Anaconda
│ 2b82edc2-4eb2-44a0-8b5b-c71da0de9b3a 1.4G 81% /
└─live-base
ext4 1.0 Anaconda
2b82edc2-4eb2-44a0-8b5b-c71da0de9b3a
loop2
│
└─live-rw
ext4 1.0 Anaconda
2b82edc2-4eb2-44a0-8b5b-c71da0de9b3a 1.4G 81% /
sda
├─sda1
│ vfat FAT32 9339-3D76
├─sda2
│ ext4 1.0 28dda066-3b33-4f6f-8107-70a2d5265330
└─sda3
LVM2_m LVM2 GfItbc-3als-MxdU-qN30-PsZO-3MzH-XomnrT
├─fedora_localhost–live-home00
│ ext4 1.0 5bc7006c-d75f-470e-8ba6-8fc83b1b7db2
└─fedora_localhost–live-root00
ext4 1.0 0a2f6aeb-0d56-4545-acd9-252c18c548e7
sdb
└─sdb1
ext4 1.0 fe109175-a0ef-4375-8c82-81570e7fb880
sdc udf 1.02 CCCOMA_X64FRE_EN-US_DV9
478c00004d532055
sdd
└─sdd1
ext4 1.0 fe109175-a0ef-4375-8c82-81570e7fb880
sde
└─sde1
ext4 1.0 BackupDrive
36b9211f-3e28-4528-ad90-c2e6523bc5e7 0 99% /mnt
sdf iso966 Jolie Fedora-WS-Live-33-1-2
│ 2020-10-20-00-01-33-00
├─sdf1
│ iso966 Jolie Fedora-WS-Live-33-1-2
│ 2020-10-20-00-01-33-00 0 100% /run/initr
├─sdf2
│ vfat FAT16 ANACONDA
│ 8DBC-AAB9
└─sdf3
hfsplu ANACONDA
89a14935-39dd-3c35-bed6-eae9a4d00ffc
zram0
[SWAP]
[root@localhost-live mnt]# parted “print free”
Error: Could not stat device print free – No such file or directory.
Retry/Cancel? parted
parted: invalid token: parted
Retry/Cancel? C
[root@localhost-live mnt]# parted
GNU Parted 3.3
Using /dev/sda
Welcome to GNU Parted! Type ‘help’ to view a list of commands.
(parted) select /dev/sde
Using /dev/sde
(parted) print free
Model: ATA WDC WD20EZAZ-00L (scsi)
Disk /dev/sde: 2000GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
17.4kB 1049kB 1031kB Free Space
1 1049kB 2000MB 1999MB ext4 primary
2000MB 2000GB 1998GB Free Space
(parted) rm
Partition number? 1
Warning: Partition /dev/sde1 is being used. Are you sure you want to continue?
Yes/No? Y
Error: Partition(s) 1 on /dev/sde have been written, but we have been unable to
inform the kernel of the change, probably because it/they are in use. As a
result, the old partition(s) will remain in use. You should reboot now before
making further changes.
Ignore/Cancel? C
(parted) q
[root@localhost-live mnt]# umount /dev/sdb1
umount: /dev/sdb1: not mounted.
[root@localhost-live mnt]# parted
GNU Parted 3.3
Using /dev/sda
Welcome to GNU Parted! Type ‘help’ to view a list of commands.
(parted) select /dev/sde
Using /dev/sde
(parted) pring
align-check TYPE N check partition N for TYPE(min|opt)
alignment
help [COMMAND] print general help, or help on
COMMAND
mklabel,mktable LABEL-TYPE create a new disklabel (partition
table)
mkpart PART-TYPE [FS-TYPE] START END make a partition
name NUMBER NAME name partition NUMBER as NAME
print [devices|free|list,all|NUMBER] display the partition table,
available devices, free space, all found partitions, or a particular
partition
quit exit program
rescue START END rescue a lost partition near START
and END
resizepart NUMBER END resize partition NUMBER
rm NUMBER delete partition NUMBER
select DEVICE choose the device to edit
disk_set FLAG STATE change the FLAG on selected device
disk_toggle [FLAG] toggle the state of FLAG on selected
device
set NUMBER FLAG STATE change the FLAG on partition NUMBER
toggle [NUMBER [FLAG]] toggle the state of FLAG on partition
NUMBER
unit UNIT set the default unit to UNIT
version display the version number and
copyright information of GNU Parted
(parted) print
Model: ATA WDC WD20EZAZ-00L (scsi)
Disk /dev/sde: 2000GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
(parted) mkpart
Partition name? []? gpt
File system type? [ext2]? ext4
Start? 1
End? 1800000
(parted) print
Model: ATA WDC WD20EZAZ-00L (scsi)
Disk /dev/sde: 2000GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 1800GB 1800GB ext4 gpt
(parted) mklabel BackupDrive
parted: invalid token: BackupDrive
New disk label type?
New disk label type?
New disk label type? q
parted: invalid token: q
New disk label type? gpt
Warning: Partition(s) on /dev/sde are being used.
Ignore/Cancel? C
(parted) print
Model: ATA WDC WD20EZAZ-00L (scsi)
Disk /dev/sde: 2000GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 1800GB 1800GB ext4 gpt
(parted) q
Information: You may need to update /etc/fstab.
[root@localhost-live mnt]# mount /dev/sde1 /mnt
mount: /mnt: /dev/sde1 already mounted on /mnt.
[root@localhost-live mnt]# umount /dev/sde1
umount: /mnt: target is busy.
[root@localhost-live mnt]# cd ..
[root@localhost-live /]# umount /dev/sde1
[root@localhost-live /]# parted
GNU Parted 3.3
Using /dev/sda
Welcome to GNU Parted! Type ‘help’ to view a list of commands.
(parted) select /dev/sde1
Using /dev/sde1
(parted) print
Model: Unknown (unknown)
Disk /dev/sde1: 1800GB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Disk Flags:
Number Start End Size File system Flags
1 0.00B 1800GB 1800GB ext4
(parted) q
[root@localhost-live /]# mount /dev/mnt
mount: /dev/mnt: can’t find in /etc/fstab.
[root@localhost-live /]# mount /dev/sde1
mount: /dev/sde1: can’t find in /etc/fstab.
[root@localhost-live /]# mount /dev/sde1 /mnt
[root@localhost-live /]# ddrescue -d /dev/sdb1 /mnt/image/test.img /mnt/image/test.logfile
GNU ddrescue 1.25
Press Ctrl-C to interrupt
Initial status (read from mapfile)
rescued: 1910 MB, tried: 0 B, bad-sector: 0 B, bad areas: 0
Current status
ipos: 1910 MB, non-trimmed: 0 B, current rate: 0 B/s
opos: 1910 MB, non-scraped: 0 B, average rate: 0 B/s
non-tried: 1998 GB, bad-sector: 0 B, error rate: 0 B/s
rescued: 1910 MB, bad areas: 0, run time: 0s
pct rescued: 0.09%, read errors: 0, remaining time: n/a
time since last successful read: n/a
Copying non-tried blocks… Pass 1 (forwards)
ddrescue: Write error: No space left on device
[root@localhost-live /]# ddrescue -d /dev/sdb1 /mnt/test.img /mnt/test.logfile
ddrescue: Can’t create mapfile: No space left on device
[root@localhost-live /]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 1.8G 1 loop /run/media/liveuser/disk
loop1 7:1 0 7.5G 1 loop
├─live-rw 253:0 0 7.5G 0 dm /
└─live-base 253:1 0 7.5G 1 dm
loop2 7:2 0 32G 0 loop
└─live-rw 253:0 0 7.5G 0 dm /
sda 8:0 0 232.9G 0 disk
├─sda1 8:1 0 600M 0 part
├─sda2 8:2 0 1G 0 part
└─sda3 8:3 0 230G 0 part
├─fedora_localhost–live-home00
│ 253:2 0 10G 0 lvm
└─fedora_localhost–live-root00
253:3 0 220G 0 lvm
sdb 8:16 0 1.8T 0 disk
└─sdb1 8:17 0 1.8T 0 part
sdc 8:32 0 55.9G 0 disk
sdd 8:48 0 1.8T 0 disk
└─sdd1 8:49 0 1.8T 0 part
sde 8:64 0 1.8T 0 disk
└─sde1 8:65 0 1.6T 0 part /mnt
sdf 8:80 1 14.9G 0 disk
├─sdf1 8:81 1 1.9G 0 part /run/initramfs/live
├─sdf2 8:82 1 10.9M 0 part
└─sdf3 8:83 1 22.9M 0 part
zram0 252:0 0 4G 0 disk [SWAP]
[root@localhost-live /]# umount /mnt
[root@localhost-live /]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 1.8G 1 loop /run/media/liveuser/disk
loop1 7:1 0 7.5G 1 loop
├─live-rw 253:0 0 7.5G 0 dm /
└─live-base 253:1 0 7.5G 1 dm
loop2 7:2 0 32G 0 loop
└─live-rw 253:0 0 7.5G 0 dm /
sda 8:0 0 232.9G 0 disk
├─sda1 8:1 0 600M 0 part
├─sda2 8:2 0 1G 0 part
└─sda3 8:3 0 230G 0 part
├─fedora_localhost–live-home00
│ 253:2 0 10G 0 lvm
└─fedora_localhost–live-root00
253:3 0 220G 0 lvm
sdb 8:16 0 1.8T 0 disk
└─sdb1 8:17 0 1.8T 0 part
sdc 8:32 0 55.9G 0 disk
sdd 8:48 0 1.8T 0 disk
└─sdd1 8:49 0 1.8T 0 part
sde 8:64 0 1.8T 0 disk
└─sde1 8:65 0 1.6T 0 part
sdf 8:80 1 14.9G 0 disk
├─sdf1 8:81 1 1.9G 0 part /run/initramfs/live
├─sdf2 8:82 1 10.9M 0 part
└─sdf3 8:83 1 22.9M 0 part
zram0 252:0 0 4G 0 disk [SWAP]
[root@localhost-live /]# mount /dev/sde1 /mnt
[root@localhost-live /]# ddrescue -d /dev/sdb1 /mnt/test.img /mnt/test.logfile
GNU ddrescue 1.25
Press Ctrl-C to interrupt
Initial status (read from mapfile)
rescued: 0 B, tried: 0 B, bad-sector: 0 B, bad areas: 0
Current status
ipos: 0 B, non-trimmed: 0 B, current rate: 0 B/s
opos: 0 B, non-scraped: 0 B, average rate: 0 B/s
non-tried: 2000 GB, bad-sector: 0 B, error rate: 0 B/s
rescued: 0 B, bad areas: 0, run time: 0s
pct rescued: 0.00%, read errors: 0, remaining time: n/a
time since last successful read: n/a
Copying non-tried blocks… Pass 1 (forwards)
ddrescue: Error writing mapfile ‘/mnt/test.logfile’: No space left on device
Fix the problem and press ENTER to retry,
or E+ENTER for an emergency save and exit,
or Q+ENTER to abort.
qqqqqqqqq
ddrescue: Write error: No space left on device
[root@localhost-live /]# umount /mnt
[root@localhost-live /]# mkfs.ext4 /dev/sdb1
mke2fs 1.45.6 (20-Mar-2020)
/dev/sdb1 contains a ext4 file system
last mounted on /home on Thu Mar 4 16:55:53 2021
Proceed anyway? (y,N) y
Creating filesystem with 488378390 4k blocks and 122101760 inodes
Filesystem UUID: 3bdfd69d-fb28-4faf-a51a-c8c324dc32a7
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848
Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): mount /dev/sdbdone
Writing superblocks and filesystem accountingdone
[root@localhost-live /]# mkfs.ext4 /dev/sde1
mke2fs 1.45.6 (20-Mar-2020)
/dev/sde1 contains a ext4 file system labelled ‘BackupDrive’
last mounted on /mnt on Fri Mar 5 19:59:21 2021
Proceed anyway? (y,N) y
Discarding device blocks: done
Creating filesystem with 439452928 4k blocks and 109871104 inodes
Filesystem UUID: e71ce7e5-197f-4841-9e20-edc8268bee4a
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848
Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done
[root@localhost-live /]# mount /dev/sde1 /mnt
[root@localhost-live /]# ddrescue -d /dev/sdb1 /mnt/test.img /mnt/test.logfile
GNU ddrescue 1.25
Press Ctrl-C to interrupt
ipos: 1784 GB, non-trimmed: 43778 kB, current rate: 14680 kB/s
opos: 1784 GB, non-scraped: 0 B, average rate: 46776 kB/s
non-tried: 229778 MB, bad-sector: 0 B, error rate: 0 B/s
rescued: 1770 GB, bad areas: 0, run time: 10h 30m 51s
pct rescued: 88.51%, read errors: 668, remaining time: 1h 10m
time since last successful read: n/a
Copying non-tried blocks… Pass 1 (forwards)
ddrescue: Write error: No space left on device
[root@localhost-live /]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 1.8G 1 loop /run/media/liveuser/disk
loop1 7:1 0 7.5G 1 loop
├─live-rw 253:0 0 7.5G 0 dm /
└─live-base 253:1 0 7.5G 1 dm
loop2 7:2 0 32G 0 loop
└─live-rw 253:0 0 7.5G 0 dm /
sda 8:0 0 232.9G 0 disk
├─sda1 8:1 0 600M 0 part
├─sda2 8:2 0 1G 0 part
└─sda3 8:3 0 230G 0 part
├─fedora_localhost–live-home00
│ 253:2 0 10G 0 lvm
└─fedora_localhost–live-root00
253:3 0 220G 0 lvm
sdb 8:16 0 1.8T 0 disk
└─sdb1 8:17 0 1.8T 0 part
sdc 8:32 0 55.9G 0 disk
sdd 8:48 0 1.8T 0 disk
└─sdd1 8:49 0 1.8T 0 part
sde 8:64 0 1.8T 0 disk
└─sde1 8:65 0 1.6T 0 part /mnt
sdf 8:80 1 14.9G 0 disk
├─sdf1 8:81 1 1.9G 0 part /run/initramfs/live
├─sdf2 8:82 1 10.9M 0 part
└─sdf3 8:83 1 22.9M 0 part
zram0 252:0 0 4G 0 disk [SWAP]
[root@localhost-live /]# ls /mnt
lost+found test.img test.logfile test.logfile.bak
[root@localhost-live /]# dd if=/mnt/test.img of=/dev/sdd1
3484522136+0 records in
3484522136+0 records out
1784075333632 bytes (1.8 TB, 1.6 TiB) copied, 40873.7 s, 43.6 MB/s
[root@localhost-live /]# mkdir /mnt/home
mkdir: cannot create directory ‘/mnt/home’: No space left on device
[root@localhost-live /]# umount /mnt
[root@localhost-live /]# mkdir /mnt/home
[root@localhost-live /]# cd /mnt/home
[root@localhost-live home]# ls
[root@localhost-live home]# mount /dev/sdd1 /mnt/home
[root@localhost-live home]# ls /mnt/home
lost+found
[root@localhost-live home]# ls /mnt/home/lost+found/
[root@localhost-live home]# umount /mnt/home
[root@localhost-live home]# mount /dev/sdb1 /mnt/home
[root@localhost-live home]# ls /mnt/home
lost+found
[root@localhost-live home]# ddrescue -f -n /dev/sdb /dev/sdd /root/recovery.log
GNU ddrescue 1.25
Press Ctrl-C to interrupt
ipos: 1772 GB, non-trimmed: 57184 kB, current rate: 180 kB/s
ipos: 1986 GB, non-trimmed: 0 B, current rate: 40448 B/s
opos: 1986 GB, non-scraped: 16249 kB, average rate: 18850 kB/s
non-tried: 0 B, bad-sector: 1029 kB, error rate: 0 B/s
rescued: 2000 GB, bad areas: 2010, run time: 1d 5h 28m
pct rescued: 99.99%, read errors: 3885, remaining time: 35m
time since last successful read: n/a
Finished
[root@localhost-live home]# ddrescue -f -n /dev/sdd /dev/sde /root/recovery2.log
GNU ddrescue 1.25
Press Ctrl-C to interrupt
ipos: 2000 GB, non-trimmed: 0 B, current rate: 75522 kB/s
opos: 2000 GB, non-scraped: 0 B, average rate: 93302 kB/s
non-tried: 0 B, bad-sector: 0 B, error rate: 0 B/s
rescued: 2000 GB, bad areas: 0, run time: 5h 57m 19s
pct rescued: 100.00%, read errors: 0, remaining time: n/a
time since last successful read: n/a
Finished
[root@localhost-live home]# e2fsck /dev/sdd
e2fsck 1.45.6 (20-Mar-2020)
ext2fs_open2: Bad magic number in super-block
e2fsck: Superblock invalid, trying backup blocks…
e2fsck: Bad magic number in super-block while trying to open /dev/sdd
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193
or
e2fsck -b 32768
Found a dos partition table in /dev/sdd
[root@localhost-live home]# e2fsck -b 8193 /dev/sdd
e2fsck 1.45.6 (20-Mar-2020)
e2fsck: Bad magic number in super-block while trying to open /dev/sdd
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193
or
e2fsck -b 32768
Found a dos partition table in /dev/sdd
[root@localhost-live home]# e2fsck -b 32768 /dev/sdd
e2fsck 1.45.6 (20-Mar-2020)
e2fsck: Bad magic number in super-block while trying to open /dev/sdd
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193
or
e2fsck -b 32768
Found a dos partition table in /dev/sdd
[root@localhost-live home]# fsck /dev/sdd
fsck from util-linux 2.36
e2fsck 1.45.6 (20-Mar-2020)
ext2fs_open2: Bad magic number in super-block
fsck.ext2: Superblock invalid, trying backup blocks…
fsck.ext2: Bad magic number in super-block while trying to open /dev/sdd
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193
or
e2fsck -b 32768
Found a dos partition table in /dev/sdd
[root@localhost-live home]# umount /dev/sdb1
[root@localhost-live home]# mount /dev/sdb1 /home
[root@localhost-live home]# ls /home
lost+found
[root@localhost-live home]# ls /home/lost+found/
[root@localhost-live home]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 1.8G 1 loop /run/media/liveuser/disk
loop1 7:1 0 7.5G 1 loop
├─live-rw 253:0 0 7.5G 0 dm /
└─live-base 253:1 0 7.5G 1 dm
loop2 7:2 0 32G 0 loop
└─live-rw 253:0 0 7.5G 0 dm /
sda 8:0 0 232.9G 0 disk
├─sda1 8:1 0 600M 0 part
├─sda2 8:2 0 1G 0 part
└─sda3 8:3 0 230G 0 part
├─fedora_localhost–live-home00
│ 253:2 0 10G 0 lvm
└─fedora_localhost–live-root00
253:3 0 220G 0 lvm
sdb 8:16 0 1.8T 0 disk
└─sdb1 8:17 0 1.8T 0 part /home
sdc 8:32 0 55.9G 0 disk
sdd 8:48 0 1.8T 0 disk
└─sdd1 8:49 0 1.8T 0 part
sde 8:64 0 1.8T 0 disk
└─sde1 8:65 0 1.8T 0 part
sdf 8:80 1 14.9G 0 disk
├─sdf1 8:81 1 1.9G 0 part /run/initramfs/live
├─sdf2 8:82 1 10.9M 0 part
└─sdf3 8:83 1 22.9M 0 part
zram0 252:0 0 4G 0 disk [SWAP]
[root@localhost-live home]# fdisk /dev/sdd
Welcome to fdisk (util-linux 2.36).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): print
Disk /dev/sdd: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: WDC WD20EZAZ-00L
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xe4321777
Device Boot Start End Sectors Size Id Type
/dev/sdd1 2048 3907029167 3907027120 1.8T 83 Linux
Command (m for help): q
[root@localhost-live home]# e2fsck /dev/sdd1
e2fsck 1.45.6 (20-Mar-2020)
/dev/sdd1: recovering journal
/dev/sdd1: clean, 11/122101760 files, 7947223/488378390 blocks
[root@localhost-live home]# umount /dev/sdb1
[root@localhost-live home]# mount /dev/sdd1
mount: /dev/sdd1: can’t find in /etc/fstab.
[root@localhost-live home]# mount /dev/sdd1 /home
[root@localhost-live home]# ls /home
lost+found
[root@localhost-live home]# cd
[root@localhost-live ~]# umount /dev/sdd1
[root@localhost-live ~]# mount /dev/sdd1 /home
[root@localhost-live ~]# ls /home
lost+found
[root@localhost-live ~]# umount /dev/sdd1
[root@localhost-live ~]# ls /home
liveuser
[root@localhost-live ~]# df /dev/sdb1
Filesystem 1K-blocks Used Available Use% Mounted on
devtmpfs 16380800 0 16380800 0% /dev
[root@localhost-live ~]# df /dev/sdd1
Filesystem 1K-blocks Used Available Use% Mounted on
devtmpfs 16380800 0 16380800 0% /dev
[root@localhost-live ~]# df /dev/sde1
Filesystem 1K-blocks Used Available Use% Mounted on
devtmpfs 16380800 0 16380800 0% /dev
[root@localhost-live ~]# mkdir /mnt/home
mkdir: cannot create directory ‘/mnt/home’: File exists
[root@localhost-live ~]# mount /dev/sdb1 /mnt/home
[root@localhost-live ~]# ls /mnt/home
lost+found
[root@localhost-live ~]# /bin/df /dev/sdb1
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdb1 1921802520 77852 1824032608 1% /mnt/home
[root@localhost-live ~]# ^C
[root@localhost-live ~]# alias df
-bash: alias: df: not found
[root@localhost-live ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
devtmpfs 16380800 0 16380800 0% /dev
tmpfs 16446560 407896 16038664 3% /dev/shm
tmpfs 6578628 10096 6568532 1% /run
/dev/sdf1 2001456 2001456 0 100% /run/initramfs/live
/dev/mapper/live-rw 7675112 6456212 1202516 85% /
tmpfs 16446564 20856 16425708 1% /tmp
vartmp 16446560 0 16446560 0% /var/tmp
tmpfs 3289312 932 3288380 1% /run/user/1000
/dev/loop0 1885056 1885056 0 100% /run/media/liveuser/disk
/dev/sdb1 1921802520 77852 1824032608 1% /mnt/home
OMG. What were you doing with command 43? Did you abort? Please say yes….
I don’t know what I was doing. That’s it. It should have been sdd1. I’m screwed.
Yes, yes you are. And I feel like I handed you the gun.
Oh, I don’t think so. I was going to do it regardless of anything you did. I can’t believe I did that.
It’s not the end of the world. I have an October backup on my notebook of my Documents folder, and I have an older drive that will have some older mail on it. I’ll just have to resurrect as much as I can.
Is it truly impossible to get data off a reformatted drive?
/dev/sdd1 not that it matters now….
Roman alphabetic with its symmetrical charset is a digital catastrophe only one keystroke away. Lexdistic programming….
Yes, sdd1.
Well, if it is, it is outside my wheelhouse. My understanding is that the reason mkfs takes so long is that it used to scan through all the blocks on the device looking for bad blocks. In the bad old days the drive electronics did not provide error detection/correction, so mkfs was ‘friendly’ enough to write and read each block as it went, looking for bad blocks it could add to the bad-block table. Now maybe, just maybe, modern drives have error correction built in, so mkfs only looks for read errors when formatting. With the super-big drives it’s likely mkfs isn’t walking through all the blocks. So it’s possible that with the help of a fs guru you might be able to reconstruct the inode table. Of course, you could theoretically rebuild an F-1 engine from the Saturn V that is currently on display; it’s probably the same level of effort and requires specialized tools. But if mkfs worked like it used to, then for all practical purposes the short answer is no. You might find a Linux fs restoration house that could look at it for some big $ if it’s that critical to you.
My unix/linux days are long gone… but imagine this stuff as a ca. 1990 Mac commercial!
For better or worse, Macs are all Unix-based as of OS X; the desktop is an app on top. Now in 1990 that wasn’t so. In fact, in 1993 I worked writing a Mac app on System 6 that used AppleTalk Remote and an ‘INIT’ that I also wrote. (INITs patched “Mac kernel” memory, acted like daemons under Linux or Services under Windows, and started up at boot time if installed.) As you can imagine, debugging an INIT was… interesting… Scott Knaster had a great series of technical books that described the process, the Apple/Mac equivalent of O’Reilly. Not bad for the little computer that could….
How was Linux viewed back in those days? Well, by Novell, which was peddling a version before IBM did, anyway….
https://www.youtube.com/watch?v=GVOnFdMf0RU
Command 43 in your history… User error on device?
I miss the days when all hard drives had write-protect jumpers. That seems to have fallen out of favor with modern internal disks. Truly a shame, as we can see.
“Is it truly impossible to get data off a reformatted drive?”
Your best bet at this point is probably to contact a data recovery service.
I did find this really old thread that suggests you might be able to recover at least some files.
https://ubuntuforums.org/showthread.php?t=2230994
Yes, I see there’s software that does it, but so far I only see it for Windows and Mac, not for ext4 on LVM. All the data is in theory still on the drive, because I haven’t written to it since I started trying to recover it. All I did was format it (sigh)… There should be software that can recover the pointers.
Try a data recovery service. I’m sure the FBI has some interesting tools as well….
Hah. Commercial recovery is a thing now, and has been for a long time. Since Rand had the presence of mind to stop poking at the drive once he realized what he’d done, he probably has the best possible chance of recovering. I don’t know how complete a format mkfs does under Linux; if it only did the rough equivalent of wiping out the root directory, almost all the data will still be there. If it actually zeroed out all the bytes on the disk, well, technically a SQUID can supposedly still recover it, but I would imagine pricing would be based on how hard it is to recover the data. Last time someone I know checked, it was surprisingly inexpensive, although it probably won’t be cheap.
It happened far too quickly to have wiped out a 2T drive. They’ll give me a free estimate, then I’ll decide whether to go with them. Hoping it won’t be more than a couple hundred bucks.
If in fact you reformatted a partition, you might have to unmount it and then use a utility like Photorec to sift through the flotsam and jetsam of the disk, looking for files that it recognizes. Supposedly it can find a wide array of file types. I’ve installed it but never used it. Of course, this only works if it was a “quick” format that merely sets up directories and doesn’t write a lot of stuff to disk.
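If you try PhotoRec, the rough shape of it is something like this (a sketch only; I haven’t run it either, so check the man page, and note that /mnt/recovered is a placeholder directory that must live on a different disk than the one you’re carving):
umount /dev/sdb1                           # never carve from a mounted filesystem
photorec /log /d /mnt/recovered /dev/sdb   # the rest is an interactive menu: pick the disk, the partition
                                           # (or the whole disk), the ext filesystem family, and Whole/Free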
Right, it will now be more difficult to retrieve the data, but I notice that /dev/sdb1 was formatted as ext4, whereas the original data was on LVM. That might mean that the original ext4 metadata was not entirely overwritten, since it would be in a different place relative to the new, non-LVM ext4 filesystem. As other commenters have mentioned, a data-recovery service might be your best bet. Almost all of the data blocks are still intact, and probably most of the metadata too, but stitching it together will be tricky.
You could try searching for ext4 superblocks across the entire device. There should be several sets. No harm in trying a read-only mount on one of the old ones, corresponding to any ext4 filesystems created in the former LVM device.
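One way to probe for them without writing anything, sketched under the assumption of 4k filesystem blocks and the usual 1 KiB superblock alignment (if the old filesystem lived inside LVM, its superblocks will not sit at the default offsets relative to the partition start, which is exactly why a brute scan or testdisk’s superblock search is needed):
# Scan for the ext4 superblock magic (0xEF53, stored as bytes "53 ef" at offset 0x38 of a superblock).
# Painfully slow on a 2 TB disk; narrow the region with dd skip=/count= once you have a guess.
dd if=/dev/sdb bs=1M status=progress | od -A d -t x1 -v |
  awk '($1 % 1024) == 48 && $10 == "53" && $11 == "ef" { print "possible superblock at byte", $1 - 48 }'
# To inspect a promising hit read-only, attach a loop device at the presumed filesystem start
# (candidate superblock minus 1024 bytes) and dump its header; CANDIDATE is a placeholder offset.
losetup -r -o $((CANDIDATE - 1024)) -f --show /dev/sdb   # prints the loop device it created, e.g. /dev/loopN
dumpe2fs -h /dev/loopN | head                            # a sane volume name/UUID means you found the old fs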
Ubuntu has some data-recovery advice here:
https://help.ubuntu.com/community/DataRecovery
I haven’t used foremost or scalpel, though it sounds like they can just be given the entire image to root through.
Problem is that I don’t have an image. I suppose I could try to create one with ddrescue, but it will take another day or so.
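If you do end up imaging it again, a sketch of how the pieces might fit together (the paths are placeholders for a mount on a big spare drive, and scalpel needs file types uncommented in its scalpel.conf before it will find anything):
ddrescue -d -r3 /dev/sdb /mnt/spare/sdb.img /mnt/spare/sdb.map   # image the whole raw drive, retrying bad areas 3 times
foremost -i /mnt/spare/sdb.img -o /mnt/spare/carved -t all       # carve recognizable file types out of the image
# or: scalpel -o /mnt/spare/carved /mnt/spare/sdb.img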