DEBUGFS: DUMP FULL CORRUPT VOLUME TO DIRECTORY SCRIPT (READ ARTICLE BELOW FIRST)

NOTE: first read the article below.

First mount a USB drive / samba share / NTFS volume / iSCSI target and note its full absolute pathname. Remove any trailing slash, make sure the leading slash is there, and put that into the BACKUPLOCATION variable.
Change the THE_FS variable to match the corrupt volume.
Here is the script:
#!/bin/bash
################################################
# UPDATE - 1/10/2014
# DESCRIPTION: RDUMP CORRUPT FILESYSTEM TO USB
# USING DEBUGFS WITH CATASTROPHIC MODE
# THIS WILL NOT CARE ABOUT METADATA AS MUCH
# requirements: debugfs, sed, egrep, awk
################################################
# ONLY THINGS TO CHANGE: BACKUPLOCATION, WHERE YOU'RE DUMPING DATA (note: top-level folder names are preserved on dump)
# NOTE: BACKUPLOCATION STARTS WITH A / AND ENDS WITHOUT A /
# YOU CAN ADD EXTRA ARGS TO DEBUGFS IF NEEDED (SOMETIMES JUST -c DOESN'T WORK)
################################################
# Change /dev/sda1 to match your volume name
# BEFORE RUNNING THIS TEST LIKE SO
# debugfs -R ls -c /dev/sda1
# MAKE SURE ALL YOUR FOLDERS SHOW UP WITH THIS:
# debugfs -R ls -c /dev/sda1 | sed -e 's/)/\n/g' | egrep -i "[[:alpha:]]" | awk '{print $1}'
#################################################
BACKUPLOCATION="/mnt/_BACKUP" # <---------------------------------- change this to match your dump location
THE_FS="/dev/sda1" # <--------------------------------------------- change this to match your corrupt volume
OTHER_OPTIONS=""; # if you need to add extra args to the debugfs script
cd ${BACKUPLOCATION}
echo FROM ${THE_FS} GOING TO BACKUP THIS:
debugfs ${OTHER_OPTIONS} -R ls -c ${THE_FS}
for i in `debugfs ${OTHER_OPTIONS} -R ls -c ${THE_FS} | sed -e 's/)/\n/g' | egrep -i "[[:alpha:]]" | awk '{print $1}'`
do
echo "WORKING ON: ${i}"
time debugfs ${OTHER_OPTIONS} -R "rdump /$i ${BACKUPLOCATION}" -c ${THE_FS}
du -hc ${BACKUPLOCATION}/${i} | nl > ${BACKUPLOCATION}/du-${i}.txt 2>&1 &
done
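
A quick note on running it: save the script to a file, make it executable, and run it from a root shell (the filename rdump-all.sh is just an example name, not anything official):

# chmod +x rdump-all.sh
# ./rdump-all.sh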

HOW TO USE DEBUGFS TO RECOVER A CORRUPT EXT FILESYSTEM

 
PLEASE READ THIS ALL THE WAY THROUGH BEFORE DOING ANYTHING. I MENTION SOME KEY FACTS IN THE MIDDLE WHICH I FEEL I SHOULD HAVE MENTIONED IN THE BEGINNING 🙂
 
ALSO, PLEASE FORGIVE ME IF ALL OF A SUDDEN I GO VERY BASIC ON YOU; THIS IS JUST MY RANDOM STYLE. IN THE END I'M VERY THOROUGH, I PROMISE. ALMOST TOO THOROUGH.
 
NOTE: I TRY TO PREPEND LINES THAT ARE COMMANDS WITH A #. I WILL NOT PUT A # ON COMMANDS IN THE MIDDLE OF A SENTENCE, AND I WILL NOT PUT # ON SCRIPTS.
 
Some filesystems get so corrupt that a simple mount doesn't work. Even mounting with alternate superblocks doesn't work. A filesystem check throws way too many errors, pages and pages of them. It's a horrible situation, but possible to get out of, kinda; nothing is guaranteed to be recovered.
 
PRESTEPS: if the filesystem is not too corrupt and you have it mounted, unmount it. In this example I'm going to unmount sda1, as that's the fs we are working on. If that is your root filesystem, pop in a Linux recovery system like Knoppix and work from there. Knoppix gives you another root environment to work from, so that you can unmount sda1 or whatever your volume is.
 
Run a filesystem check with no fixing; that's the -fn option. (Remember, the check-and-repair option, -fy, runs an automatic fix that's kind of like a blender: you end up with mumbo jumbo. Sometimes it's good, sometimes it's bad, and sometimes the stuff lands in lost+found. The safe assumption is that the end result is bad, which is why, before doing a check and repair with -fy, you should always back up/clone the disk or whatever data is still reachable.)
 
The way I run the filesystem check is like this (for this article I'm pretending the data is on sda1):
 
# fsck -fn -C0 /dev/sda1

 

The -fn makes sure that we are safe and only check the filesystem, never repair it. Remember, the whole point is to avoid any write operations. The -C0 gives a percentage progress bar.
 
Better than that is to run the filesystem check in a "nohup &" wrapper, which runs the command in the background and writes the screen output to a file called nohup.out in the directory you ran it from (there is a way to redirect that output elsewhere, and thereby name it something else; see the sketch below).
 
# cd /
# nohup fsck -fn -C0 /dev/sda1 &
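
If you would rather pick the log's name and location yourself instead of hunting for nohup.out, redirect the output explicitly (a minimal sketch; the path /root/fsck-check.log is just an example name):

# nohup fsck -fn -C0 /dev/sda1 > /root/fsck-check.log 2>&1 &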

 

Hit enter twice for this one, and then you will be back at the bash prompt while the command runs safely in the background.
 
To see the output of the fsck under nohup, just tail the nohup file like this:
 
# tail -f /nohup.out

 

That's why I cd to / before running nohup, so that the nohup.out file automatically lands in / for easy finding.
 
Cancel the view at any time with Ctrl+C; it doesn't hurt anything. To cancel the fsck itself you can "killall fsck" or "killall -9 fsck", or find the PID of the fsck with "ps" or "ps aux" and then kill it with "kill PID#" or "kill -9 PID#", where PID# is the actual PID number. So if, for example, the PID of my fsck was 555, I would first try "kill 555", then run the "ps" commands to see if it's still there; if it didn't get killed, then "kill -9 555".
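
Here is that kill dance as a sketch (555 is a made-up PID; the [f]sck bracket trick keeps grep from matching itself):

# ps aux | grep "[f]sck"
# kill 555
# ps aux | grep "[f]sck"
# kill -9 555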
 
Also run the following commands:
 
# dumpe2fs -h /dev/sda1

 

That spits out the header (superblock) information of the filesystem. In there we are looking for the following:
 
Block count, Free blocks, and Block size: these give us an idea of how much data we are going to recover.
 
The formula is: Total Data = (Block count - Free blocks) * Block size
Remember that the Block size is given in bytes, so if it's 4096, that means 4096 bytes.
After crunching that formula you are left with the size of your data in bytes.
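
If you would rather let the shell crunch that formula for you, here is a minimal sketch that pulls the three numbers straight out of the dumpe2fs -h output (assuming the standard e2fsprogs field names "Block count", "Free blocks", and "Block size"):

BLOCK_COUNT=$(dumpe2fs -h /dev/sda1 2>/dev/null | awk -F: '/^Block count/ {gsub(/ /,"",$2); print $2}')
FREE_BLOCKS=$(dumpe2fs -h /dev/sda1 2>/dev/null | awk -F: '/^Free blocks/ {gsub(/ /,"",$2); print $2}')
BLOCK_SIZE=$(dumpe2fs -h /dev/sda1 2>/dev/null | awk -F: '/^Block size/ {gsub(/ /,"",$2); print $2}')
echo "Total Data: $(( (BLOCK_COUNT - FREE_BLOCKS) * BLOCK_SIZE )) bytes"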
From the dumpe2fs -h /dev/sda1 output we are also looking at the "Filesystem state". If it says anything other than "clean", you have problems. The best status check of a filesystem is the "fsck -fn -C0 /dev/sda1" command above; however, that takes time, while dumpe2fs -h is instant. Also, a state of "clean with errors" can be bad or okay: bad meaning your FS won't mount, okay meaning there are some errors but it still mounts. (If the FS mounts, I would just back up the data from a read-only mount of the filesystem. If it's currently mounted, remount it like so: "mount -o remount,ro /dev/sda1", and then back up the data. If it's not yet mounted, for example because you booted the unit into a recovery system like Knoppix, do "mount -o ro /dev/sda1 /randommountpoint1"; you can always name your random mount point whatever you want.)
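
Spelled out as commands (same as the parenthetical above; /randommountpoint1 is whatever empty directory you choose):

# mount -o remount,ro /dev/sda1
# mkdir /randommountpoint1
# mount -o ro /dev/sda1 /randommountpoint1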
 
Next, let's get all the locations of the backup superblocks. (Remember, the inodes are the data that explain the data; an inode is like the filename and the properties of a file, not the actual data itself.)
 
# dumpe2fs /dev/sda1 | grep -i "superblock"
# mke2fs -n /dev/sda1

We are looking for a list that looks like this:

 
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848, 512000000, 550731776, 644972544

 

And I want you to turn it into a list that looks like this, space-delimited only:
 
32768 98304 163840 229376 294912 819200 884736 1605632 2654208 4096000 7962624 11239424 20480000 23887872 71663616 78675968 102400000 214990848 512000000 550731776 644972544

 

Set this big list into a variable:
BIGLIST="32768 98304 163840 229376 294912 819200 884736 1605632 2654208 4096000 7962624 11239424 20480000 23887872 71663616 78675968 102400000 214990848 512000000 550731776 644972544"

Then we can refer to it at any time by typing $BIGLIST or ${BIGLIST}.
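
If you would rather not build that line by hand, here is a sketch that scrapes the backup superblock numbers out of the full dumpe2fs output and flattens them into one space-delimited variable (it keys on the "Backup superblock at N" lines dumpe2fs prints per block group, so it only helps if dumpe2fs can still read the volume):

BIGLIST=$(dumpe2fs /dev/sda1 2>/dev/null | awk -F'at ' '/Backup superblock/ {print $2}' | cut -d, -f1 | tr '\n' ' ')
echo $BIGLIST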

 
We are going to use this list later to try several different recoveries. First we will try to recover without pointing at an alternate superblock, and then we will try every superblock listed above; not manually, but with a for loop.
 
If the problem is that your filesystem is not mounting, you can try something like this. If it works, you can stop right here, back up your data, and you're done; at any point where you have access to the data and can successfully extract it, you can stop following this article:
 
for i in $BIGLIST; do
echo "===Trying to mount with SUPERBLOCK: $i===="
mount -o sb=$i /dev/sda1 /randommountpoint1
done

 

Or shrunk down to one line (notice where I place the semicolons: one at the end of every command, but none after the "do"):
 
for i in $BIGLIST; do echo "===Trying to mount with SUPERBLOCK: $i===="; mount -o sb=$i /dev/sda1 /randommountpoint1 ; done;
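
A slightly safer variant of the same loop mounts read-only (no writes to an already hurting fs) and stops at the first superblock that takes:

for i in $BIGLIST; do
echo "===Trying to mount with SUPERBLOCK: $i===="
mount -o ro,sb=$i /dev/sda1 /randommountpoint1 && { echo "MOUNTED with sb=$i"; break; }
done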

If none of that worked, then we are left with debugfs; this command is the debug tool for the ext filesystems. I prefer to use it only in catastrophic mode (the -c option), as it's the last resort for me before I give up and move on to something like Foremost or PhotoRec.

 
An excerpt from the man page explains what -c (catastrophic mode) does:
 
"Specifies that the file system should be opened in catastrophic mode, in which the inode and group bitmaps are not read initially. This can be useful for filesystems with significant corruption, but because of this, catastrophic mode forces the filesystem to be opened read-only."

 

 
First of all, though, I should probably show how the man page describes debugfs:

"The debugfs program is an interactive file system debugger. It can be used to examine and change the state of an ext2, ext3, or ext4 file system. device is the special file corresponding to the device containing the file system (e.g /dev/hdXX)."

 

It also tells us to use the command with this syntax / synopsis: "debugfs [ -Vwci ] [ -b blocksize ] [ -s superblock ] [ -f cmd_file ] [ -R request ] [ -d data_source_device ] [ device ]"
 
Of those, I only use the -c, -R, -s, and -b options.
 
Final man page excerpts:

 

-b blocksize 
Forces the use of the given block size for the file system, rather than detecting the correct block size as normal. 

-s superblock 
Causes the file system superblock to be read from the given block number, instead of using the primary superblock (located at an offset of 1024 bytes from the beginning of the filesystem). If you specify the -s option, you must also provide the blocksize of the filesystem via the -b option. 

-R request 
Causes debugfs to execute the single command request, and then exit....

 

The plan of action is as such:
 
1. Set up a mount destination (I won't go deep into this; just mount a USB drive or a network share. Whichever way you go, make sure it has enough space to store what we calculated above as "Total Data").
 
2. Create a subdirectory in the mount destination to dump everything to. This is optional; I'm just OCD about the organization of folders.
 
3. We will try to enter debugfs using regular methods, without specifying any alternate superblock or blocksize. If that fails, we will run a script to find the winning combination.
 
4. I will show you how to use debugfs as a script, because debugfs is normally set up as a prompt program, like ftp, where you have to type commands into it. Scripting it is nice for doing lots of mass operations, especially since debugfs has one big catch (it's more of an annoying catch).
 
What's that catch of debugfs? With this I will explain the rdump command of debugfs.
 
Imagine our /dev/sda1 filesystem has the following folders on the root: media, etc, sys, var, home, data
 
Well, with debugfs you can't say "extract all of /dev/sda1" (extract meaning dump; or, if you want to be more technical, rdump, for recursive dump). You have to specify each folder one by one. However, since we have the ability to dump recursively, all we have to do is one rdump for media, one for etc, one for sys, one for var, one for home, and one for data. It's easier to automate that with a script, as sketched below. Just in case you are wondering: one rdump extracts all of the contents of the named folder.
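
The sketch: for the imaginary root folders above, the one-by-one rdumps collapse into a loop like this (the folder names are from the made-up example):

for d in media etc sys var home data; do
debugfs -R "rdump /$d ." -c /dev/sda1
done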
 
Let me also give you an example of how rdump lays out its output, because to me it's important whether it creates the folder or not. Most people don't worry about that, but I hate to mix my directories up.
 
For example, say I'm sitting in an empty directory called /destination, which is what my USB disk is mounted to, and I run "rdump /media ." (sidenote about syntax: I'm telling rdump to take the /media folder from /dev/sda1 and dump it here, hence the dot; "here" being the /destination directory I'm in, which is the USB). The good news is I get the media folder itself there, not just the insides of the media folder. So in the end I have a /destination/media folder with all of its correct contents.
 
If you're wondering whether there is a way to do "rdump / .", the answer is no. That's a limitation of the program, so you have to rdump every folder and file on the root one by one, as explained above. But that's not a big deal, especially since no one in their right mind should have more than a dozen folders (and more than a dozen files) on the root of their filesystem; if they do, that's fine, it's just more rdump commands.
 
So for the scenario above I would just do:
"rdump /media ."
"rdump /etc ."
"rdump /sys ."
"rdump /var ."
"rdump /home ."
"rdump /home ."

So let's begin.

 
THE STEPS
#########
 
(Step 0) 
 
Make the /mnt directory if it's not there. If it is there and something else is mounted on it, unmount it. Check mounts with "mount" and unmount things with "umount ..."; look at Google for more info on that.
 
mkdir /mnt

 

(Step 1)
Mount something to put the DATA on, either a remote share or a USB.
 
Mount USB
 
Plug in your USB and see if it shows up, and how it shows up:
# lsusb
# cat /proc/partitions
# dmesg

Whatever letter and partition number your USB gets; I'll just call the letter "b" and the partition number 1:

# mount /dev/sdb1 /mnt

-- or, if you're mounting a share --

 
LIST SHARES. IF ASKED FOR A PASSWORD, PUT IN THE PASSWORD OF A USER THAT HAS ACCESS TO THAT SHARE; IT WILL ENUMERATE THE SHARES IF YOU GAVE CORRECT CREDENTIALS.
 
# smbclient -L remote-ip
# smbclient -L remote-ip -U username

MOUNT THE SHARE

# mount -t cifs //remote-ip/sharename /mnt
# mount -t cifs -o user=username //remote-ip/sharename /mnt

 

MAKE SURE THAT THE DESTINATION HAS ENOUGH SPACE TO COVER THE debugfs DUMP. AS LONG AS YOU HAVE MORE SPACE THAN THE "Total Data" NUMBER WE CALCULATED ABOVE, YOU'RE SET.
 
# df
# df -h

or both at once:

# df && df -h

(Step 2)

Create the optional directories to dump to
 
# mkdir /mnt/dump

(Step 3)

Enter debugfs in catastrophic mode (-c). This mode tries its best to open the damaged filesystem; without catastrophic mode, even debugfs won't work here.
 
FIRST, GO TO THE FOLDER WHERE YOU WANT TO DUMP THE RECOVERY TO:
 
# cd /mnt/dump
# debugfs -c /dev/sda1

 

DEBUGFS OPENS UP A NEW PROMPT. IN IT, TYPE THE FOLLOWING (ONLY TYPE THE STUFF AFTER THE debugfs: PART).
 
FIRST, LIST THE CURRENT DIRECTORIES WE WILL ATTEMPT TO RECOVER:
# debugfs: ls

OUTPUT OF LS IS SUPPRESSED HERE BECAUSE I'M MAKING THIS EXAMPLE UP AS I GO. (You should see the folder name, the inode in parentheses, and some other entries; we only care about the name of the folder.)

 
LET'S RECOVER THE backup FOLDER USING rdump [filesystem directory] [local directory, i.e. the dump destination]; rdump STANDS FOR RECURSIVE DUMP. WE TELL IT THE FOLDER TO DUMP FROM /dev/sda1, WHICH IN THIS CASE IS /backup, AND THEN WE TELL IT WHERE TO DUMP TO, WHICH IS . (THE CURRENT WORKING DIRECTORY, /mnt/dump; REMEMBER, WE cd'D INTO THIS FOLDER BEFORE RUNNING debugfs -c).
 
# debugfs: rdump /backup .

IF IT CAN RECOVER, IT WILL. IT WILL ALSO TAKE FOREVER ON BIG FOLDERS, AND IT WILL COME UP WITH SOME PERMISSIONS ERRORS; JUST IGNORE THOSE. IT WILL RECOVER ALL THAT IT CAN.

 
Now when that is done, just repeat the "ls" command and the "rdump /[folder or file] ." commands until you have all of it.
REPEAT FOR ALL THE FOLDERS THAT YOU SEE IN THE "debugfs: ls" OUTPUT. WHEN IT'S DONE, JUST TYPE "quit" AT THE debugfs: PROMPT TO EXIT. WHILE IT'S COPYING, YOU CAN DO STEP 4 BELOW TO WATCH THE PROGRESS.
 
WHAT IF STEP 3 DIDN'T WORK: SPECIFICALLY, THE NORMAL SUPERBLOCK DIDN'T WORK
###############################################
 
I know the man page blabs about the catastrophic -c option, how "the inode and group bitmaps are not read initially", but I still try it with different superblocks.
 
If debugfs -c didn't return any ls information, we need to run through a loop. Remember that list of superblock numbers I had you get?
 
Type the following to see that list again:
 
# echo $BIGLIST
32768 98304 163840 229376 294912 819200 884736 1605632 2654208 4096000 7962624 11239424 20480000 23887872 71663616 78675968 102400000 214990848 512000000 550731776 644972544

 

Did you forget how to get the $BIGLIST?
Let me recap this from the beginning(not the whole beginning but just how I got biglist):
Run "mke2fs -n /dev/sda1" or "dumpe2fs /dev/sda1 | grep -i superblock", which should give you a list of superblocks. Convert that list into the following command and hit enter after you type it or paste it in:
 
# BIGLIST="32768 98304 163840 229376 294912 819200 884736 1605632 2654208 4096000 7962624 11239424 20480000 23887872 71663616 78675968 102400000 214990848 512000000 550731776 644972544"

 

So let's try the debugfs command with all those superblocks, and let's have debugfs automatically run the "ls" command for us so that we don't have to; that's the -R switch. The -s switch is where we try all the different superblock numbers from $BIGLIST. We will also try a few different filesystem block sizes: I'm familiar with 4K filesystem blocks and 16K filesystem blocks, which translated to bytes are 4096 and 16384 respectively. You can try your own if you want:
 
BIGLIST="32768 98304 163840 229376 294912 819200 884736 1605632 2654208 4096000 7962624 11239424 20480000 23887872 71663616 78675968 102400000 214990848 512000000 550731776 644972544"
for z in 4096 16384; do
for i in $BIGLIST; do
echo "====BLOCK SIZE: $z==SB: $i===="
debugfs -s $i -b $z -R "ls" -c /dev/sda1
done
done

Or shrunk to one line:

 
BIGLIST="32768 98304 163840 229376 294912 819200 884736 1605632 2654208 4096000 7962624 11239424 20480000 23887872 71663616 78675968 102400000 214990848 512000000 550731776 644972544"
for z in 4096 16384; do for i in $BIGLIST; do echo "====BLOCK SIZE: $z==SB: $i====" && debugfs -s $i -b $z -R "ls" -c /dev/sda1; done; done;

Or, if you don't want to use the $BIGLIST variable, just do it all in one:

 
# for z in 4096 16384; do for i in 32768 98304 163840 229376 294912 819200 884736 1605632 2654208 4096000 7962624 11239424 20480000 23887872 71663616 78675968 102400000 214990848 512000000 550731776 644972544; do echo "====BLOCK SIZE: $z==SB: $i====" && debugfs -s $i -b $z -R "ls" -c /dev/sda1; done; done;

 

So okay, great, you ran one of these commands; now what? Well, hopefully one of the combinations returned a folder and file listing of some sort. Then you can use that to your advantage and enter back into debugfs with that magical superblock number and block size.
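
One way to spot the winner without scrolling: send the loop's output to a file and grep for something you know lives on the root, like lost+found (present on just about every ext filesystem); -B4 shows the few lines above the hit, including the header that names the block size and superblock. The /tmp/sbscan.txt path is just an example:

for z in 4096 16384; do for i in $BIGLIST; do echo "====BLOCK SIZE: $z==SB: $i===="; debugfs -s $i -b $z -R "ls" -c /dev/sda1; done; done > /tmp/sbscan.txt 2>&1
grep -B4 "lost+found" /tmp/sbscan.txt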
 
For example, let's say I just ran the script above and it returned a whole bunch of nothing until I got to the line:
 
====BLOCK SIZE: 4096==SB: 11239424====
 
Below which it had the full file listing, similar to this (sorry if this doesn't follow the past examples of what folders are contained on the root of /dev/sda1; this is just to illustrate that the returned superblock and block size are the winning combination):
 
====BLOCK SIZE: 4096==SB: 11239424====
 2  (12) .    2  (12) ..    11  (20) lost+found    29532161  (16) media
 46256129  (16) backup    14548993  (12) home    3022849  (12) Alpha
 12290  (20) aquota.user    12291  (20) aquota.group
 88870913  (12) Bravo

 

 
Now we know the winning block size is 4096 and the superblock # is 11239424.
 
We can start debugfs like this:
 
# debugfs -b 4096 -s 11239424 -c /dev/sda1

 

Then work your magic from there.
 
As a sidenote, you don't have to specify both the superblock and blocksize; you can have it figure out one or both. The next 3 are perfectly legal commands:
 
# debugfs -b 4096  -c /dev/sda1
# debugfs -s 11239424 -c /dev/sda1
# debugfs -c /dev/sda1

 

If all 4 work (these 3 plus the one above specifying both -b and -s), that's fine; pick whichever, they should all give the same results.
 
The final one is the original from Step 3.
 
Once inside, you do the regular "ls" and "rdump /[folder] ." as you choose. For example, to recover the above system I would do the following:
 
cd /mnt/dump
debugfs -b 4096 -s 11239424 -c /dev/sda1
ls
rdump /lost+found .
rdump /media .
rdump /backup .
rdump /home .
rdump /Alpha .
rdump /aquota.user .
rdump /aquota.group .
rdump /Bravo .

 

Of course, you would have to wait forever between each one before starting the next, which brings me to the next subject: script this stuff so you don't have to wait.
 
GREAT NOW LETS SCRIPT WITH THIS
###############################
 
So let's just jump right in and then I'll explain. Say I want to do the above in a script so I don't have to wait. As your eyes can already forecast and foresee, I will restate the above in the NOT SCRIPT section, and then jump into the script in the SCRIPT section:
 
NOT SCRIPT (ORIGINAL): WHAT WE DON'T WANT, BECAUSE IT WAITS FOR US TO TYPE IN A NEW COMMAND EVERY TIME (remember, we don't have to write the "debugfs: " prompt part; that's already there):
cd /mnt/dump
debugfs -b 4096 -s 11239424 -c /dev/sda1
debugfs:  ls
debugfs: rdump /lost+found .
debugfs:  rdump /media .
debugfs:  rdump /backup .
debugfs:  rdump /home .
debugfs:  rdump /Alpha .
debugfs:  rdump /aquota.user .
debugfs:  rdump /aquota.group .
debugfs:  rdump /Bravo .

 

 
SCRIPT – FINAL:
cd /mnt/dump
debugfs -b 4096 -s 11239424 -R "ls" -c /dev/sda1
debugfs -b 4096 -s 11239424 -R "rdump /lost+found ." -c /dev/sda1
debugfs -b 4096 -s 11239424 -R "rdump /media ." -c /dev/sda1
debugfs -b 4096 -s 11239424 -R "rdump /backup ." -c /dev/sda1
debugfs -b 4096 -s 11239424 -R "rdump /home ." -c /dev/sda1
debugfs -b 4096 -s 11239424 -R "rdump /Alpha ." -c /dev/sda1
debugfs -b 4096 -s 11239424 -R "rdump /aquota.user ." -c /dev/sda1
debugfs -b 4096 -s 11239424 -R "rdump /Bravo ." -c /dev/sda1

 

You can just select all of those, copy, and paste them right in. You will get the following directory structure afterwards:
 
/mnt/dump/lost+found
/mnt/dump/media
/mnt/dump/backup
/mnt/dump/home
/mnt/dump/Alpha
/mnt/dump/aquota.user
/mnt/dump/aquota.group
/mnt/dump/Bravo

 

You can also combine the above commands into one pasteable line, instead of one pasteable chunk of commands:
 
The following 2, just do separately:
# cd /mnt/dump
# debugfs -b 4096 -s 11239424 -R "ls" -c /dev/sda1

Then combine to 1 line:

# debugfs -b 4096 -s 11239424 -R "rdump /lost+found ." -c /dev/sda1; debugfs -b 4096 -s 11239424 -R "rdump /media ." -c /dev/sda1; debugfs -b 4096 -s 11239424 -R "rdump /backup ." -c /dev/sda1; debugfs -b 4096 -s 11239424 -R "rdump /home ." -c /dev/sda1; debugfs -b 4096 -s 11239424 -R "rdump /Alpha ." -c /dev/sda1; debugfs -b 4096 -s 11239424 -R "rdump /aquota.user ." -c /dev/sda1; debugfs -b 4096 -s 11239424 -R "rdump /aquota.group ." -c /dev/sda1; debugfs -b 4096 -s 11239424 -R "rdump /Bravo ." -c /dev/sda1;
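
Or skip typing the folder names entirely and generate the rdumps from the listing itself, using the same parsing trick as the script at the top of this page (a sketch; it assumes the ls output parses cleanly):

for d in $(debugfs -b 4096 -s 11239424 -R "ls" -c /dev/sda1 | sed -e 's/)/\n/g' | egrep -i "[[:alpha:]]" | awk '{print $1}'); do
debugfs -b 4096 -s 11239424 -R "rdump /$d ." -c /dev/sda1
done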

 

JUST FOR THE CURIOUS: if you don't want to specify -b and -s, and it works without specifying the superblock and block size, then you can simply do this:
 
cd /mnt/dump
debugfs -R "ls" -c /dev/sda1
debugfs -R "rdump /lost+found ." -c /dev/sda1
debugfs -R "rdump /media ." -c /dev/sda1
debugfs -R "rdump /backup ." -c /dev/sda1
debugfs -R "rdump /home ." -c /dev/sda1
debugfs -R "rdump /Alpha ." -c /dev/sda1
debugfs -R "rdump /aquota.user ." -c /dev/sda1
debugfs -R "rdump /Bravo ." -c /dev/sda1

 

Or in 1 command style:
 
First do these:
# cd /mnt/dump
# debugfs -R "ls" -c /dev/sda1

 

Then do this single line:
# debugfs -R "rdump /lost+found ." -c /dev/sda1; debugfs -R "rdump /media ." -c /dev/sda1; debugfs -R "rdump /backup ." -c /dev/sda1; debugfs -R "rdump /home ." -c /dev/sda1; debugfs -R "rdump /Alpha ." -c /dev/sda1; debugfs -R "rdump /aquota.user ." -c /dev/sda1; debugfs -R "rdump /Bravo ." -c /dev/sda1;

 

 
That's pretty much all of the important notes I have on debugfs. Here is how to use the console portion of debugfs (the non-script part, running it without -R, as we did in Step 3):
 
HOW DEBUGFS IS USED:
####################
 
debugfs operates like this: it uses commands similar to the ftp command. A quick recap: do local commands with a prefix of the letter l or !. Example: !pwd tells me the current working directory I will dump to; !pwd returns "/mnt/dump", and !ls lists nothing because there are no folders in /mnt/dump yet. Now a simple ls lists all the folders on the root of the /dev/sda1 filesystem, so it lists the following for me:
 
# debugfs -c /dev/sda1
debugfs 1.41.14 (22-Dec-2010)
/dev/sda1: catastrophic mode - not reading inode or group bitmaps
debugfs:
debugfs:  ls
 2  (12) .    2  (12) ..    11  (20) lost+found    29532161  (16) media
 46256129  (16) backup    14548993  (12) home    3022849  (12) Alpha
 12290  (20) aquota.user    12291  (20) aquota.group
 88870913  (12) Bravo

 

 
Other local commands use the l prefix, as I mentioned. For example, we are in the /mnt/dump directory; what if I wanted to change to /mnt/other so that I can dump the files there:
"lcd /mnt/other"
 
Type “help” to get a list of all the options.
 
(STEP 4) Optional: watch the progress!!
 
To watch the progress, open another shell; or, if you're using screen, open another screen; or, if you're using a detach tool, detach; or whatever:
 
watch -n0.5 "df && df -h"

BEFORE BACKING UP, IT'S A GOOD IDEA TO RUN "df && df -h" TO SEE THE SIZE OF THE DESTINATION, IN KILOBYTES AND IN HUMAN-READABLE FORM; HOPEFULLY IT'S CLOSE TO EMPTY. THEN YOU HOPEFULLY KNOW THE SIZE OF THE DATA WE WANT TO BACK UP FROM /dev/sda1.

 
IF YOU DIDN'T RUN df && df -h BEFORE THE DEBUGFS COMMAND, THAT'S FINE; REMEMBER WE STILL HAVE OUR OUTPUT OF dumpe2fs -h /dev/sda1, WHICH WE USED TO CALCULATE Total Data. THAT NUMBER IS IN BYTES.
 
TO RECAP: RUN "dumpe2fs -h /dev/sda1" BEFORE RUNNING DEBUGFS AND GET THE FOLLOWING NUMBERS: Block count, Free blocks, and Block size. Block size is usually 4096, meaning 4096 bytes, or 4 kilobytes. THEN DO THE FOLLOWING MATH TO FIND OUT THE AMOUNT OF DATA WE WILL BE TARGETING (NOTE: THIS IS THE NUMBER WE WANT OUR WATCH COMMAND TO REACH, ASSUMING YOU STARTED WITH AN EMPTY DESTINATION DEVICE/SHARE): ([Block count] - [Free blocks]) * [Block size] = [total amount of data in bytes]
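
If you want the destination's byte count directly instead of eyeballing df, du can report it (a sketch; -sb means summarize, in bytes, and /mnt/dump is our example destination):

# watch -n5 "du -sb /mnt/dump"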
 
SUMMARY VIA QUICK FULL EXAMPLE
##############################
 
Since this is a lot to take in, and I have a long-winded way of writing, let me throw this in: an example taken from the beginning. This is a script-style example.
 
Scenario: the root filesystem failed, and the Linux machine doesn't boot up.
 
1. Download Knoppix on another PC and burn it to a CD
 
2. Pop the Knoppix CD into the problem PC, start it up, and open a terminal shell.
Run the following commands to identify your main corrupt filesystem and how it's labeled:
# dmesg | grep "[hs]d[a-z]"
# cat /proc/partitions

 

 
Let's say the filesystem in this case was also /dev/sda1.
 
Hopefully you can get the filesystem size information. This is optional; it's just so that when the backup is happening, we know when it's close to done:
 
# dumpe2fs -h /dev/sda1

We get the following information:
Block count:              1459093504
Free blocks:              410639656
Block size:               4096

PLUGGING IN TO MY FORMULA GIVES: (1459093504-410639656)*4096 = 4.294467e+12 bytes

WHICH BEGS ME TO SHOW YOU WOLFRAMALPHA FOR CONVERSION OF UNITS (it's an amazing calculator)...
GO TO www.wolframalpha.com AND TYPE "(1459093504-410639656)*4096 bytes" IN THE BOX AND HIT ENTER.
ONE OF THE ANSWERS IS: 4.29 TB. NOTE: ALL OF THE ANSWERS ARE CORRECT; IT SHOWS YOU LOTS OF FORMS OF THE CORRECT ANSWER, WHICH IS WHY I LIKE IT.
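
If you are offline, GNU coreutils can do the unit conversion too (a sketch; numfmt ships with coreutils, and --to=si prints powers of 1000, matching the TB figure above, so this prints something like 4.3T):

# echo $(( (1459093504 - 410639656) * 4096 )) | numfmt --to=si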
 
3. Mount the destination, where we will dump the damaged data to. I will show this example in both the USB sense and the mounting-a-share sense.
 
3a. I want to back up all my stuff to a USB: plug in the USB (the USB you found that has 5 TB of storage, lol) and run the following commands to identify it. Let's pretend in this case it's sdb1.
 
# dmesg | grep "[hs]d[a-z]"
# cat /proc/partitions
# mount /dev/sdb1 /mnt

 

3b. I want to back up all my stuff to a share: first, on a Windows machine (IP address 10.10.10.10) that has enough space to cover the Total Data of 4.29 TB, I make a folder called "sally" on the volume that has the space. I right-click the folder and enable sharing on it, with full control for Everyone on the sharing side, but I limit the security side to a user called "fred" with the password "12345678" and full control for "fred". Then on Linux I do the following:
 
# smbclient -L 10.10.10.10

 

OR, if it wants a username, give it "fred":
 
# smbclient -L 10.10.10.10 -U fred

 

If asked for a password, just try the 12345678 that is fred's password. It should show me the sally share I made.
 
I mount the share with this:
 
# mount -t cifs -o user=fred //10.10.10.10/sally /mnt

 

4. Make the subdirectories for organization. Optional; I just like to have folders within folders within folders. Folderception.
# mkdir /mnt/dump

5. Get into the folder:

 
# cd /mnt/dump

6. debugfs time: get the file listing:

# debugfs -R "ls" -c /dev/sda1

 

FAIL!!! Oh no!! Well, let's try another superblock.
 
7. Find out the superblock numbers:
# mke2fs -n /dev/sda1

 

I take the superblock output, put it in a notepad, remove the commas and newlines, and add double quotes until I get:

"32768 98304 163840 229376 294912 819200 884736 1605632 2654208 4096000 7962624 11239424 20480000 23887872 71663616 78675968 102400000 214990848 512000000 550731776 644972544" (with the quotes)
 
8. Make the BIGLIST variable out of it:
# BIGLIST="32768 98304 163840 229376 294912 819200 884736 1605632 2654208 4096000 7962624 11239424 20480000 23887872 71663616 78675968 102400000 214990848 512000000 550731776 644972544"

 

 
9. Run the following scriptlet:
# for z in 4096 16384; do for i in $BIGLIST; do echo "====BLOCK SIZE: $z==SB: $i====" && debugfs -s $i -b $z -R "ls" -c /dev/sda1; done; done;

 

In this case I get a file listing with 16384 block size and superblock 819200.
 
10. debugfs time: get the file listing, revisited but not failed, unlike step 6:
# debugfs -b 16384 -s 819200 -R "ls" -c /dev/sda1

 

We get a file listing similar to the one earlier in the article. Just as an obvious side note for the confused: this is the same listing we see in step 9 when we find the correct superblock and block size:
 2  (12) .    2  (12) ..    11  (20) lost+found    29532161  (16) media
 46256129  (16) backup    14548993  (12) home    3022849  (12) Alpha
 12290  (20) aquota.user    12291  (20) aquota.group
 88870913  (12) Bravo

 

11. So let's say I want to extract out as much as I can of the following: lost+found, media, backup, home, Alpha, Bravo, and the two quota files.
 
I can copy-paste this giant block in, or even write it into a bash script:
debugfs -b 16384 -s 819200 -R "rdump /lost+found ." -c /dev/sda1
debugfs -b 16384 -s 819200 -R "rdump /media ." -c /dev/sda1
debugfs -b 16384 -s 819200 -R "rdump /backup ." -c /dev/sda1
debugfs -b 16384 -s 819200 -R "rdump /home ." -c /dev/sda1
debugfs -b 16384 -s 819200 -R "rdump /Alpha ." -c /dev/sda1
debugfs -b 16384 -s 819200 -R "rdump /aquota.user ." -c /dev/sda1
debugfs -b 16384 -s 819200 -R "rdump /aquota.group ." -c /dev/sda1
debugfs -b 16384 -s 819200 -R "rdump /Bravo ." -c /dev/sda1

 

Or I can shrink this down to one command and paste that in. I would rather do it this way, since with the block above the last command sometimes doesn't run if you don't select the final newline character. So in my opinion this next command is the best way to do it:
 
# debugfs -b 16384 -s 819200 -R "rdump /lost+found ." -c /dev/sda1; debugfs -b 16384 -s 819200 -R "rdump /media ." -c /dev/sda1; debugfs -b 16384 -s 819200 -R "rdump /backup ." -c /dev/sda1; debugfs -b 16384 -s 819200 -R "rdump /home ." -c /dev/sda1; debugfs -b 16384 -s 819200 -R "rdump /Alpha ." -c /dev/sda1; debugfs -b 16384 -s 819200 -R "rdump /aquota.user ." -c /dev/sda1; debugfs -b 16384 -s 819200 -R "rdump /aquota.group ." -c /dev/sda1; debugfs -b 16384 -s 819200 -R "rdump /Bravo ." -c /dev/sda1;

 

 
As a quick tip, we could nohup it and put it into the background. Note that nohup takes a command, not a parenthesized subshell, so wrap the command list in bash -c:
 
# nohup bash -c 'debugfs -b 16384 -s 819200 -R "rdump /lost+found ." -c /dev/sda1; debugfs -b 16384 -s 819200 -R "rdump /media ." -c /dev/sda1; debugfs -b 16384 -s 819200 -R "rdump /backup ." -c /dev/sda1; debugfs -b 16384 -s 819200 -R "rdump /home ." -c /dev/sda1; debugfs -b 16384 -s 819200 -R "rdump /Alpha ." -c /dev/sda1; debugfs -b 16384 -s 819200 -R "rdump /aquota.user ." -c /dev/sda1; debugfs -b 16384 -s 819200 -R "rdump /aquota.group ." -c /dev/sda1; debugfs -b 16384 -s 819200 -R "rdump /Bravo ." -c /dev/sda1' &

 

 
Follow the output with "tail -f nohup.out"; the nohup file in this case will be in /mnt/dump, since that's where we ran the command from.
 
12. To follow the progress just do the following:
# watch -n0.5 "df && df -h"

 

You know it's done when you reach 4.29 TB, or whatever your Total Data size was.
 
13. When you're done, just unmount your USB or share.
Type sync first, to ensure all the writes are finalized and synced across the system:
# sync
# cd /
# umount /mnt/

 
