Open STD IN in VIM

command | vim -

Or:

vim <(command)

Examples:

echo "stuff" | vim -

vim <(df)

Vim will show the open file as [stdin] instead of a filename

NOTE: can use -R for read-only: command | vim -R -

OR: vim -R <(command)

When you save it, you get to pick a file name

:w /some/save/location/text.txt

 

VIM: Save file by appending _BACKUP (or anyword) to current name

:w %:p_BACKUP

Note: in vim %:p means current filename (and path)

To help remember: think 'p' like 'path' and '%' means 'full' in vim, so %:p full path

 

VIM Tricks

gg  Go to Top

G Go to Bottom (Shift-g)

dgg to delete to the top

dG to delete to the bottom

 

Remove Empty Lines

command | egrep -v "^$"
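A quick self-contained check of the filter above (the sample text is made up):

```shell
# Sample input containing two empty lines
input="$(printf 'one\n\ntwo\n\nthree\n')"

# ^$ matches a line that is empty from start (^) to end ($); -v inverts the match
result="$(printf '%s\n' "$input" | egrep -v '^$')"

printf '%s\n' "$result"
```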

 

LINUX EXIT CODES:

Exit status 0 is good (the command completed with no problem, AKA SUCCESS); anything else is an error (look the code up in the command's man page - error codes are sometimes combined bitwise/ORed)

command

echo "COMMAND's exit code is: $?"

Or can combine

command && echo "command worked" || echo "command didn't work, exit code was $?"

 

BASH

command1 && command2 - runs command2 only if command1 is successful (useful for success messages or continuing to the next part of a program)

command1 || command2 - runs command2 only if command1 fails (useful for failure messages or exiting if something happens)

command1 && command2 || command3 - runs command1; if it fails, command3 runs, and if command1 is successful, command2 runs. Note that command3 will also run if command2 fails.

Usually command2 and command3 are chosen so that doesn't happen, because we use echo commands for those, and echo always exits 0 (success)
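A runnable sanity check of the forms above, using true and false as stand-in commands:

```shell
# command1 (true) succeeds, command2 (false) fails, so command3 runs anyway
out1="$(true && false || echo 'command3 ran')"

# With echo as command2 (always exit 0), command3 can never fire
out2="$(true && echo 'command2 ran' || echo 'command3 ran')"

echo "$out1"   # command3 ran
echo "$out2"   # command2 ran
```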

 

Replace every from-word to to-word

command-with-output | sed "s/from-word/to-word/g"

echo "Your face is a from-word" | sed 's/from-word/to-word/g'

 

To Lower

command | tr '[:upper:]' '[:lower:]'

tr '[:upper:]' '[:lower:]' < input.txt > output.txt

y="THIS IS a TeSt"

echo "${y,,}"

dd if=input.txt of=output.txt conv=lcase

awk '{ print tolower($0) }' input.txt > output.txt

perl -pe '$_= lc($_)' input.txt > output.txt

sed -e 's/\(.*\)/\L\1/' input.txt > output.txt

 

To Upper

command | tr '[:lower:]' '[:upper:]'

tr '[:lower:]' '[:upper:]' < input.txt > output.txt

y="this Is A test"

echo "${y^^}"

dd if=input.txt of=output.txt conv=ucase

awk '{ print toupper($0) }' input.txt > output.txt

perl -pe '$_= uc($_)' input.txt > output.txt

sed -e 's/\(.*\)/\U\1/' input.txt > output.txt

 

LS SORT

ls -lisah (regular ls - easy to remember lisa with h, think simpsons)

ls -lisahSt (easy to remember think lisah street - S for sort and t for time - so sorts on time)

ls -lisahS (easy to remember think lisahs - like many lisas from simpsons - so it will sort by filesize or weight because so many lisas are around)

NOTE: I know I am ridiculous

 

iftop best way to run

iftop -nNBP -i eth0

Then hit Shift-T (T toggles a cumulative TOTALS column)

n - won't resolve hostnames, N - won't resolve port names, B - everything in bytes instead of bits, P - shows the ports

Columns on the right side of iftop output/interface are then:

TOTALS,2 sec, 10 sec, 40 sec speeds

 

PS and top

ps

ps -ef --forest (remember forest is 1 r and 1 s)

ps awwfux  (think ahhh F-word)

ps aux

ps ax

Last 3 are good to see every command

top -c

top -c -n1 -b OR top -cbn1 OR top -cn1b

-c shows full command lines/arguments, n1 runs top once (good for shell & pipes), b batch mode - will not print colors (good for piping)

Order doesn't matter and the args can be combined; just make sure a number follows n: n1 runs once, n2 runs twice (which is not good for shell and piping)

 

In Depth with Head and Tail

SHOW TOP 5 LINES: head -n 5 OR head -n +5 OR head -5

SHOW BOTTOM 5 LINES: tail -n 5 OR tail -n -5 OR tail -5

OMIT TOP 5 LINES: tail -n +6 (think of it as: start at the sixth line and go to the bottom - so in a 7-line doc you will only see the 6th and 7th lines, the bottom 2)

OMIT BOTTOM 5 LINES: head -n -5 (GNU head; think of it as: erase the bottom 5 lines - so in a 7-line doc you will only see the top 2 lines)
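The 7-line-doc examples can be checked directly with seq (note `head -n -5`, the "all but the last 5" form, is a GNU coreutils extension):

```shell
doc="$(seq 1 7)"                                      # a 7-line document

top5="$(printf '%s\n' "$doc" | head -n 5)"            # lines 1-5
bottom5="$(printf '%s\n' "$doc" | tail -n 5)"         # lines 3-7
omit_top5="$(printf '%s\n' "$doc" | tail -n +6)"      # start at line 6: lines 6-7
omit_bottom5="$(printf '%s\n' "$doc" | head -n -5)"   # erase last 5: lines 1-2

printf '%s\n' "$omit_top5"      # 6 and 7
printf '%s\n' "$omit_bottom5"   # 1 and 2
```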

 

Print Columns

diff -y file1 file2

pr -m -tT file1 file2

pr -W200 -m -tT file1 file2

pr -W200 -m -tT <(df) <(df -h)

Note: -W and -w both set the page width (there is surely some difference, but not one we notice here); if both are given, the last one set (right-most) is the one that is used

 

Asking for file to read, but want commandline output

command <(command1) <(command2)

diff -y <(df -h) <(df)

 

BASH 3 and BASH 4 IO Redirections:

* fd0 - stdin: in other words, File Descriptor 0 is stdin

* fd1 - stdout: in other words, File Descriptor 1 is stdout

* fd2 - stderr: in other words, File Descriptor 2 is stderr

* Makes file 0 length, clears file or makes one (like touch)

: > filename

* Same as above: Makes file 0 length, clears file or makes one (like touch)

> filename   

* Redirect stdout to file "filename"

1>filename

* Redirect and append stdout to file "filename"

1>>filename

* Redirect stderr to file "filename"

2>filename

* Redirect and append stderr to file "filename"

2>>filename

* Redirect both stdout and stderr to file "filename"

&>filename

* This operator is now functional, as of Bash 4, final release.

In Bash 3:

cmd > file.txt 2>&1

(Avoid cmd > file.txt 2> file.txt - it opens the file twice and the two streams overwrite each other.)

* Redirect and append stderr and stdout to file "filename"

&>>filename

In Bash3 you would have to do:

cmd >>file.txt 2>&1

UNDERSTANDING THE LAST ONE - COMMENT 1: Redirection statements are evaluated, as always, from left to right. ">>file" means redirect STDOUT to the file, with append - short for "1>>file". "2>&1" means redirect STDERR to "where stdout currently goes". NOTE: the interpretation "redirect STDERR to STDOUT" is wrong.

UNDERSTANDING THE LAST ONE - COMMENT 2: It says "append output (stdout, file descriptor 1) onto file.txt, and send stderr (file descriptor 2) to the same place as fd1"

 

Pipes (only redirect STDOUT of one process to STDIN of another - so STDERR goes away)

Redirect STDOUT and STDERR to the pipe (they will both arrive as STDIN):

# command 2>&1 | command1

# command |& command1   (Bash 4 shorthand for the same thing)

* So by default STDERR is not piped through to the next command. Here is an example of how to combine them (first it shows the problem - stderr is not piped):

* First we need to generate some output that has text on both stdout (fd1) and stderr (fd2): # { echo "stdout"; echo "stderr" 1>&2; }

Inside the braces, 1>&2 redirects that echo's stdout (fd1) to stderr (fd2). Now pipe it to grep without combining the streams:

# { echo "stdout123"; echo "stderr123" 1>&2; } | grep -v std

OUTPUT: stderr123

EXPLANATION: "stdout123" goes to stdout (fd1), "stderr123" goes to stderr (fd2). grep only sees "stdout123" (and we tell grep not to print anything containing "std"), so nothing comes through the pipe, while "stderr123" bypasses the pipe and prints straight to the terminal.

* To combine stdout (fd1) and stderr (fd2), redirect stderr (fd2) to stdout (fd1) using 2>&1.

On the other hand:

# { echo "stdout123"; echo "stderr123" 1>&2; } 2>&1 | grep -v std

OUTPUT is blank (meaning both stdout123 and stderr123 were piped to grep). stdout123 was on fd1 (stdout) and stderr123 was on fd2 (stderr); they got combined (by redirecting fd2 into fd1), then piped to grep - and since a pipe only carries fd1 (stdout), grep saw both.

After writing to both stdout and stderr, 2>&1 redirects stderr back into stdout, so grep sees both strings on stdin and filters out both.

 

Command Substitution

* Use the output of ls as the arguments of echo:

echo $(ls -lrt)

OUTPUT: total 20 -rw-r--r-- 1 root root 11153 Jan 4 12:07 everyl3 -rw-r--r-- 1 root root 1667 Jan 4 12:33 everyl3org -rw-r--r-- 1 root root 781 Jan 4 12:37 everyl3email

* Same thing, Why not simply use:

# echo `ls -lrt`

OUTPUT: total 20 -rw-r--r-- 1 root root 11153 Jan 4 12:07 everyl3 -rw-r--r-- 1 root root 1667 Jan 4 12:33 everyl3org -rw-r--r-- 1 root root 781 Jan 4 12:37 everyl3email

* Or much safer - quote it so the whole output becomes a single argument to echo, newlines preserved (useful with commands other than echo):

# echo “`ls -lrt`”

OUTPUT:

total 20

-rw-r--r-- 1 root root 11153 Jan  4 12:07 everyl3

-rw-r--r-- 1 root root  1667 Jan  4 12:33 everyl3org

-rw-r--r-- 1 root root   781 Jan  4 12:37 everyl3email

* Or, to print literal quotes around the output:

# echo \”`ls -lrt`\”

OUTPUT: "total 20 -rw-r--r-- 1 root root 11153 Jan 4 12:07 everyl3 -rw-r--r-- 1 root root 1667 Jan 4 12:33 everyl3org -rw-r--r-- 1 root root 781 Jan 4 12:37 everyl3email"
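The quoting difference above is easy to demonstrate with a made-up two-line string:

```shell
multi="$(printf 'line one\nline two')"

# Unquoted: the shell word-splits, so the newline collapses into a space
unquoted="$(echo $multi)"

# Quoted: the output is passed as one argument, newline intact
quoted="$(echo "$multi")"

echo "$unquoted"   # line one line two
```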

 

FIND EXEC – run command on a found file

You can use XARGS, or you can use finds own -exec argument. The syntax is:

find [find options] -exec command1 {} \;

find . -type f -exec echo "THIS FILE: {}" \;

find . -type f -exec ls -i {} \;

Everywhere you see a {} is where the filename goes. command1 runs once per found file.

Every exec statement ends with \;. It marks where the command ends. Since ; terminates full commands in bash, we escape it as \; so -exec knows its command is done (and not that the whole find command is done)

You can use whatever find options you need; put the exec at the end.

Don't combine -exec with xargs (the output of the exec command would become the arguments of xargs, which we don't want - command output could be huge, and we just want the filename in xargs). If you need to use the output of one command in another, just use 2 xargs (or, if you want pain, you could use find . -exec comm1 {} \; -print0 | xargs -0 -I {} nextcommand {})

Final note: you don't need to use {}:

find . -type f -exec date \;

That will list the date as many times as it finds a file.
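A safe way to watch -exec run once per file is a throwaway scratch directory (the file names here are made up):

```shell
# Make a scratch directory with two files
tmp="$(mktemp -d)"
touch "$tmp/a.txt" "$tmp/b.txt"

# echo runs once per found file; {} is replaced with each filename
found="$(find "$tmp" -type f -exec echo "THIS FILE: {}" \; | sort)"
printf '%s\n' "$found"

rm -rf "$tmp"   # clean up
```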

 

MULTIPLE FIND EXECS – run multiple commands on a found file

find [find options] -exec command1 {} \; -exec command2 {} \;

THIS WILL LIST THE FILE NAME AND TIME IT FOUND IT AT:

find [find options] -exec echo "FOUND THIS {} FILE AT:" \; -exec date \;

THIS WILL LIST THE FILE AND THEN DELETE IT (there is a better option with xargs that shows the result of each delete; it's in the xargs section):

find [find options] -exec echo "DELETING: {}" \; -exec rm -rf {} \;

This doesn't feed the output of command1 into command2. It simply, as it looks, runs 2 commands per file - since we can't tie commands together like "-exec command1 && command2 \;"

 

XARGS

NOTE: this section shows why you should test every command (use echo instead of rm/cp/mv) before running the version that actually touches data

Use output of one command as the arguments of the next command (next command after the pipe)

SIMPLE EXAMPLE:

# command1

OUTPUT:

thing1

thing2

thing3

thing4

THEN RUNNING:

# command1 | xargs command2

Is the equivalent of running these:

command2 thing1

command2 thing2

command2 thing3

command2 thing4

xargs runs the listed command with the given arguments (the output of the previous command that's piped to xargs) as many times as the first command has lines. Each line becomes an argument for xargs.

xargs is used with find a lot: find lists file names one by one, and xargs then runs commands on them one by one (more efficient than bulk operations - more later)

IF USING WITH FIND, USE -print0 ON FIND and -0 ON XARGS - THEY HANDLE SPACES & SPECIAL CHARS - OR ELSE ERRORS

"-0": if file names contain blank spaces or special characters (including newlines), many commands will not work. This option takes care of file names with blank spaces.

"-print0" in find does the same thing -0 does in xargs

PROBLEM - ERRORS IF YOU RUN IT LIKE THIS: find -type f -iname "*core*" | xargs ls -lisah

NO PROBLEMS:

find -type f -name "*core*" -print0 | xargs -0 ls -lisah
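You can see both the breakage and the fix with a file name containing a space (scratch directory, nothing real touched):

```shell
tmp="$(mktemp -d)"
touch "$tmp/has space.txt"

# Plain pipe: xargs splits on whitespace, so one file becomes two bogus args
broken_args="$(find "$tmp" -type f | xargs -n1 echo | wc -l)"

# -print0 / -0: names are NUL-separated, so the space survives intact
ok_args="$(find "$tmp" -type f -print0 | xargs -0 -n1 echo | wc -l)"

echo "$broken_args"   # 2
echo "$ok_args"       # 1

rm -rf "$tmp"
```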

 

XARGS PUTS THE ARGUMENT AT THE END - WHAT IF YOU NEED IT ELSEWHERE, OR NEED TO REUSE IT

-I makes occurrences of the input argument go where you tell them. First specify what the replace-string is; it can be more than 1 char - here it's {}. Since we specified -I {}, everywhere you see {} is where the command argument goes instead of at the end.

EXAMPLE1:

# find . -print0 | xargs -0 -I {} echo THIS FILE {} IS EQUAL TO THIS FILE {}

EXAMPLE2:

# find . -name "*.bak" -print0 | xargs -0 -I {} mv {} ~/old.files

{} as the argument list marker

 

NOTE: SOMETIMES IT'S BETTER TO USE -I {} THAN TO LET THE ARGUMENT FALL AT THE END

NOT WHAT WE WANT - EVERY ARGUMENT LANDS ON ONE LINE:

# find . -type f -print0 | xargs -0 echo "WOW"

WOW ./everyl3 ./everyl3org ./everyl3email

WORKS – GOOD:

# find . -type f -print0 | xargs -0 -I {} echo "WOW {}"

WOW ./everyl3

WOW ./everyl3org

WOW ./everyl3email

# find . -type f -print0 | xargs -0 -I {} echo "WOW" {}

WOW ./everyl3

WOW ./everyl3org

WOW ./everyl3email

 

XARGS multiple commands:

cat a.txt | xargs -I % sh -c 'command1; command2;'

NOTE: Use as many commands as you want; we needed to invoke /bin/sh (/bin/bash would work too) to run multiple commands - sh's -c argument lets you pass commands to the shell as a string

NOTE: You can use % for the argument in the commands; everywhere there is a %, that's where a line of a.txt goes as the argument. It's the same as the -I {} in the previous examples, just -I %. So % has the same effect {} had.

NOTE: This is a useless use of cat - xargs can take its input from STDIN, and the redirection can go at the front or the back of the command, it doesn't matter:

< a.txt xargs -I % sh -c 'command1; command2; …'
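Here stdin stands in for a.txt; each input line is substituted at both % spots, and both commands run per line:

```shell
# Two commands run per input line; % is replaced in each command
out="$(printf 'one\ntwo\n' | xargs -I % sh -c 'echo "start %"; echo "end %"')"
printf '%s\n' "$out"
```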

 

 

XARGS BULK OPERATION: Remember I said it's more efficient on certain bulk operations

XARGS FIXING OVERLOADING EXAMPLE1: Avoiding errors and resource hungry problems with xargs and find combo on a mass delete

Imagine you're deleting millions of files, big and small, all sitting in the directory /sucks/

NOTE: you can use any of the methods below (marked with *); my favorites are the bottom two - they delete one by one (so no memory or overloading issues) and they list each deleted file (and whether the delete failed or succeeded)

NOTE: The non-xargs methods come first

* Well you could delete em like this:

# rm -rf /sucks/

* However the system could freeze, and you won't know when each file is deleted; an alternative is to use xargs with find (or just find by itself)

# find /sucks -delete

# find /sucks -exec rm -rf {} \;

* Have it tell you what file is deleting:

# find /sucks -exec echo "DELETING: {}" \; -exec rm -rf {} \;

* OR first files then empty dirs & everything else:

# find /sucks -type f -exec rm -rf {} \;

# find /sucks -exec rm -rf {} \;

* Or do the same thing and have it tell you what file it's deleting:

# find /sucks -type f -exec echo "DELETING FILE: {}" \; -exec rm -rf {} \;

# find /sucks -exec echo "DELETING: {}" \; -exec rm -rf {} \;

* Or with XARGS:

# find /sucks -print0 | xargs -0 rm -rf

* Or with Xargs list and delete file:

find /sucks -print0 | xargs -0 -I {} sh -c 'echo -n "DELETING {}:"; rm -rf {}'

* Or with XARGS files first, then everything else:

# find /sucks -type f -print0 | xargs -0 rm -rf

# find /sucks -print0 | xargs -0 rm -rf

* Delete with xargs, listing each file before the delete and the result after - multiple commands via sh -c:

find /sucks -print0 | xargs -0 -I {} sh -c 'echo -n "DELETING {}:"; rm -rf {} && echo " SUCCESS" || echo " FAILED WITH EXIT CODE $?"'

* Same, but delete files first and then everything else:

find /sucks -type f -print0 | xargs -0 -I {} sh -c 'echo -n "DELETING FILE {}:"; rm -rf {} && echo " SUCCESS" || echo " FAILED WITH EXIT CODE $?"'

find /sucks -print0 | xargs -0 -I {} sh -c 'echo -n "DELETING {}:"; rm -rf {} && echo " SUCCESS" || echo " FAILED WITH EXIT CODE $?"'

XARGS FIXING OVERLOADING EXAMPLE2: Avoiding errors and resource hungry problems with xargs and find combo on a mass copy

To copy all media files to another location, /backup/iscsi, you can use cp as follows:

# cp -r -v -p /share/media/mp3/ /backup/iscsi/mp3

However, the cp command may fail if an error occurs, such as when the number of files is too large for cp to handle. xargs in combination with find handles such an operation nicely; it is more resource-efficient and will not halt with an error:

# find /share/media/mp3/ -type f -name "*.mp3" -print0 | xargs -0 -r -I file cp -v -p file --target-directory=/backup/iscsi/mp3

 

Duplicates

Show Duplicates (output will be sorted):

command | sort | uniq -d

Remove Duplicates (output will be sorted):

command | sort | uniq

Remove Duplicates (not sorted – order preserved):

command | awk '!x[$0]++'

This command simply tells awk which lines to print. The variable $0 holds the entire contents of a line, and square brackets are array access. For each line of the file, the node of the array x is incremented, and the line is printed only if the content of that node was not (!) previously set.
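Side-by-side on made-up input, showing that awk keeps the original order while sort | uniq does not:

```shell
dupes="$(printf 'b\na\nb\nc\na\n')"

# sort | uniq removes duplicates but reorders the lines
sorted_unique="$(printf '%s\n' "$dupes" | sort | uniq)"

# awk keeps only the first occurrence of each line, order preserved
order_kept="$(printf '%s\n' "$dupes" | awk '!x[$0]++')"

printf '%s\n' "$order_kept"   # b, a, c - one per line
```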

 

While adding drives watch dmesg and cat /proc/partitions

If you have mdev and no watch command (those go hand in hand :.):

while true; do echo "-----"`date`"-----"; mdev -s; (dmesg | tail -n 10; cat /proc/partitions | egrep -v "[0123456789]$"; ) | egrep "^[^$#]"; sleep 3; done;

If you have the better udev (basically an automatic mdev that doesn't need to be called), use the same loop without the mdev -s:

while true; do echo "-----"`date`"-----"; (dmesg | tail -n 10; cat /proc/partitions | egrep -v "[0123456789]$"; ) | egrep "^[^$#]"; sleep 3; done;

 

How to Pipe STDERR and not STDOUT

Note: typically it's STDOUT that gets piped.

So we tell STDOUT to go to hell (/dev/null), and tell STDERR to go where STDOUT usually goes:

command1 2>&1 > /dev/null | command2

Now command2 will receive command1 stderr

Normal operations are like this:

command1 | command2

Here command2 receives only command1's stdout (not stderr)
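A quick check that only stderr makes it through the pipe (mixed is a hypothetical helper emitting one line on each stream):

```shell
# Emit one line on stdout and one on stderr
mixed() { echo "to-stdout"; echo "to-stderr" 1>&2; }

# 2>&1 points stderr at the pipe first; then > /dev/null discards stdout
only_err="$(mixed 2>&1 > /dev/null | cat)"

echo "$only_err"   # to-stderr
```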

 

Bash Time Diff Script

date11=$(date +"%s")

<Do something>

date22=$(date +"%s")

diff11=$(($date22-$date11))

echo "$(($diff11 / 60)) minutes and $(($diff11 % 60)) seconds elapsed."

 

 

ifstat

Continuous scrolling output of interface bandwidth

(STEP 0) INSTALL

apt-get install ifstat

(STEP 1) COMMAND LINE

ifstat -bat

-b bytes instead of bits

-a every interface

-t time stamps

 

BYOBU (some will work in screen)

CONTROL-a %  (in other words CONTROL-a SHIFT-5) - splits the current region vertically, makes a new bash session in it, and puts you into the new region

CONTROL-a |  (in other words CONTROL-a SHIFT-\) - splits horizontally, makes a new bash session in it, and puts you into the new region (in old versions this splits | )

CONTROL-a S  (in old versions this splits -----------)

CONTROL-a TAB  move to next region

CONTROL-a "  select a session with the arrow keys; enter goes there

CONTROL-a X  closes sessions

CONTROL-a c  creates a new session (new tab) (note a tab can have multiple sessions split either way)

CONTROL-a A  that's a capital A (in other words CONTROL-a SHIFT-a) - name a tab

CONTROL-a ?  to see help; ENTER to quit out of help

CONTROL-a {  (CONTROL-a SHIFT-[) moves the current window to another window

CONTROL-a [  copy mode - scroll back with the arrow keys; ENTER (without CONTROL-a) gets out of copy mode; repeat the full key command to do it again

CONTROL-a ]  copy mode - scroll forward with the arrow keys; ENTER (without CONTROL-a) gets out of copy mode; repeat the full key command to do it again

 

SCREEN (some will work in BYOBU)

CONTROL-A then c - create new shell

CONTROL-A then S - split window with a horizontal line ---

CONTROL-A then | - split window with a vertical line |

CONTROL-A then TAB - move over to the next window

CONTROL-A then " - see all windows

CONTROL-A then [ - copy mode (up/down/left/right, page up/page down move around the window)

CONTROL-A then X - close window

CONTROL-A then D - detach (processes keep running)

 

DNS with dig and nslookup

Find this                  with DIG                                  with NSLOOKUP

Generic syntax             dig @server type name                     nslookup -q=type name server

A records for somehost     dig somehost.example.com                  nslookup somehost.example.com

MX records for somehost    dig mx somehost.example.com               nslookup -q=mx somehost.example.com

SOA records for somehost   dig soa somehost.example.com              nslookup -q=soa somehost.example.com

PTR records for a.b.c.d    dig -x a.b.c.d                            nslookup a.b.c.d

Any records for somehost   dig any somehost.example.com              nslookup -q=any somehost.example.com

Same from server2          dig @server2 any somehost.example.com     nslookup -q=any somehost.example.com server2

 

WPA and WPA2 quick connect

SHORT WAY:

wpa_supplicant -B -i [int] -c <(wpa_passphrase [essid] [passphrase])

LONG WAY:

wpa_passphrase [ssidname] [ssidpassword] > /etc/wpa_supplicant/wpa_supplicant.conf

TO CONNECT:

wpa_supplicant -B -i [int] -c /etc/wpa_supplicant/wpa_supplicant.conf

 

Capture Network Traffic At Box1 And Send to Box2 (Current Shell is in Box1)

Box1: localhost, Box2: forge.remotehost.com, Interface name is ppp1 but typical and common is eth0

NOT COMPRESSED AT DESTINATION

tcpdump -i ppp1 -w –  |  ssh forge.remotehost.com -c arcfour,blowfish-cbc -C -p 50005 “cat – > /tmp/result.pcap.gz”

COMPRESS TO GZIP AT DESTINATION

tcpdump -i ppp1 -w –  |  ssh forge.remotehost.com -c arcfour,blowfish-cbc -C -p 50005 “cat – | gzip > /tmp/result.pcap.gz”

NOTE: selected fastest(crappiest) encryption with “-c arcfour,blowfish-cbc” and compress with “-C” so that ssh and gzip can keep up with capture. In reality it probably will over buffer.

YOU CANNOT AUTOMATE THESE tcpdump WITH & UNLESS YOU USE SSH KEYS (INSTEAD OF PASSWORDS)

NOTE: After you ungzip, You might need to strip top line as its a header (containig the word gzip or some other garbage)

gzip -d ppp3-to-danny.pcap.gz

tail -n +2 /tmp/ppp3-to-danny.pcap.gz > /tmp/ppp3-to-danny1.pcap.gz

 

PUT THIS IN START SCRIPT LIKE .bash_profile or .bashrc TO GET BEST HISTORY

shopt -s histappend

HISTFILESIZE=1000000

HISTSIZE=1000000

HISTCONTROL=ignoredups

HISTTIMEFORMAT='%F %T '

shopt -s cmdhist

PROMPT_COMMAND='history -a'

 

REMOVE ALL HISTORY

unset PROMPT_COMMAND

rm -f $HISTFILE

unset HISTFILE

 

LOOKING AT EVERY DRIVE'S (SD or HD) STATS

apt-get install smartmontools

for i in /dev/[sh]d[abcdefghijklmnopqrstuvwxyz]; do echo "===DRIVE: $i==="; smartctl -a $i | egrep -i "serial|model|capacity|reallocated_sec|ata error|power_on"; done;

NOTE: drives keep stats automatically; you can run tests while a drive is in use, slow the drive down to test, or stop the drive for other tests - all in the smartctl man page

 

SIMPLE WHILE LOOP

while true; do COMMANDS; done

while true; do cat /proc/mdstat; usleep 1000000; done

while true; do date; cat /proc/mdstat; sleep 10; done

NOTE ON UNITS: usleep is in microseconds (1 million microseconds is 1 second); sleep is in seconds

With usleep - since it's in microseconds - "usleep 1000000" is the same as "sleep 1"

microsecond can be written as us or μs (the correct Greek format, using the letter mu)

 

QUICK RESTARTABLE RSYNC SCRIPT:

#!/bin/bash

# If rsync fails its exit code is not 0 so it restarts back at the loop

# If exit code is 0 then rsync will stop the script

while [ 1 ]

do

killall rsync

rsync -av --progress --stats --human-readable /c /mnt/dest/nasbackup/

if [ $? = 0 ] ; then

echo

echo "#########################"

echo

echo "RSYNC SUCCESSFUL"

exit

else

echo

echo "#########################"

echo

echo "RSYNC FAILED, RESTARTING IN 180 SECONDS"

echo

sleep 180

fi

done

 

CAT PV and SSH to Transfer Files:

NOTE: Imagine 50505 is the SSH port instead of the regular 22 (just showing it in case you use another port) - big P for the scp port, little p for the ssh port

NOTE: If you use ssh you need to specify the filename as it will be saved on the destination; with scp that's optional - you can just give scp the folder to dump into.

scp -P 50505 source.txt username@destinationserver:/filedst/

OR RENAME AS YOU SAVE: scp -P 50505 source.txt username@destinationserver:/filedst/dest.txt

cat file | ssh -p SSHPORT username@destinationserver "cat - > /filedst/file"

 

 

TAR PV and SSH to Transfer folders

* WITH COMPRESSION/DECOMPRESSION:

VIA SSH WITHOUT PV: # cd /srcfolder; tar -czf - . | ssh -p 50005 root@destination.com "tar -xzvf - -C /dstfolder"

VIA SSH WITH PV - gives a progress bar: # cd /srcfolder; tar -czf - . | pv -s `du -sb . | awk '{print $1}'` | ssh -p 50005 root@destination.com "tar -xzvf - -C /dstfolder"

VIA SSH WITH PV - gives a progress bar & FASTEST SSH ENCRYPTION: # cd /srcfolder; tar -czf - . | pv -s `du -sb . | awk '{print $1}'` | ssh -c arcfour,blowfish-cbc -p 50005 root@destination.com "tar -xzvf - -C /dstfolder"

NOTE: You could add -C on the ssh; however, there is no benefit logically speaking, since we already did all the possible compression at the tar level

NOTE: Also note that this way the destination will have the following folder structure /dstfolder

* WITHOUT COMPRESSION/DECOMPRESSION:

VIA SSH WITHOUT PV: # cd /srcfolder; tar -cf - . | ssh -p 50005 root@destination.com "tar -xvf - -C /dstfolder"

VIA SSH WITH PV - gives a progress bar: # cd /srcfolder; tar -cf - . | pv -s `du -sb . | awk '{print $1}'` | ssh -p 50005 root@destination.com "tar -xvf - -C /dstfolder"

VIA SSH WITH PV - gives a progress bar & FASTEST SSH ENCRYPTION: # cd /srcfolder; tar -cf - . | pv -s `du -sb . | awk '{print $1}'` | ssh -c arcfour,blowfish-cbc -p 50005 root@destination.com "tar -xvf - -C /dstfolder"

NOTE: Also note that this way the destination will have the following folder structure /dstfolder

* COMPRESSION FROM SSH (-C) INSTEAD OF TAR:

VIA SSH WITHOUT PV: # cd /srcfolder; tar -cf - . | ssh -C -p 50005 root@destination.com "tar -xvf - -C /dstfolder"

VIA SSH WITH PV - gives a progress bar: # cd /srcfolder; tar -cf - . | pv -s `du -sb . | awk '{print $1}'` | ssh -C -p 50005 root@destination.com "tar -xvf - -C /dstfolder"

VIA SSH WITH PV - gives a progress bar & FASTEST SSH ENCRYPTION: # cd /srcfolder; tar -cf - . | pv -s `du -sb . | awk '{print $1}'` | ssh -C -c arcfour,blowfish-cbc -p 50005 root@destination.com "tar -xvf - -C /dstfolder"

NOTE: Also note that this way the destination will have the following folder structure /dstfolder

 

CHROOTING – The mounts before

FIRST MOUNT THE newroot:

mount /dev/sda1 /newroot

HERE ARE THE SYSTEM MOUNTS:

mount -t proc none /newroot/proc

mount -o bind /dev /newroot/dev

mount -t devpts none /newroot/dev/pts

mount -t sysfs none /newroot/sys

SOMETIMES YOU WILL WANT /run:

mkdir /run

mount -t tmpfs tmpfs /run

mount --bind /run /newroot/run

CHROOT TIME (and run the bash shell /bin/bash instead of the default old shell /bin/sh):

chroot /newroot /bin/bash

WHEN YOU EXIT OUT OF CHROOT

umount /newroot/run

umount /newroot/sys

umount /newroot/dev/pts

umount /newroot/dev

umount /newroot/proc

umount /newroot

 

BTRFS MOUNTS ANALYSIS QUICK SCRIPT

Just copy paste it and run it or make it into a script:

#!/bin/bash

echo "================="

echo mounts

echo "================="

echo

mount

echo

echo "========================================"

echo "The following BTRFS volumes are mounted"

echo "========================================"

btrfs filesystem show --all-devices

echo

echo "===OR SIMPLY:==="

btrfs filesystem show --all-devices | egrep "/dev/" | awk '{print $8}'

echo

for i in `btrfs filesystem show --all-devices | egrep "/dev/" | awk '{print $8}'`

do

echo "---$i is mounted---"

echo "df | egrep $i"

df | egrep $i

echo

done

echo "=========================="

echo "btrfs filesystem df <path>"

echo "=========================="

echo

# THIS PART IS AWESOME 1 START

for i in `btrfs filesystem show --all-devices | egrep "/dev/" | awk '{print $8}'`

do

echo "===FILESYSTEM DFs FOR $i==="

echo

df | egrep $i

for z in `df | egrep $i | awk '{print $6}'`

do

echo

echo "---btrfs filesystem df $z---"

echo

btrfs filesystem df $z

echo

done

done

# THIS PART IS AWESOME 1 END

echo "=============================="

echo "btrfs subvolume list -a <path>"

echo "=============================="

# THIS PART IS AWESOME 2 START

for i in `btrfs filesystem show --all-devices | egrep "/dev/" | awk '{print $8}'`

do

echo "===SUBVOLUMES FOR $i==="

echo

df | egrep $i

for z in `df | egrep $i | awk '{print $6}'`

do

echo

echo "---btrfs subvolume list -a $z---"

echo

btrfs subvolume list -a $z

echo

done

done

# THIS PART IS AWESOME 2 END

echo "================="

 

TO SELECT FILES WITHIN DATE RANGE:

* TO SELECT A RANGE:

touch --date "2007-01-01" /tmp/start

touch --date "2008-01-01" /tmp/end

find /data/images -type f -newer /tmp/start -not -newer /tmp/end
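A scratch-directory check of the marker-file trick (dates and names are made up); the markers are excluded by name, since the end marker would otherwise match its own range:

```shell
tmp="$(mktemp -d)"

touch --date "2007-01-01" "$tmp/start"        # range start marker
touch --date "2008-01-01" "$tmp/end"          # range end marker
touch --date "2007-06-15" "$tmp/inside.txt"   # inside the range
touch --date "2006-06-15" "$tmp/before.txt"   # outside (too old)

hits="$(find "$tmp" -type f -newer "$tmp/start" -not -newer "$tmp/end" \
        -not -name start -not -name end)"
echo "$hits"   # only inside.txt

rm -rf "$tmp"
```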

 

SUM UP DATA SIZE BY DAY:

* FOR CURRENT FOLDER:

find . -type f -print0 | xargs -0 ls -l --time-style=long-iso | awk '{sum[$6]+= $5}END{for (s in sum){print sum[s],s;}}' | sort -k2 | column -t

* FOR CURRENT FOLDER BUT A CERTAIN DATE RANGE - established above - AND NOT INCLUDING A CERTAIN FILE:

find . -type f -newer tmpstart -not -newer tmpend -not -name "Folder.cfg" -print0 | xargs -0 ls -l --time-style=long-iso | awk '{sum[$6]+= $5}END{for (s in sum){print sum[s],s;}}' | sort -k2 | column -t

 

SUM UP DATA THAT IS SELECTED:

* FOR CURRENT FOLDER:

find . -type f -ls | awk '{total += $7} END {print total}'

* FOR CURRENT FOLDER BUT A CERTAIN DATE RANGE - established above - AND NOT INCLUDING A CERTAIN FILE:

find . -type f -newer tmpstart -not -newer tmpend -not -name "Folder.cfg" -ls | awk '{total += $7} END {print total}'

 

AWESOME SCRIPT TO COUNT UP FILES BY EXTENSION:

find . -type f 2>/dev/null | sed 's|^\./\([^/]*\)/|\1/|; s|/.*/|/|; s|/.*.\.| |p; d' | sort | uniq -ic

BIG VERSION:

find . -type f 2>/dev/null \
    | sed 's|^\./\([^/]*\)/|\1/|; s|/.*/|/|; s|/.*.\.| |p; d' \
    | sort | uniq -ic \
    | sort -b -k2,2 -k1,1rn \
    | awk '
BEGIN{
    sep = "+-------+------+--------+"
    print sep "\n| count | ext  | folder |\n" sep
}
{ printf("| %5d | %-4s | %-6s |\n", $1, $3, $2) }
END{ print sep }'

 

DELETING EVERYTHING IN CERTAIN FOLDER

FIRST MAKE SURE YOU'RE IN THE RIGHT FOLDER: cd /folder_which_will_have_everything_in_it_deleted

Deleting with the following command:

# rm -rf *

This might fail if you have too many files in the folder

It will say "Argument list too long" or something like that

Here is an option to delete all the files

# find . -type f -exec echo -n {} \; -exec rm -rf {} \; -exec echo " DELETED" \;

For every file it lists it, deletes it, and tells you DELETED after

To delete everything not just files

# find . -exec echo -n {} \; -exec rm -rf {} \; -exec echo " DELETED" \;

Or maybe do it like this files first and then directories and everything else

# find . -type f -exec echo -n {} \; -exec rm -rf {} \; -exec echo " DELETED" \;

# find . -exec echo -n {} \; -exec rm -rf {} \; -exec echo " DELETED" \;

 

MORE INFO ON SCSI DEVICES

apt-get install lsscsi

lsscsi -sgdlp

 

MDADM RAID DEFAULT SPEED LIMIT MAX AND MIN (in case you changed them)

echo 200000 > /proc/sys/dev/raid/speed_limit_max

echo 1000 > /proc/sys/dev/raid/speed_limit_min

 

ZFS – Checking Arc Stats

Make sure you have the sunwmdb package, which enables dynamic reading of ARC statistics.

If you have a Solaris variant with Debian packaging:

apt-get update

apt-get install sunwmdb

To check Arc:

echo "::arc" | mdb -k

 

ZFS – To Set Arc Meta Limit to bigger value:

Need to have mdb (from package sunwmdb)

8 gig: 0x200000000 = 8 GiB exactly (8.59 GB in decimal units)

echo arc_meta_limit/Z 0x200000000 | mdb -kw

9 gig: 0x240000000 = 9 GiB exactly (9.66 GB)

echo arc_meta_limit/Z 0x240000000 | mdb -kw

10 gig: 0x280000000 = 10 GiB exactly (10.74 GB)

echo arc_meta_limit/Z 0x280000000 | mdb -kw

13.5 gig: 0x360000000 = 13.5 GiB exactly (14.5 GB)

echo arc_meta_limit/Z 0x360000000 | mdb -kw

 

Tar All Logs and Send to FTP Server

TAR ALL LOGS INTO A FILE IN TMP THAT WILL HAVE DATE:

tar -zcvf /tmp/all-logs-$(date +%F-%T | tr ":" "-").tar.gz /etc /var/log

FTP SYNTAX (NOTE: you can use other methods to transfer, not just ftp - rsync, pv, cat, tar, scp, ssh, gzip):

ncftpput -u username ftpiporhostname remotelocation localfile

Remotelocation has to be a folder location that exists (well, / always exists, and that's where I will dump)

EXAMPLE:

ncftpput -u bhbmaster ftp.drivehq.com / both.tar.gz

ncftpput -u bhbmaster 66.220.9.50 / both.tar.gz

 

COPYING PARTITION TABLES BETWEEN DRIVES (sfdisk for MBR and sgdisk for GPT)

MBR – sfdisk

To backup an MBR partition table using 'sfdisk': # sfdisk -d /dev/sda > sda.table

To restore an MBR partition table from backup using 'sfdisk': # sfdisk /dev/sda < sda.table

Clone with the backup file: # sfdisk /dev/sdb < sda.table

Clone the partition table from SDA to SDB (copy from SDA to SDB): # sfdisk -d /dev/sda | sfdisk /dev/sdb

Confirm by listing (printing) the partition table of the source: # sfdisk -l /dev/sda

Confirm by listing (printing) the partition table of the destination: # sfdisk -l /dev/sdb

NOTE: source and destination partition tables should match after the clone (obviously)

NOTE: sfdisk -d is for dump, -l is for list

GPT – sgdisk

To backup a GPT partition table using 'sgdisk': # sgdisk -b sdX.gpt /dev/sdX

To restore a GPT partition table from a backup file using 'sgdisk': # sgdisk -l sdX.gpt /dev/sdX

To clone a partition table from one drive to another using 'sgdisk': # sgdisk -R=Destination Source

NOTE: the destination comes first (not the source), unlike most commands where the source comes first. Keep that in mind and don't mess up the command

NOTE: sometimes the command doesn't go through, so try with and without the =, and watch the spacing (sometimes it's best not to include a space)

Other likeable forms:

# sgdisk -R=/dev/sdb /dev/sda

# sgdisk -R/dev/sdb /dev/sda

After cloning GPT tables you will need to randomize the GUID of the destination:

# sgdisk -G /dev/sdb

Confirm by listing (printing) the partition table of the source: # sgdisk -p /dev/sda

Confirm by listing (printing) the partition table of the destination: # sgdisk -p /dev/sdb

NOTE: -R is for replicate (also known as copy or clone), -G is for (randomizing the) GUID, -p is for print
