14Jan/14

LINUX CHEAT SHEET OF DOOM

Open STDIN in VIM

command | vim -

Or:

vim <(command)

Examples:

echo "stuff" | vim -

vim <(df)

Vim will show the open buffer as stdin (instead of a filename)

NOTE: can use -R for read only: command | vim -R -

OR: vim -R <(command)

When you save it, you get to pick a file name

:w /some/save/location/text.txt

 

VIM: Save file by appending _BACKUP (or any word) to the current name

:w %:p_BACKUP

Note: in vim % means the current filename, and the :p modifier expands it to the full path

To help remember: think 'p' like 'path', so %:p is the full path of the current file
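
NOTE: to preview what %:p will expand to before saving (just a sanity check, not required; expand() is standard Vim script):

:echo expand('%:p')

:echo expand('%:p') . '_BACKUP'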

 

VIM Tricks

gg  Go to Top

G Go to Bottom (Shift-g)

dgg to delete to the top

dG to delete to the bottom

 

Remove Empty Lines

command | egrep -v "^$"

 

LINUX EXIT CODES:

Exit status 0 is good (the command completed with no problem, AKA SUCCESS); anything else is an error (look the code up in the command's man page – some commands build the code bitwise, adding/ORing error flags together)

command

echo "COMMANDS exit code is: $?"

Or can combine

command && echo "command worked" || echo "command didn't work, exit code was $?"

 

BASH

command1 && command2 – this runs command2 only if command1 is successful (useful for success messages or continuing to the next part of a program)

command1 || command2 – this runs command2 only if command1 failed (useful for fail messages or exiting if something happens)

command1 && command2 || command3 – this will run command1; if it fails it will run command3, and if command1 is successful it will run command2. Note that command3 will also run if command2 fails.

Usually command2 and command3 are written so that doesn't happen, because we use "echo" commands for those, and echo always returns exit status 0 (success)
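
A quick demo of that gotcha, using true/false as stand-in commands (true always succeeds, false always fails):

true && false || echo "command3 ran even though command1 succeeded, because command2 (false) failed"

true && echo "command2 ran" || echo "command3 does not run here"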

 

Replace every from-word to to-word

command-with-output | sed "s/from-word/to-word/g"

echo "Your face is a from-word" | sed 's/from-word/to-word/g'

 

To Lower

command | tr '[:upper:]' '[:lower:]'

tr '[:upper:]' '[:lower:]' < input.txt > output.txt

y="THIS IS a TeSt"

echo "${y,,}"

dd if=input.txt of=output.txt conv=lcase

awk '{ print tolower($0) }' input.txt > output.txt

perl -pe '$_= lc($_)' input.txt > output.txt

sed -e 's/\(.*\)/\L\1/' input.txt > output.txt

 

To Upper

command | tr '[:lower:]' '[:upper:]'

tr '[:lower:]' '[:upper:]' < input.txt > output.txt

y="this Is A test"

echo "${y^^}"

dd if=input.txt of=output.txt conv=ucase

awk '{ print toupper($0) }' input.txt > output.txt

perl -pe '$_= uc($_)' input.txt > output.txt

sed -e 's/\(.*\)/\U\1/' input.txt > output.txt

 

LS SORT

ls -lisah (regular ls – easy to remember lisa with h, think simpsons)

ls -lisahSt (easy to remember think lisah street – S for sort and t for time – so sorts on time)

ls -lisahS (easy to remember think lisahs – like many lisas from simpsons – so it will sort because of the S by filesize or weight because so many lisas are around)

NOTE: I know I am ridiculous

 

iftop best way to run

iftop -nNBP -i eth0

Then hit Shift-T

Hitting T (Shift-t) shows the cumulative totals column

n – won't resolve hostnames, N – won't resolve port names, B – everything in bytes instead of bits, P – shows the ports

Columns on the right side of the iftop output/interface are then:

TOTALS, 2 sec, 10 sec, 40 sec speeds

 

PS and top

ps

ps -ef --forest (remember forest is 1 r and 1 s)

ps awwfux  (think ahhh F-word)

ps aux

ps ax

Last 3 are good to see every command

top -c

top -c -n1 -b OR top -cbn1 OR top -cn1b

-c shows full command lines/arguments, n1 runs top once so it's good for shells & pipes, b is batch mode (no colors/escape codes) so it's good for piping

Order doesn't matter and you can combine the args, just make sure a 1 follows the n (run once); you can put 2 but then it will run 2 times, so it's not good for shells and piping

 

In Depth with Head and Tail

SHOW TOP 5 LINES: head -n 5 OR head -n +5 OR head -5

SHOW BOTTOM 5 LINES: tail -n 5 OR tail -n -5 OR tail -5

OMIT TOP 5 LINES: tail -n +6 (THINK OF IT LIKE: START AT THE SIXTH LINE AND GO TO THE BOTTOM. SO IN A 7 LINE DOC YOU WILL ONLY SEE THE 6TH AND 7TH LINES – THE BOTTOM 2 LINES)

OMIT BOTTOM 5 LINES: head -n -5 (THINK OF IT LIKE: I WILL ERASE THE BOTTOM 5 LINES. SO IN A 7 LINE DOC YOU WILL ONLY SEE THE TOP 2 LINES)
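
A quick way to convince yourself, using seq to stand in for a 7 line file (note: the negative count, head -n -5, needs GNU head; BSD head doesn't take it):

seq 7 | tail -n +6

OUTPUT: 6 and 7 (top 5 omitted)

seq 7 | head -n -5

OUTPUT: 1 and 2 (bottom 5 omitted)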

 

Print Columns

diff -y file1 file2

pr -m -tT file1 file2

pr  -W200 -m -tT -help file1 file2

pr  -W200 -m -tT -help <(df) <(df -h)

Note: W and w do the same thing, set the page width (I'm sure there is a difference but not one we notice); if both W and w are used then the last one set (right-most) is the one that will be read

 

Asking for file to read, but want commandline output

command <(command1) <(command2)

diff -y <(df -h) <(df)

 

BASH 3 and BASH 4 IO Redirections:

* fd0-stdin: in other words File Descriptor 0 is stdin

* fd1-stdout: in other words File Descriptor 1 is stdout

* fd2-stderr: in other words File Descriptor 2 is stderr

* Makes file 0 length, clears file or makes one (like touch)

: > filename

* Same as above: Makes file 0 length, clears file or makes one (like touch)

> filename   

* Redirect stdout to file “filename”

1>filename

* Redirect and append stdout to file “filename”

1>>filename

* Redirect stderr to file “filename”

2>filename

* Redirect and append stderr to file “filename”

2>>filename

* Redirect both stdout and stderr to file “filename”

&>filename

* This operator is now functional, as of Bash 4, final release.

In BASH3:

cmd > file.txt 2> file.txt

OR:

cmd > file.txt 2>&1

* Redirect and append stderr and stdout to file “filename”

&>>filename

In Bash3 you would have to do:

cmd >>file.txt 2>&1

UNDERSTANDING LAST ONE – COMMENT 1: Redirection statements are evaluated, as always, from left to right. ">>file" means redirect STDOUT to file, with append, short for "1>>file". "2>&1" means redirect STDERR to "where stdout currently goes". NOTE: The interpretation "redirect STDERR to STDOUT" is wrong.

UNDERSTANDING LAST ONE – COMMENT 2: It says "append output (stdout, file descriptor 1) onto file.txt and send stderr (file descriptor 2) to the same place as fd1"
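
A small demo of why the order matters (ls on a path that doesn't exist, so it only produces stderr):

ls /nonexistent > out.txt 2>&1     (out.txt gets the error message, nothing hits the terminal)

ls /nonexistent 2>&1 > out.txt     (the error still hits the terminal, out.txt ends up empty)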

 

Pipes (only redirect STDOUT of one process to STDIN of another – so STDERR goes away)

Redirect STDOUT and STDERR to the pipe (they will both turn into STDIN):

# command 2>&1 | command1

# command |& command1   (bash 4 shorthand for the line above)

* So by default STDERR is not piped through to the next command. Here is an example showing how to combine them (first it shows you the problem, how stderr is not piped):

* First we need to generate some output that has text in both stdout (fd1) and stderr (fd2): # { echo "stdout"; echo "stderr" 1>&2; }

Inside the braces, "1>&2" redirects that echo's stdout (fd1) to stderr (fd2), e.g.:

# { echo "stdout123"; echo "stderr123" 1>&2; } | grep -v std

OUTPUT: stderr123

EXPLANATION: "stdout123" goes to stdout (fd1), "stderr123" goes to stderr (fd2). grep only sees "stdout123" (and we tell grep not to print anything that has "std" in it), hence "stderr123" prints to the terminal.

* To combine stdout (fd1) and stderr (fd2) you would redirect stderr (fd2) to stdout (fd1) using 2>&1.

On the other hand:

# { echo "stdout123"; echo "stderr123" 1>&2; } 2>&1 | grep -v std

OUTPUT is blank (meaning that both stdout123 and stderr123 were piped to grep). stdout123 was on fd1 (stdout) and stderr123 was on fd2 (stderr); they got combined (by merging/redirecting fd2 into fd1), then piped to grep, and since a pipe only sends fd1 (stdout), grep saw both outputs.

After writing to both stdout and stderr, 2>&1 redirects stderr back to stdout and grep sees both strings on stdin, thus filters out both.

 

Command Substitution

* Use output of ls as arguments of echo:

echo $(ls -lrt)

OUTPUT: total 20 -rw-r--r-- 1 root root 11153 Jan 4 12:07 everyl3 -rw-r--r-- 1 root root 1667 Jan 4 12:33 everyl3org -rw-r--r-- 1 root root 781 Jan 4 12:37 everyl3email

* Same thing with backticks, why not simply use:

# echo `ls -lrt`

OUTPUT: total 20 -rw-r--r-- 1 root root 11153 Jan 4 12:07 everyl3 -rw-r--r-- 1 root root 1667 Jan 4 12:33 everyl3org -rw-r--r-- 1 root root 781 Jan 4 12:37 everyl3email

* Or much safer – quote the substitution so the output of ls is passed as a single argument and the newlines are preserved (useful with commands other than echo):

# echo "$(ls -lrt)"

OUTPUT:

total 20

-rw-r--r-- 1 root root 11153 Jan  4 12:07 everyl3

-rw-r--r-- 1 root root  1667 Jan  4 12:33 everyl3org

-rw-r--r-- 1 root root   781 Jan  4 12:37 everyl3email

* Or to get literal quotes printed around the output:

# echo \"$(ls -lrt)\"

OUTPUT: "total 20 -rw-r--r-- 1 root root 11153 Jan 4 12:07 everyl3 -rw-r--r-- 1 root root 1667 Jan 4 12:33 everyl3org -rw-r--r-- 1 root root 781 Jan 4 12:37 everyl3email"

 

FIND EXEC – run command on a found file

You can use XARGS, or you can use find's own -exec argument. The syntax is:

find [find options] -exec command1 {} \;

find . -type f -exec echo “THIS FILE: {}” \;

find . -type f -exec ls -i {} \;

Everywhere you see a {} is where the filename goes. This command1 is run once per found file.

Every exec statement ends with \; which tells find where the command ends. Since ; normally terminates a full command in bash, we escape it as \; so it gets passed through to find, and -exec knows where its command is done (and not that the whole find command as a whole is done)
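
NOTE: quoting the semicolon works too, if you find it easier to remember than escaping it:

find . -type f -exec ls -l {} ';'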

You can use whatever find options you need, put the exec at the end.

Don't combine -exec with xargs (the output of the exec'd command would be used as the arguments to xargs, which we don't want – command outputs could be huge – we just want the filename in xargs). If you need to use the output of one command on another then just use 2 xargs (or if you want pain you could use find . -exec comm1 {} \; -print0 | xargs -0 -I {} nextcommand {})

Final note, you don't need to use {}:

find . -type f -exec date \;

That will list the date as many times as it finds a file.

 

MULTIPLE FIND EXECS – run multiple commands on a found file

find [find options] -exec command1 {} \; -exec command2 {} \;

THIS WILL LIST THE FILE NAME AND TIME IT FOUND IT AT:

find [find options] -exec echo "FOUND THIS {} FILE AT:" \; -exec date \;

THIS WILL LIST THE FILE AND THEN DELETE IT (there is a better option than this with xargs that shows the result of the delete; it is in the xargs section):

find [find options] -exec echo "DELETING: {}" \; -exec rm -rf {} \;

This doesn't feed the output of command1 into command2. This, as it looks, simply runs 2 commands, since we can't tie commands together like this: "-exec command1 && command2 \;"

 

XARGS

NOTE: this shows that you should test every command (use echos instead of rms or cps or mvs) before running the command that will touch data

Use output of one command as the arguments of the next command (next command after the pipe)

SIMPLE EXAMPLE:

# command1

OUTPUT:

thing1

thing2

thing3

thing4

THEN RUNNING:

# command1 | xargs command2

Is the equivalent of running these:

command2 thing1

command2 thing2

command2 thing3

command2 thing4

Xargs will run the listed command with the given arguments (the output of the previous command that's piped to xargs) as many times as the first command has lines. Each line becomes an argument for xargs.

XARGS is used with find a lot; find lists file names one by one, and then xargs can run commands on them one by one (more efficient for bulk operations – more later)

IF USING WITH FIND USE -print0 ON FIND and -0 ON XARGS – TREATS SPACES & SPECIAL CHARS – OR ELSE ERRORS

"-0" – If there are blank spaces or special characters (including newlines) in the names, many commands will not work. This option takes care of file names with blank spaces.

“-print0” in find does the same thing -0 does in xargs

PROBLEM and ERROR IF YOU RUN LIKE THIS: find -type f -iname "*core*" | xargs ls -lisah

NO PROBLEMS:

find -type f -name "*core*" -print0 | xargs -0 ls -lisah
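
A quick way to see the difference yourself (scratch directory and filenames are made up; the one with spaces is the troublemaker):

mkdir /tmp/x0test; cd /tmp/x0test; touch "core file 1" corefile2

find . -type f -name "*core*" | xargs ls -lisah

(ls errors out: "./core", "file" and "1" arrive as three separate arguments)

find . -type f -name "*core*" -print0 | xargs -0 ls -lisah

(both files are listed correctly)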

 

XARGS PUTS ARGUMENT AT THE END – WHAT IF NEED IT ELSE WHERE or REUSE IT

-I makes occurrences of the input argument go where you tell them. First specify what the replace-string is; it can be more than 1 char, like here it's {}. So we specified -I {}, and everywhere you see {} is where the command arguments go instead of at the end.

EXAMPLE1:

# find . -print0 | xargs -0 -I {} echo THIS FILE {} IS EQUAL TO THIS FILE {}

EXAMPLE2:

# find . -name “*.bak” -print0 | xargs -0 -I {} mv {} ~/old.files

{} as the argument list marker

 

NOTE SOMETIMES IT'S BETTER TO USE -I {} THAN TO LET THE ARGUMENT FALL AT THE END

DOESN'T WORK AS INTENDED – BAD (everything lands on one line):

# find . -type f -print0 | xargs -0 echo "WOW"

WOW ./everyl3 ./everyl3org ./everyl3email

WORKS – GOOD:

# find . -type f -print0 | xargs -0 -I {} echo "WOW {}"

WOW ./everyl3

WOW ./everyl3org

WOW ./everyl3email

# find . -type f -print0 | xargs -0 -I {} echo "WOW" {}

WOW ./everyl3

WOW ./everyl3org

WOW ./everyl3email

 

XARGS multiple commands:

cat a.txt | xargs -I % sh -c 'command1; command2;'

NOTE: Use as many commands as you want; we needed to call upon /bin/sh (/bin/bash would work too) to use multiple commands – the -c argument tells sh to run the given command string

NOTE: You can use % for the arguments in the commands; everywhere there is a % is where the a.txt output will act as arguments. It's the same as the -I {} in previous examples, just -I %. So % has the same effect {} had.

NOTE: This is a useless use of cat; xargs can take arguments from STDIN (you can also put the STDIN redirection at the front – or the back of the command, it doesn't matter):

< a.txt xargs -I % sh -c 'command1; command2; ...'

NOTE: AGAIN Yes, the redirection can be at the beginning of the command.
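
For example (hypothetical input, just to show where the % lands):

printf "thing1\nthing2\n" | xargs -I % sh -c 'echo "start %"; echo "end %"'

OUTPUT:

start thing1

end thing1

start thing2

end thing2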

 

 

XARGS BULK OPERATION: Remember I said it's more efficient on certain bulk operations

XARGS FIXING OVERLOADING EXAMPLE1: Avoiding errors and resource hungry problems with the xargs and find combo on a mass delete

Imagine you're deleting millions of files, big and small, that are all sitting in the directory /sucks/

NOTE: can use any of the methods with the *, my favorites are the bottom two (they delete one by one so no memory or overloading issues) and they list deleted files (and whether each delete failed or succeeded when it finished)

NOTE: The non-xargs methods are shown first

* Well you could delete em like this:

# rm -rf /sucks/

* However the system could freeze and you won't know when each file is deleted; an alternative is to use xargs with find (or just find by itself)

# find /sucks --delete

# find /sucks -exec rm -rf {} \;

* Have it tell you what file is deleting:

# find /sucks -exec echo "DELETING: {}" \; -exec rm -rf {} \;

* OR first files then empty dirs & everything else:

# find /sucks -type f -exec rm -rf {} \;

# find /sucks -exec rm -rf {} \;

* Or do the same thing and tell you what file it's deleting:

# find /sucks  -type f -exec echo "DELETING FILE: {}" \; -exec rm -rf {} \;

# find /sucks -exec echo "DELETING: {}" \; -exec rm -rf {} \;

* Or with XARGS:

# find /sucks -print0 | xargs -0 rm -rf

* Or with Xargs list and delete file:

find /sucks -print0 | xargs -0 -I {} sh -c 'echo -n "DELETING {}:"; rm -rf {}'

* Or with XARGS files first, then everything else:

# find /sucks -type f -print0 | xargs -0 rm -rf

# find /sucks -print0 | xargs -0 rm -rf

* Delete with XARGS, listing the file before the delete and the result of the delete – using xargs with sh -c:

find /sucks -print0 | xargs -0 -I {} sh -c 'echo -n "DELETING {}:"; rm -rf {} && echo " SUCCESS" || echo " FAILED WITH EXIT CODE $?"'

* Delete with XARGS, listing the file before the delete and the result of the delete – first delete files and then everything else:

find /sucks -type f -print0 | xargs -0 -I {} sh -c 'echo -n "DELETING FILE {}:"; rm -rf {} && echo " SUCCESS" || echo " FAILED WITH EXIT CODE $?"'

find /sucks -print0 | xargs -0 -I {} sh -c 'echo -n "DELETING {}:"; rm -rf {} && echo " SUCCESS" || echo " FAILED WITH EXIT CODE $?"'

XARGS FIXING OVERLOADING EXAMPLE2: Avoiding errors and resource hungry problems with xargs and find combo on a mass copy

To copy all media files to another location, say /backup/iscsi, you can use cp as follows:

# cp -r -v -p /share/media/mp3/ /backup/iscsi/mp3

However, the cp command may fail if an error occurs, such as the number of files being too large for cp to handle. xargs in combination with find can handle such an operation nicely; xargs is more resource efficient and will not halt with an error:

# find /share/media/mp3/ -type f -name "*.mp3" -print0 | xargs -0 -r -I file cp -v -p file --target-directory=/backup/iscsi/mp3

 

Duplicates

Show Duplicates (output will be sorted):

command | sort | uniq -d

Remove Duplicates (output will be sorted):

command | sort | uniq

Remove Duplicates (not sorted – order preserved):

command | awk '!x[$0]++'

This command is simply telling awk which lines to print. The variable $0 holds the entire contents of a line and the square brackets are array access. So, for each line of the file, the entry of the array x for that line is incremented, and the line is printed only if that entry was not (!) previously set.
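
Quick demonstration (duplicate dropped, original order kept):

printf "b\na\nb\na\n" | awk '!x[$0]++'

OUTPUT:

b

a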

 

While adding drives watch dmesg and cat /proc/partitions

If you have mdev and no watch command (those go hand in hand :.):

while true; do echo "-----$(date)----"; mdev -s; (dmesg | tail -n 10; cat /proc/partitions | egrep -v "[0123456789]$"; ) | egrep "^[^$#]"; sleep 3; done;

If you have mdev and you do have watch:

while true; do echo "-----$(date)----"; (dmesg | tail -n 10; cat /proc/partitions | egrep -v "[0123456789]$"; ) | egrep "^[^$#]"; sleep 3; done;

If you have the better udev (basically it's an automatic mdev that doesn't need to be called) and you have watch:

while true; do echo "-----$(date)----"; (dmesg | tail -n 10; cat /proc/partitions | egrep -v "[0123456789]$"; ) | egrep "^[^$#]"; sleep 3; done;

If you have udev and no watch:

while true; do echo "-----$(date)----"; (dmesg | tail -n 10; cat /proc/partitions | egrep -v "[0123456789]$"; ) | egrep "^[^$#]"; sleep 3; done;

 

How to Pipe STDERR and not STDOUT

Note: typically it's STDOUT which gets piped.

So we tell STDOUT to go to hell, and tell STDERR to go where STDOUT usually goes

command1 2>&1 > /dev/null | command2

Now command2 will receive command1's stderr

Normal operations are like this:

command1 | command2

Here command2 receives only command1's stdout (not stderr)

 

Bash Time Diff Script

date11=$(date +"%s")

<Do something>

date22=$(date +"%s")

diff11=$(($date22-$date11))

echo "$(($diff11 / 60)) minutes and $(($diff11 % 60)) seconds elapsed."
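
Same thing as a paste-and-run one-liner to test it (sleep 3 stands in for the real work):

date11=$(date +"%s"); sleep 3; date22=$(date +"%s"); diff11=$((date22-date11)); echo "$((diff11 / 60)) minutes and $((diff11 % 60)) seconds elapsed."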

 

 

ifstat

Interface bandwidth continuous scrolling output

(STEP 0) INSTALL

apt-get install ifstat

(STEP 1) COMMAND LINE

ifstat -bat

-b bytes instead of bits

-a every interface

-t time stamps

 

BYOBU (some will work in screen)

CONTROL-a %  (in other words CONTROL-a SHIFT-5) – splits current region vertically and makes a new bash session in it and puts you into the new region

CONTROL-a |  (in other words CONTROL-a SHIFT-\ ) – splits horizontally and makes a new bash session in it and puts you into the new region (in old version this splits | )

CONTROL-a S  (in old version this splits —————)

CONTROL-a TAB  move to next region

CONTROL-a "  select session with arrow keys and enter goes there

CONTROL-a X   closes sessions

CONTROL-a c  creates a new session (new tab) (note a tab can have multiple sessions split in either way)

CONTROL-a A   thats a capital a (in other words CONTROL-a SHIFT-a) name a tab

CONTROL-a ?  to see help, ENTER TO QUIT out of help

CONTROL-a {  (control -a shift-[) moves current window to another window

CONTROL-a [  copy mode – move around with the keys, essentially scroll this way <– . ENTER (without CONTROL-a) to get out of copy mode – need to repeat the full key command to do it again

CONTROL-a ]  copy mode – move around with the keys, essentially scroll this way –> . ENTER (without CONTROL-a) to get out of copy mode – need to repeat the full key command to do it again

 

SCREEN (some will work in BYOBU)

CONTROL-A then c – create new shell

CONTROL-A then S – split window with horizontal line —

CONTROL-A then | – split window with vertical line |

CONTROL-A then TAB – move over to next window

CONTROL-A then " – see all windows

CONTROL-A then [ – copy mode (up down left right, page up page down move around in window)

CONTROL-A then X – close window

CONTROL-A then D – detach (processes still are running)

 

DNS with dig and nslookup

Find this                        with DIG                                    with NSLookup

Generic Syntax                   dig @server type name                       nslookup -q=type name server

A records for somehost           dig somehost.example.com                    nslookup somehost.example.com

MX records for somehost          dig mx somehost.example.com                 nslookup -q=mx somehost.example.com

SOA records for somehost         dig soa somehost.example.com                nslookup -q=soa somehost.example.com

PTR records for a.b.c.d          dig -x a.b.c.d                              nslookup a.b.c.d

Any records for somehost         dig any somehost.example.com                nslookup -q=any somehost.example.com

Same from server2                dig @server2 any somehost.example.com       nslookup -q=any somehost.example.com server2

 

WPA and WPA2 quick connect

SHORT WAY:

wpa_supplicant -B -i [int] -c <(wpa_passphrase [essid] [passphrase])

LONG WAY:

wpa_passphrase [ssidname] [ssidpassword] > /etc/wpa_supplicant/wpa_supplicant.conf

TO CONNECT:

wpa_supplicant -B -i [int] -c /etc/wpa_supplicant/wpa_supplicant.conf

 

Capture Network Traffic At Box1 And Send to Box2 (Current Shell is in Box1)

Box1: localhost, Box2: forge.remotehost.com, Interface name is ppp1 but typical and common is eth0

NOT COMPRESSED AT DESTINATION

tcpdump -i ppp1 -w -  |  ssh forge.remotehost.com -c arcfour,blowfish-cbc -C -p 50005 "cat - > /tmp/result.pcap"

COMPRESS TO GZIP AT DESTINATION

tcpdump -i ppp1 -w -  |  ssh forge.remotehost.com -c arcfour,blowfish-cbc -C -p 50005 "cat - | gzip > /tmp/result.pcap.gz"

NOTE: selected the fastest (crappiest) encryption with "-c arcfour,blowfish-cbc" and ssh compression with "-C" so that ssh and gzip can keep up with the capture. In reality it probably will over-buffer.

YOU CANNOT AUTOMATE THESE tcpdump COMMANDS WITH & UNLESS YOU USE SSH KEYS (INSTEAD OF PASSWORDS)

NOTE: After you ungzip, you might need to strip the top line as it's a header (containing the word gzip or some other garbage)

gzip -d ppp3-to-danny.pcap.gz

tail -n +2 /tmp/ppp3-to-danny.pcap > /tmp/ppp3-to-danny1.pcap

 

PUT THIS IN START SCRIPT LIKE .bash_profile or .bashrc TO GET BEST HISTORY

shopt -s histappend

HISTFILESIZE=1000000

HISTSIZE=1000000

HISTCONTROL=ignoredups

HISTTIMEFORMAT='%F %T '

shopt -s cmdhist

PROMPT_COMMAND='history -a'

 

REMOVE ALL HISTORY

unset PROMPT_COMMAND

rm -f $HISTFILE

unset HISTFILE

 

LOOKING AT EVERY DRIVES (SD or HD) STATS

apt-get install smartmontools

for i in /dev/[sh]d[abcdefghijklmnopqrstuvwxyz]; do echo "===DRIVE: $i==="; smartctl -a $i | egrep -i "serial|model|capacity|reallocated_sec|ata error|power_on"; done;

NOTE: drives keep stats automatically, can do tests while drives running, can also slow down drive to do test, can also stop drive to do other tests – all in man page of smartctl

 

SIMPLE WHILE LOOP

while true; do COMMANDS; done

while true; do cat /proc/mdstat; usleep 1000000; done

while true; do date; cat /proc/mdstat; sleep 10; done

NOTE ON UNITS: usleep is in microseconds (1 million microseconds is 1 second), sleep is in seconds

With usleep – since it's in microseconds – "usleep 1000000" is the same as "sleep 1"

microsecond can be written as us or μs (the correct Greek format)

 

QUICK RESTARTABLE RSYNC SCRIPT:

#!/bin/bash

# If rsync fails its exit code is not 0 so it restarts back at the loop

# If exit code is 0 then rsync will stop the script

while [ 1 ]

do

killall rsync

rsync -av --progress --stats --human-readable /c /mnt/dest/nasbackup/

if [ $? = 0 ] ; then

echo

echo "#########################"

echo

echo "RSYNC SUCCESSFUL"

exit

else

echo

echo "#########################"

echo

echo "RSYNC FAILED RESTARTING IN 180 SECONDS"

echo

sleep 180

fi

done

 

CAT PV and SSH to Transfer Files:

NOTE: Imagine 50505 is the SSH port instead of the regular 22 (just showing it in case you use another port) – Big P for the SCP port, little p for the SSH port

NOTE: If you use SSH you need to specify the filename as it will be saved on the destination; with SCP that's optional. With SCP you can just tell it the folder to dump to.

scp -P 50505 source.txt username@destinationserver:/filedst/

OR CAN RENAME AS YOU SAVE: scp -P 50505 source.txt username@destinationserver:/filedst/dest.txt

cat file | ssh -p SSHPORT username@destinationserver "cat - > /filedst/file"
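
NOTE: to actually get the pv progress bar mentioned in the heading, pv can take the place of cat on the sending side (assuming pv is installed; since it reads the file itself it knows the size and can show percent done):

pv file | ssh -p SSHPORT username@destinationserver "cat - > /filedst/file"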

 

 

TAR PV and SSH to Transfer folders

* WITH COMPRESSION/DECOMPRESSION:

VIA SSH WITHOUT PV: # cd /srcfolder; tar -czf - . | ssh -p 50005 root@destination.com "tar -xzvf - -C /dstfolder"

VIA SSH WITH PV – gives progress bar: # cd /srcfolder; tar -czf - . | pv -s $(du -sb . | awk '{print $1}') | ssh -p 50005 root@destination.com "tar -xzvf - -C /dstfolder"

VIA SSH WITH PV – gives progress bar & FASTEST SSH ENCRYPTION: # cd /srcfolder; tar -czf - . | pv -s $(du -sb . | awk '{print $1}') | ssh -c arcfour,blowfish-cbc -p 50005 root@destination.com "tar -xzvf - -C /dstfolder"

NOTE: Can implement -C on the ssh however, there will be no benefit logically speaking since we already did all the possible compressions at the tar level

NOTE: Also note that this way the destination will have the following folder structure /dstfolder

* WITHOUT COMPRESSIONS/DECOMPRESSION:

VIA SSH WITHOUT PV: # cd /srcfolder; tar -cf - . | ssh -p 50005 root@destination.com "tar -xvf - -C /dstfolder"

VIA SSH WITH PV – gives progress bar: # cd /srcfolder; tar -cf - . | pv -s $(du -sb . | awk '{print $1}') | ssh -p 50005 root@destination.com "tar -xvf - -C /dstfolder"

VIA SSH WITH PV – gives progress bar & FASTEST SSH ENCRYPTION: # cd /srcfolder; tar -cf - . | pv -s $(du -sb . | awk '{print $1}') | ssh -c arcfour,blowfish-cbc -p 50005 root@destination.com "tar -xvf - -C /dstfolder"

NOTE: Also note that this way the destination will have the following folder structure /dstfolder

* WITHOUT COMPRESSIONS/DECOMPRESSION FROM SSH:

VIA SSH WITHOUT PV: # cd /srcfolder; tar -cf - . | ssh -C -p 50005 root@destination.com "tar -xvf - -C /dstfolder"

VIA SSH WITH PV – gives progress bar: # cd /srcfolder; tar -cf - . | pv -s $(du -sb . | awk '{print $1}') | ssh -C -p 50005 root@destination.com "tar -xvf - -C /dstfolder"

VIA SSH WITH PV – gives progress bar & FASTEST SSH ENCRYPTION: # cd /srcfolder; tar -cf - . | pv -s $(du -sb . | awk '{print $1}') | ssh -C -c arcfour,blowfish-cbc -p 50005 root@destination.com "tar -xvf - -C /dstfolder"

NOTE: Also note that this way the destination will have the following folder structure /dstfolder

 

CHROOTING – The mounts before

FIRST MOUNT THE newroot:

mount /dev/sda1 /newroot

HERE ARE THE SYSTEM MOUNTS:

mount -t proc none /newroot/proc

mount -o bind /dev /newroot/dev

mount -t devpts none /newroot/dev/pts

mount -t sysfs none /newroot/sys

SOMETIMES YOU WILL WANT /run:

mkdir /run

mount -t tmpfs tmpfs /run

mount --bind /run /newroot/run

CHROOT TIME (and run bash shell /bin/bash instead of the default oldshell /bin/sh):

chroot /newroot /bin/bash

WHEN YOU EXIT OUT OF CHROOT

umount /newroot/run

umount /newroot/sys

umount /newroot/dev/pts

umount /newroot/dev

umount /newroot/proc

umount /newroot

 

BTRFS MOUNTS ANALYSIS QUICK SCRIPT

Just copy paste it and run it or make it into a script:

#!/bin/bash

echo "================="

echo mounts

echo "================="

echo

mount

echo

echo "========================================"

echo "The following BTRFS volumes are mounted"

echo "========================================"

btrfs filesystem show --all-devices

echo

echo "===OR SIMPLY:==="

btrfs filesystem show --all-devices | egrep "/dev/" | awk '{print $8}'

echo

for i in $(btrfs filesystem show --all-devices | egrep "/dev/" | awk '{print $8}')

do

echo "---$i is mounted---"

echo "df | egrep $i"

df | egrep $i

echo

done

echo "=========================="

echo "btrfs filesystem df <path>"

echo "=========================="

echo

# THIS PART IS AWESOME 1 START

for i in $(btrfs filesystem show --all-devices | egrep "/dev/" | awk '{print $8}')

do

echo "===FILESYSTEM DFs FOR $i==="

echo

df | egrep $i

for z in $(df | egrep $i | awk '{print $6}')

do

echo

echo "---btrfs filesystem df $z---"

echo

btrfs filesystem df $z

echo

done

done

# THIS PART IS AWESOME 1 END

echo "=============================="

echo "btrfs subvolume list -a <path>"

echo "=============================="

# THIS PART IS AWESOME 2 START

for i in $(btrfs filesystem show --all-devices | egrep "/dev/" | awk '{print $8}')

do

echo "===SUBVOLUMES FOR $i==="

echo

df | egrep $i

for z in $(df | egrep $i | awk '{print $6}')

do

echo

echo "---btrfs subvolume list -a $z---"

echo

btrfs subvolume list -a $z

echo

done

done

# THIS PART IS AWESOME 2 END

echo "================="

 

TO SELECT FILES WITHIN DATE RANGE:

* TO SELECT A RANGE:

touch --date "2007-01-01" /tmp/start

touch --date "2008-01-01" /tmp/end

find /data/images -type f -newer /tmp/start -not -newer /tmp/end

 

SUM UP DATA SIZE BY DAY:

* FOR CURRENT FOLDER:

find . -type f -print0 | xargs -0 ls -l --time-style=long-iso | awk '{sum[$6]+= $5}END{for (s in sum){print sum[s],s;}}' | sort -k2 | column -t

* FOR CURRENT FOLDER BUT A CERTAIN DATE RANGE – established above – AND NOT INCLUDING A CERTAIN FILE:

find . -type f -newer tmpstart -not -newer tmpend -not -name "Folder.cfg" -print0 | xargs -0 ls -l --time-style=long-iso | awk '{sum[$6]+= $5}END{for (s in sum){print sum[s],s;}}' | sort -k2 | column -t

 

SUM UP DATA THAT IS SELECTED:

* FOR CURRENT FOLDER:

find . -type f -ls | awk '{total += $7} END {print total}'

* FOR CURRENT FOLDER BUT A CERTAIN DATE RANGE – established above – AND NOT INCLUDING A CERTAIN FILE:

find . -type f -newer tmpstart -not -newer tmpend -not -name "Folder.cfg" -ls | awk '{total += $7} END {print total}'

 

AWESOME SCRIPT TO COUNT UP FILES BY EXTENSION:

find . -type f 2>/dev/null | sed 's|^\./\([^/]*\)/|\1/|; s|/.*/|/|; s|/.*.\.| |p; d' | sort | uniq -ic

BIG VERSION:

find . -type f 2>/dev/null \

    | sed 's|^\./\([^/]*\)/|\1/|; s|/.*/|/|; s|/.*.\.| |p; d' \

    | sort | uniq -ic \

    | sort -b -k2,2 -k1,1rn \

    | awk '

BEGIN{

    sep = "+-------+------+--------+"

    print sep "\n| count | ext  | folder |\n" sep

}

{ printf("| %5d | %-4s | %-6s |\n", $1, $3, $2) }

END{ print sep }'

 

DELETING EVERYTHING IN CERTAIN FOLDER

FIRST MAKE SURE YOU'RE IN THE RIGHT FOLDER: cd /folder_which_will_have_everything_in_it_deleted

Deleting with the following command:

# rm -rf *

This might fail if you have too many files in the folder

It will say something like "Argument list too long"

Here is an option to delete all the files

# find . -type f -exec echo -n {} \;  -exec rm -rf {} \; -exec echo " DELETED" \;

For every file it lists it, deletes it, and tells you DELETED after

To delete everything, not just files

# find . -exec echo -n {} \;  -exec rm -rf {} \; -exec echo " DELETED" \;

Or maybe do it like this, files first and then directories and everything else

# find . -type f -exec echo -n {} \;  -exec rm -rf {} \; -exec echo " DELETED" \;

# find . -exec echo -n {} \;  -exec rm -rf {} \; -exec echo " DELETED" \;

 

MORE INFO ON SCSI DEVICES

apt-get install lsscsi

lsscsi -sgdlp

 

MDADM RAID DEFAULT SPEED LIMIT MAX AND MIN (in case you changed them)

echo 200000 > /proc/sys/dev/raid/speed_limit_max

echo 1000 > /proc/sys/dev/raid/speed_limit_min

 

ZFS – Checking Arc Stats

Make sure you have the following package: sunwmdb package, which will enable dynamic reading of ARC statistics:

If you have solaris with debian:

apt-get update

apt-get install sunwmdb

To check Arc:

echo "::arc" | mdb -k

 

ZFS – To Set Arc Meta Limit to bigger value:

Need to have mdb (from package sunwmdb)

8 gig: 0x200000000 = 8 GiB exactly (8 gibibytes, which is about 8.59 gigabytes; google just calls it 8 gigabytes)

echo arc_meta_limit/Z 0x200000000 | mdb -kw

9 gig: 0x240000000 = 9 GiB exactly (9 gibibytes, which is about 9.66 gigabytes)

echo arc_meta_limit/Z 0x240000000 | mdb -kw

10 gig: 0x280000000 = 10 GiB exactly (10 gibibytes, which is about 10.74 gigabytes)

echo arc_meta_limit/Z 0x280000000 | mdb -kw

13.5 gig: 0x360000000 = 13.5 GiB exactly (13.5 gibibytes, which is about 14.5 gigabytes)

echo arc_meta_limit/Z 0x360000000 | mdb -kw
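
To work out the hex for some other size, bash arithmetic plus printf should do it (shown here recomputing the 8 GiB value; plug the result into the mdb command above):

printf '0x%X\n' $((8 * 1024 * 1024 * 1024))

OUTPUT: 0x200000000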

 

Tar All Logs and Send to FTP Server

TAR ALL LOGS INTO A FILE IN TMP THAT WILL HAVE DATE:

tar -zcvf /tmp/all-logs-$(date +%F-%T | tr ":" "-") /etc /var/log
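
The $(date +%F-%T | tr ":" "-") part just builds a timestamp with dashes instead of colons, so the archive name comes out something like this (example timestamp made up):

echo "/tmp/all-logs-$(date +%F-%T | tr ":" "-")"

OUTPUT: /tmp/all-logs-2014-01-14-12-30-59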

FTP SYNTAX: NOTE: can use other methods to transfer not just ftp (rsync, pv, cat, tar, scp, ssh, gzip)

ncftpput -u username ftpiporhostname remotelocation localfile

Remotelocation has to be a folder location (that exists; well, / always exists and that's where I will dump)

EXAMPLE:

ncftpput -u bhbmaster ftp.drivehq.com / both.tar.gz

ncftpput -u bhbmaster 66.220.9.50 / both.tar.gz

 

COPYING PARTITION TABLES BETWEEN DRIVES (sfdisk for MBR and sgdisk for GPT)

MBR – sfdisk

To backup an MBR partition table using 'sfdisk': # sfdisk -d /dev/sda > sda.table

To restore an MBR partition table from backup using 'sfdisk': # sfdisk /dev/sda < sda.table

Clone with the backup file: # sfdisk /dev/sdb < sda.table

Clone partition table from SDA to SDB (copy from SDA to SDB): # sfdisk -d /dev/sda | sfdisk /dev/sdb

Confirm by listing(printing) partition table of source: # sfdisk -l /dev/sda

Confirm by listing(printing) partition table of destination: # sfdisk -l /dev/sdb

NOTE: source and destination partition tables should match after clone (obviously)

NOTE: sfdisk -d is for dump, -l is for list

GPT – sgdisk

To backup a GPT partition table using 'sgdisk': # sgdisk -b sdX.gpt /dev/sdX

To restore a GPT partition table from a backup file using 'sgdisk': # sgdisk -l sdX.gpt /dev/sdX

To clone a partition table from one drive to another using 'sgdisk': # sgdisk -R=Destination Source

NOTE: the syntax puts the destination first (not the source), unlike most commands where the source comes first. So keep that in mind and don't mess up the command

NOTE: sometimes that command doesn't go through, so try with and without the =, and consider the space (sometimes it's best not to include it)

Other likeable forms:

# sgdisk -R=/dev/sdb /dev/sda

# sgdisk -R/dev/sdb /dev/sda

After cloning GPT tables you will need to randomize the GUID of the destination:

# sgdisk -G /dev/sdb

Confirm by listing(printing) partition table of source: # sgdisk -p /dev/sda

Confirm by listing(printing) partition table of destination: # sgdisk -p /dev/sdb

NOTE: -R is for replicate (also known as copy or clone), -G is for GUID or randomizing GUID, -p is for print

14Jan/14

GOOD RESOURCES and LINKS

The purpose of this page is to give you some handy resources that I thought were very useful and still do. I go to these sites often, whether I'm troubleshooting, programming, or trying to learn something new.
Programming

With so many tutorials out there for programming, it's sometimes hard to find the right one for you. Well it took a while but eventually I found the right one for me.

The New Boston is great; it's a group of people doing tutorials on specific languages. They sound young, but don't let that fool you. Each video is just the right length. They are not redundant, so everything is covered once and beautifully. I pretty much mass-download their whole YouTube channel and watch the videos.
Networking
Eli the Computer Guy on YouTube – his channel is amazing. He covers the general information. Although redundant, he has a good way of making you remember everything he ever said. Props to Eli the Computer Guy.
Netgear
For product datasheets and manuals and new firmware I prefer to go to support.netgear.com. Hit the Business button and type in the name of your product in the search bar. Also, if you have a home device, click Home products and type in the model of your device in the search bar.
There are a lot of great insights here, and if it wasn't for this site I wouldn't have learned all that I know with Netgear.
Another great Netgear link is simply www.netgear.com… every product is listed here and it has an accurate and useful product spec page for every device and also related products. So if you have a module and you're looking for a good cable that Netgear might make, well then find the module and go to the Related Products tab.
For storage products like the READYNAS and the new READYDATA, go to www.readynas.com
For the intrusion-prevention firewalls (UTMs) and STMs go to www.prosecure.com
Either way they will all be listed on netgear.com and support.netgear.com
Other Programs I like and you should download
1. First of all go to ninite.com – Not a lot of people know about this site but it's a mass program "downloader" perfect for bringing your freshly formatted system up with the newest, most useful apps out there.
2. I recommend everyone use KeePass – It's a password safe. It's great and it's safe. No one will hack it. If you want to be extra sure, make everything accessed with a certificate file or key file.
3. I also recommend the program called Fences – I'm a clean and organization OCD freak so everything must be nicely arranged. All my desktop items sit in these light green opaque boxes. They are like panels for icons; if my "fence"/panel was only big enough for 2 icons and I dropped 10 icons in there then it would put a little scroll bar in it, which again sits on the desktop. Everything is beautifully arranged. Let's say I get sick of seeing all the icons, then I can double click on the desktop and all of a sudden everything disappears, not a glitch; just double click again to make everything reappear like some programming magic or something.
4. Evernote – with so much info out there on the technological internet web and how it's reaching out to hand-held devices (or should I just say reached, we are at the brink of a new amazing era and I love being part of it) — anyways — it's just hard to remember everything — our little human brains are getting too small for all this — I bet our kids will micro-evolve to have more brain hard-drive just to sustain all this information in this new age — anyways this program is an on-the-go, always synced and password protected source for your e-memory – now all my memory isn't just held in the electrical impulses that go off in my head but also on the electrical storage interface we call the web.
OneNOTE
Best app to store notes. You can put the notes anywhere. All notes have sections they go in. I just have 1 notebook I store everything in. That notebook can be shared with my peers. I can set a password on sections (that hold pages) so they can't see into them unless they know the password. The passwords actually encrypt the sections. Everything you see on my infotinks also exists on my OneNote.
Other Informative Websites
w3schools.com — has excellent guides on HTML, CSS, XML, Javascript, SQL, and much more.
github.com — an excellent version control hosting site
 
Another good app – adds more options to title bar right click
14Jan/14

Good Windows Maintenance Programs

CCLEANER
########
MY ARTICLE ON THIS, URL: http://www.infotinks.com/windows—how-to-clean-it-up—the-real-way
Good Crap Cleaner – file cleaning (Analyze then run) and registry cleaning (Run, save a backup .reg file for safe keeping, then delete all finds)
PC Decrapifier
##############
MY ARTICLE ON THIS, URL: http://www.infotinks.com/windows—how-to-clean-it-up—the-real-way
http://pcdecrapifier.com/
Can uninstall multiple things at once, good for removing a lot of adware at once
Folder Size
###########
* Calculates folder sizes in background and can have them displayed along side explorer
Speccy
######
* Best computer specs goes good with CPU-Z and GPU-Z
Defraggler
##########
A good defrag program.
I have 3 disks
Analyze to get fragmentation, then can run Benchmark Drive to get RANDOM READ SPEED (RRS)
PRE DEFRAG
==========
C: ssd, so it doesn't recommend defrag as the drive's life will be shorter, 18% fragmentation, RRS: 61.01, freespace: 40.1gb (36%)
D: hdd, 30% fragmentation, freespace: 553.1 gb (57%), RRS: 1.96 MB/s, schedule: Tuesday @ 4 AM: command: (C:\Program Files\Defraggler\df64.exe “D:” /ts /user “Kostia” /appPath “C:\Program Files\Defraggler”)
E: hdd, 20% fragmentation, freespace: 943.2 gb (25%), RRS: 1.25 MB/s, schedule: First Wednesday of every Month @ 2AM: command: (C:\Program Files\Defraggler\df64.exe “E:” /ts /user “Kostia” /appPath “C:\Program Files\Defraggler”)
* Note schedules are setup from program and then to view or edit that schedule you must delete it from WINDOWS TASK SCHEDULER and setup a new one from the program
POST DEFRAG
===========
D: (took overnight) Defrag complete, fragmentation 0%, freespace: same 533.1 GB (57%), RRS: 2.74 MB/s, 2nd test: 2.77 MB/s
E: (took 3 days) Defrag complete, fragmentation 0%, freespace: a little less 912 GB (because I put data on it), RRS: 1.81 MB/s
14Jan/14

FREEPBX WITH GOOGLE VOICE – SETUP FREE VOIP

Google Voice and FREEPBX version 3 beta 
======================================
 
NOTE: Firewall settings just need ALL OUTBOUND ALLOWED. it worked for me without messing with my firewall
This is my own guide on how to set up Google Voice and FreePBX with an Xlite softphone which will have an extension, and calls will be recorded. All those programs are free (except VMware Workstation, so if you don't have it, just install FreePBX on VMware Player – although you can't autostart VMs at Windows logon with VMware Player – or just run FreePBX on a real hardware box).
 
Obvious side note:
Note where I put 10.11.12.33, that's just what I chose for the IP of my PBX server. Where I put 1234, that's just what I chose for my extension, and where I put koss, well that's my name; change it to match whatever name you want.
 
This is meant more as a guide for me
 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
My network is 10.11.12.x
Instead of 192.168.1.x
So bear with me and use your brain to figure out the numbers for your network.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
#### SECTION A ####
—presetup freepbx (i set it up on a vm but you could just as well set it up on a pc)—-
1) download the freepbx beta 3 version distro
2) install the operating system as a VM (using VMware Workstation 9), make sure that VM (give the VM at least an 8gb harddrive and 512mb ram) auto loads when Windows starts. Make a text file called runfreepbxatstart.txt and rename it to runfreepbxatstart.bat and put it in your START-STARTUP folder so that the code below runs when Windows starts, so put this code in that file:
     @echo off
     chdir "C:\Program Files\VMware\VMware Workstation"
     vmrun start "C:\Users\koss\Documents\Virtual Machines\FreePBX-beta\FreePBX-beta.vmx"
     taskkill /IM vmware.exe
3) install it with default settings; when it asks for network settings, leave it at the default dhcp (you will set a root/admin password, note it down)
     FREEPBX admin/root: password
4) once it loads, log in to the command line screen with the given password, then run "ifconfig" to see the ip you will access the freepbx web server with ex: 10.11.12.115
5) with another pc go to a browser and access 10.11.12.115, go into "PBX Administrator"–(at this step you will be interrupted like this parenthesis and prompted to make a main admin user aside from admin/root, remember its username and password)–>"admin"->"system admin"->"Network settings". Change the settings to be like so if you're using a similar network subnet [which I doubt you are]:
IP address: 10.11.12.33
SUBNET MASK: 255.255.255.0
GATEWAY: 10.11.12.1
MAC ADDRESS: Blank (you will notice a lot of fields in all of the freepbx/trixbox apps settings pages; keep in mind most of them can be kept blank)
Hit Submit or Apply (notice after you apply the settings, you might need to hit another apply button. I will not remind you every time you need to click apply in these guides, but wherever I can I will, because that is just ridiculous)
 
# SECTION B
—presetup google voice and google chat—-
SIDE NOTE: My case was more complex; I don't use username@gmail.com, I have my own domain username@infotinks.com. However this still works with "google apps" just as it would with regular gmail. Note with my domain (through GoDaddy) I only have the MX record for email set to point to google <- but that is irrelevant to voice; if all else fails, make a gmail account. The only setting that all this has any effect on is when you add your google voice to the motif setting on freepbx. motif is the module that drives google voice. Instead of setting my username as "username@gmail.com" I set it as "username@infotinks.com". That's all I wanted to mention in this side note, just in case you have your own domain with google apps and you weren't sure how to tweak the settings to work for you, so now you know. It's just the username field in that one setting.
 
1) with you google account log into to gmail and go to settings and go to chat tab and change the following:
Auto-add suggested contacts:    
     Automatically allow people I communicate with often to chat with me and see when I'm online. <– uncheck (this is the default option that we don't want)
     Only allow people that I’ve explicitly approved to chat with me and see when I’m online. <– check
2) go to voice.google.com and log in with your google account. It will want to set a number to verify with, set your cell phone number, it will call your cell and verify with a 2 digit number that will be on the screen. Once verified you can then select a phone number of your liking based either on a keyword (like I picked KOSBOS and it found me a number like 555KOSBOS7) or on location (so if you select utah it will pick 435 or 801 or 385). For the purposes of this tutorial I will use the phone number 5553334444. Then the final thing which I will explain 3 times: (1) go to Google Voice Settings and make sure the Number forwards to CHAT instead of your PHONE.(2) Very important. Make sure your google voice number points to GOOGLE CHAT and not your PHONE NUMBER that you activated google voice with. (3) Again thats set through the voice.google.com and go to settings(gear icon) and in the Phones tab make sure the Forwards calls to setting is set to google chat and not your phone number.
– NOW YOU ARE SET ON THE GOOGLE SIDE OF THINGS
 
#### SECTION C ####
—back to freepbx—
0) Log in to the FreePBX using the browser, so ip 10.11.12.33, and click PBX Administrator and log in with the Username and Password you set up in Section A (the username that the PBX system made you create in the browser and not during setup of the operating system)
SET UP THE MEAT OF IT (this auto sets up the trunk and all the crazy settings that make it work)
1) Go to “connectivity”->”Google Voice Motif” settings, and add
Google Voice Username: username@infotinks.com (again if you use username@gmail.com then put “username@gmail.com” and not just the username)
Google Voice Password: put your google account password
Google Voice Phone Number: 5553334444 (put the number that you picked with Google Voice, make sure you don't include dashes or the leading 1)
CHECK Edit Trunk
CHECK Outbound Routes
UNCHECK Send Unanswered to GoogleVoice Voicemail
Hit Submit
 
MAKE THE EXTENTION
2) go to "Applications"->"Extensions", Add a Generic SIP Device. Here there will be a lot of confusing fields, most can be left blank; here is what I set, and make sure you remember this, as these will be set on the XLITE softphone in the future
 
extension: 1234
display name: KOSS
DEVICE OPTIONS: secret(THIS IS IMPORTANT AS THIS WILL AUTHENTICATE THE PHONES/SOFTPHONES, THIS NEEDS TO HAVE LOWERCASE LETTERS AND NUMBERS): pass1234
CHECK add to isymphony
UNDER ISYMPHONY PROFILE SETTING: create profile CHECK AND profile password OF YOUR CHOOSING IN THIS CASE I SELECTED pass1234 as well, SO AS NOT TO CAUSE CONFUSION
 
CONNECTIVITY – INBOUND ROUTES(note outbound routes are automade when we did the motif step in step 1)
3) "Connectivity"->"Inbound routes" This will direct where a phone call will go because it can go to an IVR, a Directory, directly to an extension etc. In this case I will send it directly to the extension we made above
Description: route1
Call Recording: Allow
Set Destination: Extensions and select your extension <1234> KOSS
SIDE NOTE:  IF YOU SET THE DESTINATION TO DIRECTORY or PHONE BOOK, and submit you can then test if your setup works so far. At this point with another phone you should be able to call on your google voice number and reach some sort of directory, which of course wont work but you know that your freepbx routes correctly(inbound direction wise) and etc.
 
APPLICATION: for this guide, we are going to record all the conversation to extension 1234
4) “Applications”->”Record KOSS”, hit add and set up the following:
Description "Record1234", Call Recording Mode: Allow, Destination Extensions <1234> KOSS
 
iSYMPHONY:
5) "admin"->"iSymphony" Admin Username: admin <– leave as default, Admin Password: pass1234 <– or whatever you want, something that no one but you has access to. The rest at default and hit submit, so the only setting I changed was the password.
 
SIDE NOTE:
This whole time the FREEPBX SYSTEM STATUS SCREEN (“reports->freepbx system status”) gives me a warning “SYMLINK from modules failed” (/etc/asterisk/chan_dahdi.conf from dahdiconfig/etc (Already exists, not a link….) but EVERYTHING STILL WORKS. So some errors are okay to ignore.
 
#### SECTION D ####
—back to XLITE—
1) Download the latest XLITE
2) Softphone Menu-> Account Settings:
Account Name: My PBX <– left at default
Protocol: SIP <- left at default
USER DETAILS:
User ID: 1234 (the extension I used)
DOMAIN: 10.11.12.33 (the ip of the PBX server)
PASSWORD: pass1234 (The device password set at step 2 in SECTION C)
Display Name: KOSS (probably needs to match the step 2 settings)
Authorization name: 1234 (Matches USER ID from above just the extension)
OK
And now you should be able to call out from the softphone and call in
VOILA
 
There are many more settings to mess around with; check out YouTube videos on freepbx (and trixbox – trixbox is similar to freepbx, the interface even looks alike) and etc.
 
### FREEBEE: SECTION E ###
In this Setup Xlite Phone in another Remote Network
Note all of the above before this section was taking place in my home network. The Xlite phone was on a Windows PC and the FreePBX was on a Virtual Machine (also running on the Windows PC, but that's irrelevant)
Surprisingly, the call quality was amazing. This requires some port forwards (and port triggers if you want to be safe).
 
–FREEPBX–
Setup another extension for the WORK phone using the instructions from above. For testing purposes just change the Inbound Route to point to this extension for now. Afterwards you need to change the Inbound Route to point to an IVR or a Phone Book so you can access both extensions.
 
–ROUTER AT HOME NETWORK WHERE FREEPBX IS—
Every router is different for this and not all routers support port triggering (this NETGEAR WNDR3400v2 home router does and works great with this)
* Port Forward: Port 5060 UDP –> LAN IP of FREEPBX port 5060 UDP 
* Port Trigger: 5060 UDP –(opens)–> 10000 UDP to 20000 UDP to LAN IP of FREEPBX
(or if you cannot do Port Triggering then Port Forward 10000 UDP to 20000 UDP to the same ports of your FREEPBXs LAN IP)
* No need for router setup at the Work/Remote Network: Note at the Remote Network you just need to have all outbound allowed – I'm behind a couple of NATs and it still works wonderfully
 
–XLITE ON WINDOWS PC AT WORK–
1) Download the latest XLITE
2) Softphone Menu-> Account Settings:
Account Name: My PBX <– left at default
Protocol: SIP <- left at default
USER DETAILS:
User ID: 4321 (the extension I used for WORK)
DOMAIN: www.infotinkshouse.com (You can use the Public/WAN Ip of your Router – This uses UDP 5060 to talk to the FREEPBX)
PASSWORD: pass1234 (The device password set at step 2 in SECTION C)
Display Name: Work (probably needs to match the step 2 extension settings in SECTION C)
Authorization name: 4321 (Matches USER ID from above just the extension, equal to extension number)