14Jan/14

LINUX & CYGWIN – Memory Dump Across Network via NETCAT or SSH – foremost – volatility

First, dump memory across the network with wmr piped to netcat, or wmr piped to ssh. Then analyze the memory dump with foremost or volatility: foremost carves files out of the dump, volatility examines the system's state.
 
MEMORY DUMP ACROSS NETWORK
##########################
 
On Windows: Install Cygwin, copy WMR (Windows Memory Reader 1.0) into the cygwin\bin folder, and also install Cygwin's netcat and ssh (openssh). I recommend installing apt-cyg:
 
http://www.cygwin.com/install.html (note: the install takes forever even if you don't select a lot of packages, which is weird. I only select the following packages: wget, tar, gawk, bzip2, subversion, vim [some of those will not appear in the list; that's okay, they are part of the base system, so just skip them]. Then I install apt-cyg, https://code.google.com/p/apt-cyg/, which lets me install anything from the Cygwin repository with the usual "apt-cyg install [program name]". wmr is not part of the package system, so you need to download it below.)
 
http://cybermarshal.com/index.php/cyber-marshal-utilities/windows-memory-reader - extract wmr.exe into your Cygwin bin directory
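Once apt-cyg is working, grabbing the rest of the tools is quick. A minimal sketch (the package names are my assumption; try "apt-cyg find <name>" if one of them misses):
 
# apt-cyg install nc openssh pv
# which wmr nc ssh pv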
 
On Linux: have an SSH server and netcat installed (netcat is only needed for the WITH NETCAT section)
 
WITH NETCAT
###########
 
 
ON LINUX
——–
 
Setup a listening server
 
WITH PV:
# nc -l -vvv -p 8888 | pv -s `echo "8 * 1024 * 1024 * 1024" | bc` > CEO-8gb-RAM.dd
 
NO PV:
# nc -l -p 8888 > CEO-8gb-RAM.dd
 
WHERE: -p 8888 is the tcp port that must be forwarded from the router to the Linux machine; 8 * 1024 * 1024 * 1024 is the RAM in bytes (change the 8 to however many gigs of RAM you have); CEO-8gb-RAM.dd is the name of the saved file. In this case it saves the file to the current working directory, which you can check with the 'pwd' command.
 
ON WINDOWS
———-
 
Open cygwin:
# wmr - | nc 10.11.12.21 8888
 
SIMPLEST FORM:
LINUX:# nc -l -p 8888 > FileToSave.dd
WINDOWS:# wmr - | nc 10.11.12.21 8888
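RAM changes while you read it, so you can't just dump twice and compare; instead, hash the stream as it leaves Windows and compare against the received file. A sketch using tee with bash process substitution (works in Cygwin's bash; md5sum writes its digest to stderr so it doesn't pollute the pipe):
 
WINDOWS:# wmr - | tee >(md5sum >&2) | nc 10.11.12.21 8888
LINUX:# nc -l -p 8888 > FileToSave.dd && md5sum FileToSave.dd
 
If the two hashes match, the image arrived intact.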
 
 
WITH SSH
########
 
ON WINDOWS
———-
 
WITH PV – progress bar:
First get the size of your RAM in bytes: open cmd or cygwin and type "wmic memorychip get capacity" to get the sizes in bytes of all the memory chips...
 
# wmic memorychip get capacity
Capacity
4294967296
8589934592
 
SIZEOFMEM=`echo "4294967296 + 8589934592" | bc`
# wmr - | pv -s $SIZEOFMEM | ssh -p 40004 -C root@savelocation.com "cat - > /forensics/T430-8gb-RAM.dd"
 
Or manually figure out what 4294967296 + 8589934592 is... in this case it's 12884901888 bytes
# wmr - | pv -s 12884901888 | ssh -p 40004 -C root@savelocation.com "cat - > /forensics/T430-8gb-RAM.dd"
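If you'd rather not do the math by hand, you can let awk total the wmic output for you. A sketch (assumes wmic prints just the Capacity header plus one number per chip; awk's numeric coercion also swallows the Windows CR line endings):
 
# SIZEOFMEM=`wmic memorychip get capacity | awk '/[0-9]/ {sum+=$1} END {print sum}'`
# echo $SIZEOFMEM
12884901888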
 
WITHOUT PV:
# wmr - | ssh -p 40004 -C root@savelocation.com "cat - > /kostia/forensics/T430-8gb-RAM.dd"
 
ALSO CAN DO WITH FASTEST ENCRYPTION:
# wmr - | pv -s 12884901888 | ssh -p 40004 -c arcfour,blowfish-cbc -C root@savelocation.com "cat - > /forensics/T430-8gb-RAM.dd"
 
# wmr - | pv -s $SIZEOFMEM | ssh -p 40004 -c arcfour,blowfish-cbc -C root@savelocation.com "cat - > /forensics/T430-8gb-RAM1.dd"
 
NOTE: in this case port 40004 tcp on savelocation.com forwards to my Linux box port 22 which is the ssh server
 
SIMPLEST FORM:
WINDOWS: # wmr - | ssh root@savelocation.com "cat - > /tmp/FileToSave.dd"
 
 
ONCE YOU HAVE IMAGE
###################
 
You can use programs like volatility and foremost to extract information
 
To Extract Info From Memory Dump Use:
 
———-
Volatility
———-
 
 
Download the latest tar.gz, extract it, and run the script like this:
 
# python vol.py [commands]
 
Example uses:
 
# python vol.py -f ~/Desktop/win7_trial_64bit.raw imageinfo
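A few more plugins worth running once imageinfo tells you the profile: pslist for processes, netscan for network connections, hivelist for registry hives. A sketch; Win7SP1x64 is a placeholder, use whatever imageinfo suggested for your dump:
 
# python vol.py -f ~/Desktop/win7_trial_64bit.raw --profile=Win7SP1x64 pslist
# python vol.py -f ~/Desktop/win7_trial_64bit.raw --profile=Win7SP1x64 netscan
# python vol.py -f ~/Desktop/win7_trial_64bit.raw --profile=Win7SP1x64 hivelist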
 
 
————————–
Foremost – file extraction
————————–
 
# apt-get install foremost
# cd /whereimageisat
# foremost -t all -T -i CEO-8gb-RAM.dd
 
-T makes an output folder stamped with the date; -t all tries to carve all supported file types
(or you can do -t jpg); -i points at the input file
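For example, to carve only pictures and documents into a folder of your choosing (a sketch; -o sets the output directory):
 
# foremost -t jpg,png,pdf,doc -T -i CEO-8gb-RAM.dd -o /forensics/carved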
 
BONUS SCRIPT:
############
 
Make sure all your dumps have a dot-something extension; I prefer .dd or .iso
 
Let's say you save all of your memory dump images in one directory called /forensics, and you would like to foremost all the files there at once, given that all the files in /forensics are memory dumps or dds of filesystems...
 
cd /forensics
 
for i in *; do echo "WORKING ON: $i"; FN=`echo $i | cut -d'.' -f1`; echo "  making dir $FN"; mkdir $FN; cd $FN; echo "  foremosting begun @ `date`"; (foremost -t all -i ../$i > /dev/null 2>&1); cd ..; done;
 
If I wanted to process just the files that begin with boom, xp, or vista (say I had boom123.iso, boom123-xp.iso, vista-michigan.iso, and xp-minesota.dd), the following would work:
 
cd /forensics
 
for i in boom* xp* vista*; do echo "WORKING ON: $i"; FN=`echo $i | cut -d'.' -f1`; echo "  making dir $FN"; mkdir $FN; cd $FN; echo "  foremosting begun @ `date`"; (foremost -t all -i ../$i > /dev/null 2>&1); cd ..; done;
 
–sample output:–
WORKING ON: boomer-win2003-2006-03-17.img
  making dir boomer-win2003-2006-03-17
  foremosting begun @ Fri May 31 00:42:50 PDT 2013
WORKING ON: boomer-win2k-2006-02-27-0824.img
  making dir boomer-win2k-2006-02-27-0824
  foremosting begun @ Fri May 31 00:43:25 PDT 2013
WORKING ON: xp-laptop-2005-06-25.img
  making dir xp-laptop-2005-06-25
  foremosting begun @ Fri May 31 00:44:03 PDT 2013
WORKING ON: xp-laptop-2005-07-04-1430.img
  making dir xp-laptop-2005-07-04-1430
  foremosting begun @ Fri May 31 00:44:31 PDT 2013
WORKING ON: vista-beta2.img
  making dir vista-beta2
  foremosting begun @ Fri May 31 00:45:00 PDT 2013
 
BONUS SCRIPT REVAMPED
#####################
 
Make sure all your dumps have a dot-something extension; I prefer .dd or .iso
 
Put the following script in the /forensics folder
 
# touch memdumpscript
# chmod +x memdumpscript
# vi memdumpscript
#!/bin/bash
INPUT1=$1
echo "Goals: $INPUT1"
for i in $INPUT1; do echo "WORKING ON: $i"; FN=`echo $i | cut -d'.' -f1`; echo "  making dir $FN"; mkdir $FN; cd $FN; echo "  foremosting begun @ `date`"; (foremost -t all -i ../$i > /dev/null 2>&1); echo "  exit status of foremost: $?"; echo "  finished @ `date`";  cd ..; done;
 
Now if you have the following:
 
/forensics/memdumpscript
/forensics/computer1dump.dd
/forensics/computer2dump.dd
/forensics/compram16gb.dd
/forensics/sallypc.dd
 
You can cover all of the foremosts like this
 
./memdumpscript "comp* sally*"
 
The end result will be:
 
/forensics/computer1dump/output/ <- foremost dump in there
/forensics/computer2dump/output/ <- foremost dump in there
/forensics/compram16gb/output/ <- foremost dump in there
/forensics/sallypc/output/ <- foremost dump in there
 
It uses shell glob expansion 🙂
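If your images might ever have spaces in their names, the word-splitting on $INPUT1 will bite you. A slightly more defensive sketch of the same idea: pass the globs unquoted and let the shell expand them, then quote everything inside (-o replaces the cd dance):
 
#!/bin/bash
# usage: ./memdumpscript comp* sally*   (the shell expands the globs)
for i in "$@"; do
  echo "WORKING ON: $i"
  FN="${i%%.*}"                          # same effect as cut -d'.' -f1
  echo "  making dir $FN"
  mkdir -p "$FN"
  echo "  foremosting begun @ `date`"
  foremost -t all -i "$i" -o "$FN/output" > /dev/null 2>&1
  echo "  exit status of foremost: $?"
  echo "  finished @ `date`"
done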
14Jan/14

NOTE TO SELF: Java aliasing

JAVA – note to self – Aliasing – When REPOINTS and when COPY happens

The article below is generated by the code at the very bottom of the article:
===============================================================


ALIASING
########
########

Here is the concept of aliasing: which assignments in Java copy, and which make a new reference instead of a copy.
Whichever one it is, the effect is subtle (or large) and can cause bugs unless thought about carefully.
The purpose of this is to make you comfortable with both aspects of aliasing: where things copy and where things make new references.

RULES OF THUMB:
Primitives get copied
Objects (even wrapper types) repoint to the same reference instead of copying (note: arrays are objects)
Strings, being the silly exception, act like primitives in this case even though we know they are objects (because strings are immutable)

THE STEPS THAT I WILL BE TAKING TO EXAMINE ALIASING
1. Initialize 2 things (namely a 1st thing and a 2nd thing)
2. Give them different values
3. Set the 2nd thing equal to the 1st thing (I could have done 1st thing equal to 2nd thing; it doesn't matter, that's trivial)
4. Change the value of the 2nd thing (this is also trivial; we could change the value of the 1st thing instead)
5. Check if the change of the 2nd thing in step 4 changed the 1st thing

HOW TO INTERPRET RESULTS
At step 5, if the 2nd thing and the 1st thing are the same (meaning: after setting the 2nd thing equal to the 1st and changing the 2nd, the 1st thing changed too), then:
* That's a pointer pointing at the same memory location, not an actual copy
At step 5, if the 2nd thing and the 1st thing are different (meaning: after setting the 2nd thing equal to the 1st and changing the 2nd, the 1st thing remains unchanged), then:
* That's a copy; a whole new memory location was made

ALIASING WITH STRINGS AND INTS
##############################
##############################

1 STRING
########
Declaring strings A1 and B1 without setting them to anything
A1 is set to 'Cat' & B1 is set to 'Dog'
After setting – A1 is: Cat
After setting – B1 is: Dog
*** B1 set to A1 *** (Since String=String, think of it like primitive=primitive, like int=int; remember strings are Objects but behave like primitives sometimes because they are immutable)
After equal – A1 is: Cat
After equal – B1 is: Cat
B1 set to Human
After changing B1 – A1 is: Cat
After changing B1 – B1 is: Human
*** RESULT: Both different so its a COPY ***

2 STRING ARRAY – COPY ARRAY ALIASING
####################################
Init new strings array Array1 and Array2
Init new strings array – value of Array1:[Ljava.lang.String;@37fb1e
Init new strings array – value of Array2:[Ljava.lang.String;@1479feb
Init new strings array – value of Array1[1]:null
Init new strings array – value of Array2[1]:null
Setting Array1[1] to 'Cat' and Array2[1] to 'Dog'
After value setting – value of Array1:[Ljava.lang.String;@37fb1e
After value setting – value of Array2:[Ljava.lang.String;@1479feb
After value setting – value of Array1[1]:Cat
After value setting – value of Array2[1]:Dog
***Setting Array2 equal to Array1*** (think object set equal to object)
After equal setting – value of Array1:[Ljava.lang.String;@37fb1e
After equal setting – value of Array2:[Ljava.lang.String;@37fb1e
After equal setting – value of Array1[1]:Cat
After equal setting – value of Array2[1]:Cat
Changing Array2[1] to ‘Man’
After changing Array2 – value of Array1:[Ljava.lang.String;@37fb1e
After changing Array2 – value of Array2:[Ljava.lang.String;@37fb1e
After changing Array2 – value of Array1[1]:Man
After changing Array2 – value of Array2[1]:Man
*** RESULT: Both same so its a REPOINT ***

3 STRING ARRAY – COPY ARRAY ENTRY ALIASING
##########################################
Init new strings array SArray1 and SArray2
Init new strings array – value of SArray1:[Ljava.lang.String;@1592174
Init new strings array – value of SArray2:[Ljava.lang.String;@a352a5
Init new strings array – value of SArray1[1]:null
Init new strings array – value of SArray2[1]:null
Setting SArray1[1] to 'Cat' and SArray2[1] to 'Dog'
After value setting – value of SArray1:[Ljava.lang.String;@1592174
After value setting – value of SArray2:[Ljava.lang.String;@a352a5
After value setting – value of SArray1[1]:Cat
After value setting – value of SArray2[1]:Dog
*** Setting SArray1[1] equal to SArray2[1] *** (Think String = String like above)
After equal setting – value of SArray1:[Ljava.lang.String;@1592174
After equal setting – value of SArray2:[Ljava.lang.String;@a352a5
After equal setting – value of SArray1[1]:Cat
After equal setting – value of SArray2[1]:Cat
Changing SArray2[1] to ‘Man’
After changing SArray2 – value of SArray1:[Ljava.lang.String;@1592174
After changing SArray2 – value of SArray2:[Ljava.lang.String;@a352a5
After changing SArray2 – value of SArray1[1]:Cat
After changing SArray2 – value of SArray2[1]:Man
*** RESULT: Both different so its a COPY ***

4 INT
#####
PRE-COMMENT: Assigning primitive ints – the result is a copy (not a new reference)
Init new int – value of i1: cant say yet – or would ERROR
Init new int – value of i2: cant say yet – or would ERROR
Setting i1 to 5 and i2 to 100
After setting – Value of i1:5
After setting – Value of i2:100
*** Setting i1 equal to i2 *** (think int=int, so primitives)
After equalizing – Value of i1:100
After equalizing – Value of i2:100
Changing i2 by adding 1 to i2 using ‘i2++;’ syntax
After changing i2 – Value of i1:100
After changing i2 – Value of i2:101
*** RESULT: Both different so its a COPY ***

5 ARRAY OF INTS (PRIMITIVE) – REASSIGNING AN ARRAY ENTRY
########################################################
PRE-COMMENT: Reassigning a primitive int entry – the result is a copy (not a reference)
Initializing iA1 and iA2
After init – Value of iA1:[I@1ea5671
After init – Value of iA2:[I@1d15445
Setting iA1[1] to 33 and iA2[1] to 500
After setting both – Value of iA1:[I@1ea5671
After setting both – Value of iA2:[I@1d15445
*** Setting iA1[1] equal to iA2[1] *** (think int=int, so primitives)
After equal – Value of iA1:[I@1ea5671
After equal – Value of iA2:[I@1d15445
Setting iA2[1]++
After changing iA2 – Value of iA1:[I@1ea5671
After changing iA2 – Value of iA2:[I@1d15445
After changing iA2 – Value of iA1[1]:500
After changing iA2 – Value of iA2[1]:501
*** RESULT: Both different so its a COPY ***

6 ARRAY OF INTS (PRIMITIVE) – REASSIGNING THE ARRAY
####################################################
PRE-COMMENT: Reassigning an array, which is an object – the result is a new reference (not a copy)
After initializing iAa1 and iAa2
After initializing – Value of iAa1:[I@1f3aa07
After initializing – Value of iAa2:[I@1fc2fb
Setting iAa1[1] to 320 and iAa2[1] to 520
After setting – Value of iAa1:[I@1f3aa07
After setting – Value of iAa2:[I@1fc2fb
*** Setting iAa1 equal to iAa2 *** (Think Array = Array so object = object)
After equal – Value of iAa1:[I@1fc2fb
After equal – Value of iAa2:[I@1fc2fb
Changing iAa2 by doing this iAa2[1]++;
After changing iAa2 – Value of iAa1:[I@1fc2fb
After changing iAa2 – Value of iAa2:[I@1fc2fb
After changing iAa2 – Value of iAa1[1]:521
After changing iAa2 – Value of iAa2[1]:521
*** RESULT: Both same so its a REPOINT ***

7 INTEGER – WRAPPER
###################
Initializing iW1 and iW2
After initializing – Value of iW1: ERROR
After initializing – Value of iW2: ERROR
Setting iW1 and iW2 to 123 and 900 respectively
After setting – Value of iW1:123
After setting – Value of iW2:900
*** Setting iW1 equal to iW2 *** (Think Wrapper = Wrapper so object = object)
After equal – Value of iW1:900
After equal – Value of iW2:900
Changing iW2 by adding 1 to it with iW2++
After changing iW2 – Value of iW1:900
After changing iW2 – Value of iW2:901
*** RESULT: Both different so its a COPY ***

8 ARRAY OF INTEGERS (WRAPPER) – REASSIGNING AN ARRAY ENTRY
############################################################
Initializing iWa1 and iWa2
After initializing – Value of iWa1:[Ljava.lang.Integer;@139eeda
After initializing – Value of iWa2:[Ljava.lang.Integer;@704baa
After initializing – Value of iWa1[1]:null
After initializing – Value of iWa2[1]:null
Setting iWa1[1] and iWa2[1] to 50 and 80 respectively
After setting – Value of iWa1:[Ljava.lang.Integer;@139eeda
After setting – Value of iWa2:[Ljava.lang.Integer;@704baa
After setting – Value of iWa1[1]:50
After setting – Value of iWa2[1]:80
*** Setting iWa1[1] equal to iWa2[1] *** (Think Wrapper = Wrapper so object = object)
After equal – Value of iWa1:[Ljava.lang.Integer;@139eeda
After equal – Value of iWa2:[Ljava.lang.Integer;@704baa
After equal – Value of iWa1[1]:80
After equal – Value of iWa2[1]:80
Changing iWa2[1] by adding 1 to it with iWa2[1]++
After changing iWa2 – Value of iWa1:[Ljava.lang.Integer;@139eeda
After changing iWa2 – Value of iWa2:[Ljava.lang.Integer;@704baa
After changing iWa2 – Value of iWa1[1]:80
After changing iWa2 – Value of iWa2[1]:81
*** RESULT: Both different so its a COPY ***

9 ARRAY OF INTEGERS (WRAPPER) – REASSIGNING THE ARRAY
#######################################################
Initializing iWWa1 and iWWa2
After init – Value of iWWa1:[Ljava.lang.Integer;@77a7f9
After init – Value of iWWa2:[Ljava.lang.Integer;@1b4fad5
After init – Value of iWWa1[1]:null
After init – Value of iWWa2[1]:null
Setting iWWa1[1] and iWWa2[1] to 510 and 810 respectively
After setting – Value of iWWa1:[Ljava.lang.Integer;@77a7f9
After setting – Value of iWWa2:[Ljava.lang.Integer;@1b4fad5
After setting – Value of iWWa1[1]:510
After setting – Value of iWWa2[1]:810
*** Setting iWWa1 equal to iWWa2 *** (Think Array = Array so object = object)
After equal – Value of iWWa1:[Ljava.lang.Integer;@1b4fad5
After equal – Value of iWWa2:[Ljava.lang.Integer;@1b4fad5
After equal – Value of iWWa1[1]:810
After equal – Value of iWWa2[1]:810
Changing iWWa2[1] by adding 1 to it with iWWa2[1]++
After changing iWWa2 – Value of iWWa1:[Ljava.lang.Integer;@1b4fad5
After changing iWWa2 – Value of iWWa2:[Ljava.lang.Integer;@1b4fad5
After changing iWWa2 – Value of iWWa1[1]:811
After changing iWWa2 – Value of iWWa2[1]:811
*** RESULT: Both same so its a REPOINT ***

OBJECT AND ALIASING
###################
###################
Making class Pepsi with int oz = 12 and String taste = "taste"
Also going to show the effect of ++i and i++ during the step where we change the 2nd thing

10 OBJECT ALIASING – ON INT
############################
After init ‘Pepsi pepsi1’ – pepsi1.oz: ERROR
After init ‘Pepsi pepsi2’ – pepsi2.oz: ERROR
After setting – pepsi1.oz: 12
After setting – pepsi2.oz: 12
*** Going to set em equal like this pepsi2 = pepsi1; *** (Think Object = Object)
After equal – pepsi1.oz: 12
After equal – pepsi2.oz: 12
pepsi2.oz++ value changed after this line:12
++pepsi2.oz value changed before this line:14
After Changing pepsi2.oz by 2
End result – pepsi1.oz: 14
End result – pepsi2.oz: 14
*** RESULT: Both same so its a REPOINT ***

11 ARRAY OF OBJECTS – THE FULL ARRAY ALIASING – ON INT
######################################################
PRE-COMMENT: Since the subject in mind is an object (arrays are objects) – an assignment will change the reference and will not do a copy.
After init – pepsis1[1].oz: ERROR
After init – pepsis2[1].oz: ERROR
After setting – pepsis1[1].oz: 12
After setting – pepsis2[1].oz: 12
*** Going to set em equal like this pepsis2 = pepsis1; *** (Think Array = Array so Object = Object)
After Equal – pepsis1[1].oz: 12
After Equal – pepsis2[1].oz: 12
pepsis2[1].oz++ value changed after this line:12
++pepsis2[1].oz value changed before this line:14
After Changing pepsis2[1].oz by 2
End result after change – pepsis1[1].oz: 14
End result after change – pepsis2[1].oz: 14
*** RESULT: Both same so its a REPOINT ***

12 ARRAY OF OBJECTS – AN OBJECT ALIASING – ON INT
#################################################
PRE-COMMENT: Since the subject in mind is an object, it will act like an object – an assignment will change the reference and will not do a copy.
PRE-COMMENT: Since the above case was also an object, the effect will be the same.
After init – pepsisa1[1].oz: ERROR
After init – pepsisa2[1].oz: ERROR
After setting – pepsisa1[1].oz: 12
After setting – pepsisa2[1].oz: 12
*** Going to set em equal like this pepsisa2[1] = pepsisa1[1]; *** (Think Object = Object, why object because pepsisa#[X] is an object)
After equal – pepsisa1[1].oz: 12
After equal – pepsisa2[1].oz: 12
pepsisa2[1].oz++ value changed after this line:12
++pepsisa2[1].oz value changed before this line:14
After Changing pepsisa2[1].oz by 2
End Result – pepsisa1[1].oz: 14
End Result – pepsisa2[1].oz: 14
*** RESULT: Both same so its a REPOINT ***

13 ARRAY OF OBJECTS – AN OBJECTS VALUE ALIASING – ON INT
########################################################
PRE-COMMENT: Since the subject in mind is a value, in this case an int, it will act like an int. So the primitive gets copied.
After Init – pepsisaa1[1].oz: ERROR
After Init – pepsisaa2[1].oz: ERROR
After Setting – pepsisaa1[1].oz: 12
After Setting – pepsisaa2[1].oz: 12
*** Going to set em equal like this pepsisaa2[1].oz = pepsisaa1[1].oz; *** (Think int = int, so primitive = primitive)
After Equal – pepsisaa1[1].oz: 12
After Equal – pepsisaa2[1].oz: 12
pepsisaa2[1].oz++ value changed after this line:12
++pepsisaa2[1].oz value changed before this line:14
After Changing pepsisaa2[1].oz by 2
End Result – pepsisaa1[1].oz: 12
End Result – pepsisaa2[1].oz: 14
*** RESULT: Both different so its a COPY ***

14 OBJECT ALIASING – ON STRING
############################
After init ‘Pepsi Spepsi1’ – Spepsi1.taste: ERROR
After init ‘Pepsi Spepsi2’ – Spepsi2.taste: ERROR
After setting – Spepsi1.taste: The Best
After setting – Spepsi2.taste: The Best
*** Going to set em equal like this Spepsi2 = Spepsi1; *** (Think Object = Object)
After equal – Spepsi1.taste: The Best
After equal – Spepsi2.taste: The Best
After Changing taste to OK of Spepsi2.taste
End result – Spepsi1.taste: OK
End result – Spepsi2.taste: OK
*** RESULT: Both same so its a REPOINT ***

15 ARRAY OF OBJECTS – THE FULL ARRAY ALIASING – ON STRING
######################################################
PRE-COMMENT: Since the subject in mind is an object (arrays are objects) – an assignment will change the reference and will not do a copy.
After init – Spepsis1[1].taste: ERROR
After init – Spepsis2[1].taste: ERROR
After setting – Spepsis1[1].taste: The Best
After setting – Spepsis2[1].taste: The Best
*** Going to set em equal like this Spepsis2 = Spepsis1; *** (Think Array = Array so Object = Object)
After Equal – Spepsis1[1].taste: The Best
After Equal – Spepsis2[1].taste: The Best
After Changing taste to OK of Spepsis2[1].taste
End result after change – Spepsis1[1].taste: OK
End result after change – Spepsis2[1].taste: OK
*** RESULT: Both same so its a REPOINT ***

16 ARRAY OF OBJECTS – AN OBJECT ALIASING – ON STRING
#################################################
PRE-COMMENT: Since the subject in mind is an object, it will act like an object – an assignment will change the reference and will not do a copy.
PRE-COMMENT: Since the above case was also an object, the effect will be the same.
After init – Spepsisa1[1].taste: ERROR
After init – Spepsisa2[1].taste: ERROR
After setting – Spepsisa1[1].taste: The Best
After setting – Spepsisa2[1].taste: The Best
*** Going to set em equal like this Spepsisa2[1] = Spepsisa1[1]; *** (Think Object = Object, why object because pepsisa#[X] is an object)
After equal – Spepsisa1[1].taste: The Best
After equal – Spepsisa2[1].taste: The Best
After Changing taste to OK of Spepsisa2[1].taste
End Result – Spepsisa1[1].taste: OK
End Result – Spepsisa2[1].taste: OK
*** RESULT: Both same so its a REPOINT ***

17 ARRAY OF OBJECTS – AN OBJECTS VALUE ALIASING – ON STRING
########################################################
PRE-COMMENT: Since the subject in mind is a value, in this case a String, it acts like a primitive here (strings are immutable). So it gets copied.
After Init – Spepsisaa1[1].taste: ERROR
After Init – Spepsisaa2[1].taste: ERROR
After Setting – Spepsisaa1[1].taste: The Best
After Setting – Spepsisaa2[1].taste: The Best
*** Going to set em equal like this Spepsisaa2[1].taste = Spepsisaa1[1].taste; *** (Think String = String; strings are objects, but since they are immutable it behaves like primitive = primitive, as in the string cases above)
After Equal – Spepsisaa1[1].taste: The Best
After Equal – Spepsisaa2[1].taste: The Best
After Changing taste to OK of Spepsisaa2[1].taste
End Result – Spepsisaa1[1].taste: The Best
End Result – Spepsisaa2[1].taste: OK
*** RESULT: Both different so its a COPY ***

Process finished with exit code 0

######################################################################################################################
######################################################################################################################
######################################################################################################################
######################################################################################################################
######################################################################################################################

 

14Jan/14

Hashes – Md5sum – Also: Best Windows Hash Tool

MD5SUM & HASHES
################
Check hashes - the fingerprint of a file; each file has its own signature.
sha1sum (or shasum) works as well and has more bits.
We use hashes to ensure a file is still legit.
If you know the hash of some file should be abcd1234 and after you download it the hash comes out to 14234sdd, then you might as well redownload it.
md5sum only has so many possibilities; sha1sum and other algos have more possibilities (thus more bits and characters in the answer). The more bits/characters, the less likely you are to have a collision.
What's a collision? It's when 2 different files or things return the same fingerprint/hash, meaning the hash algo is broken (or not as good) or needs more bits.
IN WINDOWS!!!
#############
BEST TOOL:
It hooks into Explorer: right-click on a file, hit Properties, and you will have a new tab with the hashes. This works on directories too.
IN LINUX!!!
############
TO CHECK HASH OF EVERYTHING
==========================
cd /dir
md5sum *
TO CHECK HASH OF A SELECTION OF THINGS
======================================
Select things with find tool
For example:
I want every file that has a folder named Jeremy or Chris somewhere in its path:
find -type f -iwholename "*Jeremy*" -or -iwholename "*chris*"
Now run md5sum thru it, a couple of ways:
With -exec:
find -type f \( -iwholename "*Jeremy*" -or -iwholename "*chris*" \) -exec md5sum {} \;
With xargs:
find -type f \( -iwholename "*Jeremy*" -or -iwholename "*chris*" \) -print0 | xargs -0 md5sum
(The escaped parens group the -or so it applies to both name tests.) If you don't use -print0 and -0 and your files have spaces, this will not work.
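You can also save the hashes to a file and let md5sum re-verify everything later with its standard -c flag:
 
find -type f \( -iwholename "*Jeremy*" -or -iwholename "*chris*" \) -print0 | xargs -0 md5sum > jeremy-chris.md5
md5sum -c jeremy-chris.md5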
14Jan/14

Firefox Extensions for a Web Developer

FIREFOX IS THE BETTER BROWSER FOR WEB DEVELOPMENT (but make sure your site works with the 3 FIC browsers: Firefox, Internet Explorer, and Chrome)

You probably want your Chrome bookmarks imported into Firefox: http://support.mozilla.org/en-US/kb/import-bookmarks-google-chrome


 

THE ADDONS BEGIN


Download Statusbar

SQLite Manager

Web Developer

https://addons.mozilla.org/en-US/firefox/addon/web-developer/

 


THE REST I GOT FROM HERE – ALL CREDIT TO THE LINK BELOW FOR HELPING ME FIND THESE NEXT ONES:

http://www.cssreflex.com/2010/04/20-awesome-firefox-add-ons-for-web-developers.html/

View Source Chart

Console²

Firebug

SEO Doctor

MeasureIt

IE Tab

QuickRestart

14Jan/14

EASTER EGG FOUND – Starwars Traceroute


In Linux just type what I have below; in Windows type: "tracert 216.81.59.173"
$ traceroute 216.81.59.173
traceroute to 216.81.59.173 (216.81.59.173), 30 hops max, 60 byte packets
 1  216.182.236.100 (216.182.236.100)  0.921 ms  0.576 ms  0.780 ms
 2  72.21.222.16 (72.21.222.16)  2.022 ms  2.042 ms  2.220 ms
 3  205.251.229.168 (205.251.229.168)  2.164 ms  2.094 ms  2.106 ms
 4  v308.core1.sjc2.he.net (216.218.142.49)  2.211 ms  2.219 ms  2.190 ms
 5  10gigabitethernet14-7.core1.lax2.he.net (184.105.213.5)  14.262 ms  14.263 ms  14.233 ms
 6  10gigabitethernet2-3.core1.phx2.he.net (184.105.222.85)  21.324 ms  21.061 ms  21.312 ms
 7  10gigabitethernet5-3.core1.dal1.he.net (184.105.222.78)  44.491 ms  44.642 ms  44.575 ms
 8  10gigabitethernet5-4.core1.atl1.he.net (184.105.213.114)  64.801 ms  64.798 ms  64.967 ms
 9  216.66.0.26 (216.66.0.26)  65.133 ms  64.868 ms  65.097 ms
10  * * *
11  Episode.IV (206.214.251.1)  107.767 ms  109.731 ms  111.801 ms
12  A.NEW.HOPE (206.214.251.6)  110.690 ms  108.626 ms  106.405 ms
13  It.is.a.period.of.civil.war (206.214.251.9)  107.405 ms  111.731 ms  112.729 ms
14  Rebel.spaceships (206.214.251.14)  110.564 ms  113.877 ms  108.447 ms
15  striking.from.a.hidden.base (206.214.251.17)  109.454 ms  105.922 ms  106.293 ms
16  have.won.their.first.victory (206.214.251.22)  109.006 ms  108.990 ms  109.194 ms
17  against.the.evil.Galactic.Empire (206.214.251.25)  105.714 ms  105.703 ms  106.079 ms
18  During.the.battle (206.214.251.30)  106.177 ms  106.140 ms  108.898 ms
19  Rebel.spies.managed (206.214.251.33)  108.897 ms  109.299 ms  106.181 ms
20  to.steal.secret.plans (206.214.251.38)  106.613 ms  106.787 ms  106.996 ms
21  to.the.Empires.ultimate.weapon (206.214.251.41)  106.823 ms  107.594 ms  107.592 ms
22  the.DEATH.STAR (206.214.251.46)  107.131 ms  107.047 ms  107.089 ms
23  an.armored.space.station (206.214.251.49)  106.411 ms  106.630 ms  106.538 ms
24  with.enough.power.to (206.214.251.54)  107.057 ms  106.403 ms  106.086 ms
25  destroy.an.entire.planet (206.214.251.57)  106.586 ms  106.468 ms  107.310 ms
26  Pursued.by.the.Empires (206.214.251.62)  107.403 ms  107.854 ms  107.820 ms
27  sinister.agents (206.214.251.65)  107.891 ms  106.235 ms  106.780 ms
28  Princess.Leia.races.home (206.214.251.70)  106.365 ms  107.509 ms  107.611 ms
29  aboard.her.starship (206.214.251.73)  108.358 ms  107.923 ms  107.972 ms
30  custodian.of.the.stolen.plans (206.214.251.78)  106.948 ms  107.353 ms  106.857 ms
14Jan/14

ANDROID – ADB Backup/Restore- ADB Extractor (abe) – Extracting adbs with openssl

ADB BACKUPS
############
 
* Note: I'm citing my sources throughout the article.
 
Backing up and Restoring
========================
Good articles: http://forum.xda-developers.com/showthread.php?t=1420351
Another good article: http://www.thesuperusersguide.com/adb-backup–restore.html
 
First install the Android SDK Platform Tools on Windows; usually it goes to "C:\android-sdk-windows\platform-tools"
 
On the phone, enable USB Debugging from Settings->Developer Options
 
Open Explorer and go to the "C:\android-sdk-windows" directory, hold shift while right-clicking on "platform-tools", and select "open command window here"; that's a good shortcut to open a cmd window into any folder
 
adb devices
 
To see connected devices
 
If you see your device listed, continue.
 
BACKING UP
———-
 
adb backup -apk -shared -all -f C:\outputfile.ab
 
* NOTE ABOUT FLAGS (excerpts from: http://forum.xda-developers.com/showthread.php?t=1420351)
-apk|-noapk: This flags whether or not the APKs should be included in the backup or just the apps’ respective data. I personally use -apk just in case the app isn’t available in the Market, so that I don’t have to go hunt it down again. The default is -noapk.
-shared|-noshared: This flag is used to “enable/disable backup of the device’s shared storage / SD card contents; the default is noshared.”, which for the Nexus I would certainly flag to -shared, but from my test, it did not restore all of the contents of my internal storage, so I recommend backing up music, pictures, video, and other internal storage items manually, just to be on the safe side. The default is -noshared.
-all: This flag is just an easy way to say to backup ALL apps. The packages flag (further on) can be used to choose individual packages, but unless you’re just wanting to backup a specific application, use -all for a full system backup.
-f <filename>: specifies file save location on the computer
 
Now on phone you should see a screen, if you see a lock screen then unlock it.
 
Click backup now, or set a password and back up. Note: if you set a password, using ADB Extractor becomes a hassle, and using simple Linux tools like openssl becomes impossible (or at least undocumented on the web as far as I can tell, short of digging through the source code of Android or abe.jar)
 
RESTORING
———-
 
adb restore C:\backup\mybackup.ab
 
Then look at phone, unlock the lock if you need to and if you need to type in a password then type it in.
 
How to Extract ADB backups With ADB Extractor & Manual extraction with openssl
===============================================================================
 
LINK: http://sourceforge.net/projects/adbextractor/
* Download the zip file.
* Extract the abe.jar
* JUST COPY abe.jar INTO THE DIRECTORY WHERE YOU HAVE YOUR adb backup FILE (.bak OR .ab OR .whatever) AS LONG AS IT WAS MADE WITH adb backup (OR MAYBE AS LONG AS IT WAS JUST MADE WITH adb)
* JUST OPEN CYGWIN INTO DIRECTORY AND RUN:
# java -jar abe.jar unpack backup.bak backup.tar [password]
 
* Prerequisites for the above method:
– To test, run "java -jar abe.jar" then just CONTROL-C; if it works you will have no output, if it doesn't you will get errors as seen below.
– openjdk-7-jdk (openjdk-6-jdk causes errors)
– check the current active version with "java -version"
– if the output of "java -version" is:
—- java version "1.7.0_21" GOOD
—- java version "1.6.0_27" NOT GOOD
The error message when running the program on the wrong Java looks like this:
Exception in thread "main" java.lang.UnsupportedClassVersionError: org/nick/abe/Main : Unsupported major.minor version 51.0
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:634)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:277)
        at java.net.URLClassLoader.access$000(URLClassLoader.java:73)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:212)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:266)
        at org.eclipse.jdt.internal.jarinjarloader.JarRsrcLoader.main(JarRsrcLoader.java:56)
 
* Note: running the program with the incorrect prereqs doesn't hurt the source file
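If you hit the UnsupportedClassVersionError above, switch the active java to the 1.7 one. On Debian/Ubuntu-style Linux that's a one-liner (on Cygwin/Windows, put the JDK 7 bin directory first in PATH instead):
 
# update-alternatives --config java
# java -version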
 
* Note: if there is a password, make sure you have the following:
Oracle Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files 7, if you are going to work with password-encrypted backups.
You need to install the files local_policy.jar and US_export_policy.jar under the JRE's lib/security folder, for example:
– For Windows:
C:\Program Files\Java\jdk1.7.0_09\jre\lib\security\
C:\Program Files\Java\jre7\lib\security\
C:\Program Files (x86)\Java\jdk1.7.0_07\jre\lib\security\
C:\Program Files (x86)\Java\jre7\lib\security\
– For Linux or BSD:
/usr/local/jdk1.7/jre/lib/security/ 
/usr/lib/jvm/java-7-openjdk-*/jre/lib/security/
/usr/local/openjdk7/jre/lib/security/
– For OS X:
/Library/Java/JavaVirtualMachines/jdk1.7.0_09.jdk/Contents/Home/jre/lib/security/
 
Another way to extract, without abe.jar (only works with non-password-encrypted backups):
 
# dd if=nexus7.ab bs=24 skip=1 | openssl zlib -d > nexus7.tar
Get pv to get a progress bar 🙂 LINUX: "apt-get install pv" or CYGWIN: "apt-cyg install pv"
# dd if=nexus7.ab bs=24 skip=1 | pv | openssl zlib -d > nexus7.tar
The above pv command just shows speed but gives no measure of when the task will be complete (no ETA and no percentage bar); the method below has a real progress bar because it feeds the file size in bytes to pv's "-s" argument
# SRC1="nexus7.ab"; dd if=${SRC1} bs=24 skip=1 | pv -s `stat --format="%s" ${SRC1}` | openssl zlib -d > nexus7.tar
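Once you have the tar, it's worth listing it before unpacking; adb backup tars normally put app data under apps/ and SD-card contents under shared/ (that layout is my recollection, so verify on yours):
 
# tar -tvf nexus7.tar | head
# tar -xvf nexus7.tar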
 
Sidenote about packing the file back up, as written in the forum:
# dd if=nexus7.ab bs=24 count=1 of=gta3.ab ; openssl zlib -in gta3.tar >> gta3.ab
* The "dd if=nexus7.ab bs=24 count=1 of=gta3.ab" portion of the command grabs the 24-byte header, as it's the same on all (or most) adbs; the next portion appends the openssl zlib compressed file
WITH PROGRESS BAR, CAN SEE SPEED:
# dd if=nexus7.ab bs=24 count=1 of=gta3.ab ; openssl zlib -in gta3.tar | pv >> gta3.ab
14Jan/14

SQL HOW TO

##############

What is MySQL:

##############

By: infotinks

This is just a copy-paste of some articles from the source below. All credit goes to them. I only claim that I formatted it like this and concatenated it all like so.

WORK CITED/BIBLIOGRAPHY: http://www.w3schools.com/sql/default.asp

### How data stored in relational db? ###

* Data is stored in tables

* Relational database has tables

* A table has a name; tables have columns and rows

* Record equals row

* columns have a name

* structured like English

A query window is where you can type a command or two and execute them. You can put keywords like GO and -- to control the flow of the commands.
 
There is also a GUI equivalent for every command-line query; here, however, I will just cover the queries.
 
To execute the commands above it multiple times, use GO n, where n is an integer: GO 10 will repeat the commands above the GO 10 times.
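If you want to follow along on Linux with MySQL, every statement below can be pasted into the mysql command-line client. A sketch, assuming a local server and a root account:
 
# mysql -u root -p
mysql> CREATE DATABASE testdb;
mysql> USE testdb;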

##############

Make DATABASE:

##############

CREATE DATABASE database_name

###########

Make Table:

###########

NOTE:

Make sure to specify the key, and NULL or NOT NULL; you will see what this means after reading this section

===Simplest Form:===

CREATE TABLE table_name

(

column_name1 data_type,

column_name2 data_type,

column_name3 data_type,

….

)

===Example:===

CREATE TABLE Persons

(

P_Id int,

LastName varchar(255),

FirstName varchar(255),

Address varchar(255),

City varchar(255)

)

CONSTRAINTS ON MAKING A TABLE

#############################

When making a table can have 6 different constraints (important ones are NULL and PRIMARY KEY, so ill start with those):

### NOT NULL ###

By default, a table column can hold NULL values. Thus, if nothing is specified (as in the simple case above), it's like saying NULL, meaning the column can hold empty/null values.

The NOT NULL constraint enforces a field to always contain a value. This means that you cannot insert a new record, or update a record without adding a value to this field.

===example:===

CREATE TABLE Persons

(

P_Id int NOT NULL,

LastName varchar(255) NOT NULL,

FirstName varchar(255),

Address varchar(255),

City varchar(255)

)

### UNIQUE ###

Meaning you can't have more than one of the same value, like the ID of people: you can't have entry number 12 and entry number 12 again.

The UNIQUE constraint uniquely identifies each record in a database table.

The UNIQUE and PRIMARY KEY constraints both provide a guarantee for uniqueness for a column or set of columns.

A PRIMARY KEY constraint automatically has a UNIQUE constraint defined on it.

===Example:===

Slightly different form in SQL Server/Oracle/MS Access vs MySQL:

===MySQL:===

CREATE TABLE Persons

(

P_Id int NOT NULL,

LastName varchar(255) NOT NULL,

FirstName varchar(255),

Address varchar(255),

City varchar(255),

UNIQUE (P_Id)

)

===SQL Server / Oracle / MS Access:===

CREATE TABLE Persons

(

P_Id int NOT NULL UNIQUE,

LastName varchar(255) NOT NULL,

FirstName varchar(255),

Address varchar(255),

City varchar(255)

)

===Multiple Example (SAME FOR ALL SQL programs):===

===MySQL / SQL Server / Oracle / MS Access:===

To allow naming of a UNIQUE constraint, and for defining a UNIQUE constraint on multiple columns, use the following SQL syntax:

CREATE TABLE Persons

(

P_Id int NOT NULL,

LastName varchar(255) NOT NULL,

FirstName varchar(255),

Address varchar(255),

City varchar(255),

CONSTRAINT uc_PersonID UNIQUE (P_Id,LastName)

)

### PRIMARY KEY ###

The PRIMARY KEY constraint uniquely identifies each record in a database table.

Primary keys must contain unique values.

A primary key column cannot contain NULL values.

Each table should have a primary key, and each table can have only ONE primary key.

===Example:===

Its different with MySQL vs SQL Server/Oracle/MS access:

===MySQL:===

CREATE TABLE Persons

(

P_Id int NOT NULL,

LastName varchar(255) NOT NULL,

FirstName varchar(255),

Address varchar(255),

City varchar(255),

PRIMARY KEY (P_Id)

)

===SQL Server / Oracle / MS Access:===

CREATE TABLE Persons

(

P_Id int NOT NULL PRIMARY KEY,

LastName varchar(255) NOT NULL,

FirstName varchar(255),

Address varchar(255),

City varchar(255)

)

===Multiple Example (SAME FOR ALL SQL programs):===

===MySQL / SQL Server / Oracle / MS Access:===

To allow naming of a PRIMARY KEY constraint, and for defining a PRIMARY KEY constraint on multiple columns, use the following SQL syntax:

CREATE TABLE Persons

(

P_Id int NOT NULL,

LastName varchar(255) NOT NULL,

FirstName varchar(255),

Address varchar(255),

City varchar(255),

CONSTRAINT pk_PersonID PRIMARY KEY (P_Id,LastName)

)

### FOREIGN KEY ###

SQL FOREIGN KEY Constraint

A FOREIGN KEY in one table points to a PRIMARY KEY in another table.

Let’s illustrate the foreign key with an example. Look at the following two tables:

The “Persons” table:

P_Id, LastName, FirstName, Address, City

1, Hansen, Ola, Timoteivn 10, Sandnes

2, Svendson, Tove, Borgvn 23, Sandnes

3, Pettersen, Kari, Storgt 20, Stavanger

The “Orders” table:

O_Id, OrderNo, P_Id

1, 77895, 3

2, 44678, 3

3, 22456, 2

4, 24562, 1

Note that the “P_Id” column in the “Orders” table points to the “P_Id” column in the “Persons” table.

The “P_Id” column in the “Persons” table is the PRIMARY KEY in the “Persons” table.

The “P_Id” column in the “Orders” table is a FOREIGN KEY in the “Orders” table.

The FOREIGN KEY constraint is used to prevent actions that would destroy links between tables.

The FOREIGN KEY constraint also prevents invalid data from being inserted into the foreign key column, because it has to be one of the values contained in the table it points to.

SQL FOREIGN KEY Constraint on CREATE TABLE

The following SQL creates a FOREIGN KEY on the “P_Id” column when the “Orders” table is created:

===MySQL:===

CREATE TABLE Orders

(

O_Id int NOT NULL,

OrderNo int NOT NULL,

P_Id int,

PRIMARY KEY (O_Id),

FOREIGN KEY (P_Id) REFERENCES Persons(P_Id)

)

===SQL Server / Oracle / MS Access:===

CREATE TABLE Orders

(

O_Id int NOT NULL PRIMARY KEY,

OrderNo int NOT NULL,

P_Id int FOREIGN KEY REFERENCES Persons(P_Id)

)

===Multiple Example (SAME FOR ALL SQL programs):===

===MySQL / SQL Server / Oracle / MS Access:===

To allow naming of a FOREIGN KEY constraint, and for defining a FOREIGN KEY constraint on multiple columns, use the following SQL syntax:

CREATE TABLE Orders

(

O_Id int NOT NULL,

OrderNo int NOT NULL,

P_Id int,

PRIMARY KEY (O_Id),

CONSTRAINT fk_PerOrders FOREIGN KEY (P_Id)

REFERENCES Persons(P_Id)

)

### CHECK ###

The CHECK constraint is used to limit the value range that can be placed in a column.

If you define a CHECK constraint on a single column it allows only certain values for this column.

If you define a CHECK constraint on a table it can limit the values in certain columns based on values in other columns in the row.

===MySQL:===

CREATE TABLE Persons

(

P_Id int NOT NULL,

LastName varchar(255) NOT NULL,

FirstName varchar(255),

Address varchar(255),

City varchar(255),

CHECK (P_Id>0)

)

===SQL Server / Oracle / MS Access:===

CREATE TABLE Persons

(

P_Id int NOT NULL CHECK (P_Id>0),

LastName varchar(255) NOT NULL,

FirstName varchar(255),

Address varchar(255),

City varchar(255)

)

===Multiple Example (SAME FOR ALL SQL programs):===

===MySQL / SQL Server / Oracle / MS Access:===

To allow naming of a CHECK constraint, and for defining a CHECK constraint on multiple columns, use the following SQL syntax:

CREATE TABLE Persons

(

P_Id int NOT NULL,

LastName varchar(255) NOT NULL,

FirstName varchar(255),

Address varchar(255),

City varchar(255),

CONSTRAINT chk_Person CHECK (P_Id>0 AND City='Sandnes')

)

### DEFAULT ###

The DEFAULT constraint is used to insert a default value into a column.

The default value will be added to all new records, if no other value is specified.

===NOTE SAME ON ALL SQL Programs:===

===My SQL / SQL Server / Oracle / MS Access:===

CREATE TABLE Persons

(

P_Id int NOT NULL,

LastName varchar(255) NOT NULL,

FirstName varchar(255),

Address varchar(255),

City varchar(255) DEFAULT 'Sandnes'

)

===WITH GETDATE():===

The DEFAULT constraint can also be used to insert system values, by using functions like GETDATE():

===Example:===

CREATE TABLE Orders

(

O_Id int NOT NULL,

OrderNo int NOT NULL,

P_Id int,

OrderDate date DEFAULT GETDATE()

)

More Contraints:

################

### Auto Increment ###

Auto-increment allows a unique number to be generated when a new record is inserted into a table.

AUTO INCREMENT a Field

Very often we would like the value of the primary key field to be created automatically every time a new record is inserted.

We would like to create an auto-increment field in a table.

===Syntax for MySQL===

The following SQL statement defines the “P_Id” column to be an auto-increment primary key field in the “Persons” table:

CREATE TABLE Persons

(

P_Id int NOT NULL AUTO_INCREMENT,

LastName varchar(255) NOT NULL,

FirstName varchar(255),

Address varchar(255),

City varchar(255),

PRIMARY KEY (P_Id)

)

===Syntax for SQL Server===

The following SQL statement defines the “P_Id” column to be an auto-increment primary key field in the “Persons” table:

CREATE TABLE Persons

(

P_Id int PRIMARY KEY IDENTITY,

LastName varchar(255) NOT NULL,

FirstName varchar(255),

Address varchar(255),

City varchar(255)

)

===Syntax for Access===

The following SQL statement defines the “P_Id” column to be an auto-increment primary key field in the “Persons” table:

CREATE TABLE Persons

(

P_Id PRIMARY KEY AUTOINCREMENT,

LastName varchar(255) NOT NULL,

FirstName varchar(255),

Address varchar(255),

City varchar(255)

)

===Syntax for Oracle===

In Oracle the code is a little bit more tricky.

You will have to create an auto-increment field with the sequence object (this object generates a number sequence).

Use the following CREATE SEQUENCE syntax:

CREATE SEQUENCE seq_person

MINVALUE 1

START WITH 1

INCREMENT BY 1

CACHE 10

##############################

Put/INSERT entries from table:

##############################

2 Forms/ways/methods to do this:

### Form 1: ###

INSERT INTO table_name

VALUES (value1, value2, value3,…)

—Example:—

INSERT INTO Persons

VALUES (4, 'Nilsen', 'Johan', 'Bakken 2', 'Stavanger')

### Form 2: ###

INSERT INTO table_name (column1, column2, column3,…)

VALUES (value1, value2, value3,…)

—Example:—

INSERT INTO Persons (P_Id, LastName, FirstName)

VALUES (5, 'Tjessem', 'Jakob')

################

Change an entry:

################

UPDATE table_name

SET column1=value, column2=value2,…

WHERE some_column=some_value

—example:—

UPDATE Persons

SET Address='Nissestien 67', City='Sandnes'
 
WHERE LastName='Tjessem' AND FirstName='Jakob'

###################################

Deleting/DELETE entries from table:

###################################

DELETE FROM table_name

WHERE some_column=some_value

—example:—

DELETE from table

WHERE firstname='koss'

#######

SELECT:

#######

Used to display results

SELECT column FROM table

Can select 1 or more columns; if more than one, separate them with commas. For all columns, use *.

SELECT column1,column2 FROM table

This tutorial has columns and tables in quotes, like below, but I'm not using them, because it tends to work without quotes as well (double quotes here, by the way; when you set values to a string, use single quotes):
 
SELECT "column1","column2" FROM "table"

To Select all

SELECT * FROM table

################

SELECT DISTINCT:

################

To select Unique values:

SELECT DISTINCT column FROM table

######

WHERE:

######

Can put WHERE to select a subset; it's like a filter

SELECT Column FROM table WHERE condition

NOTE: with WHERE you can have AND/OR

SELECT * FROM Persons

WHERE FirstName='Tove'
 
OR FirstName='Ola'
 
SELECT * FROM Persons
 
WHERE FirstName='Tove'
 
AND LastName='Svendson'

######

ORDER:

######

If you want to order the above,

add ORDER BY column [ASC|DESC] to the end

SELECT Column FROM table WHERE condition ORDER BY column [ASC|DESC]

Or vertically, with line feeds:

SELECT “Column_Name”

FROM “Table_Name”

WHERE “Condition”

ORDER BY “Column_Name” [ASC|DESC]

ASC = Ascending: grows, starts from low and goes up (1 to 1000, a to z)
 
DESC = Descending: shrinks, starts from the highest and goes down (1000 to 1, z to a)

#########

GROUP BY:

#########

SELECT column1, FUNCTION(column2) FROM table GROUP BY column1

or represented like this

SELECT column1,

  FUNCTION(column2)

FROM table

GROUP BY column1

First, a word about the functions: SUM, AVG, MIN, MAX, COUNT; each operates on a column

—Example:—
 
Table columns: Date, Store, Sales_Amount
 
Highest sales entry:
 
SELECT MAX(Sales_amount)
 
Highest sales per store:
 
SELECT store, MAX(Sales_amount)
 
That alone is not enough;
 
we need GROUP BY.
 
We want to tell SQL to group the same store names together, thus giving us the info for each store:
 
SELECT c1, FUNCTION(c2) FROM t1 GROUP BY c1

#######

HAVING:

#######

Goes together with GROUP BY quite well.
 
You can also filter based on the result of a function, not just the entries. So instead of WHERE, which looks at the entries, we use HAVING (a condition based on a function), which filters based on the result of the function:
 
SELECT c1, FUNCTION(c2) FROM t1 GROUP BY c1 HAVING(CONDITION based on FUNCTION)

—-

SELECT c1,

 FUNCTION(c2)

FROM t1

GROUP BY c1

HAVING(CONDITION based on FUNCTION)

HAVING needs to go after the GROUP BY, or else we would use WHERE.
 
Or in other words:
 
any filtering based on the result of a GROUP BY needs to be calculated after the grouping happens; that's why
 
HAVING goes after GROUP BY.

HAVING SUM(sales)>1500

### Example: ###

—Table Store_Information:—

store_name,     Sales,     Date

Los Angeles,    $1500,     Jan-05-1999

San Diego, $250, Jan-07-1999

Los Angeles,    $300, Jan-08-1999

Boston,    $700, Jan-08-1999

—HAVING example:—

SELECT store_name, SUM(sales)

FROM Store_Information

GROUP BY store_name

HAVING SUM(sales) > 1500

—Result:—-

store_name,          SUM(Sales)

Los Angeles,         $1800

Read more: http://www.1keydata.com/sql/sqlhaving.html

It would group all the stores, add up each store's sales, and then keep only the stores whose total is above 1500.

##############################

Example with WHERE and HAVING:

##############################

We have the following “Orders” table:

O_Id, OrderDate, OrderPrice, Customer

1, 2008/11/12, 1000, Hansen

2, 2008/10/23, 1600, Nilsen

3, 2008/09/02, 700, Hansen

4, 2008/09/03, 300, Hansen

5, 2008/08/30, 2000, Jensen

6, 2008/10/04, 100, Nilsen

Now we want to find if any of the customers have a total order of less than 2000.

We use the following SQL statement:

SELECT Customer,SUM(OrderPrice) FROM Orders

GROUP BY Customer

HAVING SUM(OrderPrice)<2000

The result-set will look like this:

Customer, SUM(OrderPrice)

Nilsen, 1700

Now we want to find if the customers “Hansen” or “Jensen” have a total order of more than 1500.

We add an ordinary WHERE clause to the SQL statement:

SELECT Customer,SUM(OrderPrice) FROM Orders

WHERE Customer='Hansen' OR Customer='Jensen'

GROUP BY Customer

HAVING SUM(OrderPrice)>1500

The result-set will look like this:

Customer, SUM(OrderPrice)

Hansen, 2000

Jensen, 2000

######################

Order of the Commands:

######################

SELECT … FROM

WHERE

GROUP BY

HAVING

ORDER BY

#######################

ALTER: Changing Tables:

#######################

### The ALTER TABLE Statement ###

The ALTER TABLE statement is used to add, delete, or modify columns in an existing table.

===SQL ALTER TABLE Syntax===

To add a column in a table, use the following syntax:

ALTER TABLE table_name

ADD column_name datatype

To delete a column in a table, use the following syntax (notice that some database systems don’t allow deleting a column):

ALTER TABLE table_name

DROP COLUMN column_name

To change the data type of a column in a table, use the following syntax:

—SQL Server / MS Access:—

ALTER TABLE table_name

ALTER COLUMN column_name datatype

—-My SQL / Oracle:—

ALTER TABLE table_name

MODIFY column_name datatype

### SQL ALTER TABLE Example ###

Look at the “Persons” table:

P_Id, LastName, FirstName, Address, City

1, Hansen, Ola, Timoteivn 10, Sandnes

2, Svendson, Tove, Borgvn 23, Sandnes

3, Pettersen, Kari, Storgt 20, Stavanger

Now we want to add a column named “DateOfBirth” in the “Persons” table.

We use the following SQL statement:

ALTER TABLE Persons

ADD DateOfBirth date

Notice that the new column, “DateOfBirth”, is of type date and is going to hold a date. The data type specifies what type of data the column can hold. For a complete reference of all the data types available in MS Access, MySQL, and SQL Server, go to our complete Data Types reference.

The “Persons” table will now look like this:

P_Id, LastName, FirstName, Address, City, DateOfBirth

1, Hansen, Ola, Timoteivn 10, Sandnes

2, Svendson, Tove, Borgvn 23, Sandnes

3, Pettersen, Kari, Storgt 20, Stavanger

### Change Data Type Example ###

Now we want to change the data type of the column named “DateOfBirth” in the “Persons” table.

We use the following SQL statement:

ALTER TABLE Persons

ALTER COLUMN DateOfBirth year

Notice that the “DateOfBirth” column is now of type year and is going to hold a year in a two-digit or four-digit format.

### DROP COLUMN Example ###

Next, we want to delete the column named “DateOfBirth” in the “Persons” table.

We use the following SQL statement:

ALTER TABLE Persons

DROP COLUMN DateOfBirth

The “Persons” table will now look like this:

P_Id, LastName, FirstName, Address, City

1, Hansen, Ola, Timoteivn 10, Sandnes

2, Svendson, Tove, Borgvn 23, Sandnes

3, Pettersen, Kari, Storgt 20, Stavanger

CONSTRAINTS and ALTER

#####################

### ALTER with NOT NULL ###

N/A

### ALTER with UNIQUE ###

===MySQL / SQL Server / Oracle / MS Access:===

To create a UNIQUE constraint on the “P_Id” column when the table is already created, use the following SQL:

ALTER TABLE Persons

ADD UNIQUE (P_Id)

===MySQL / SQL Server / Oracle / MS Access:===

To allow naming of a UNIQUE constraint, and for defining a UNIQUE constraint on multiple columns, use the following SQL syntax:

ALTER TABLE Persons

ADD CONSTRAINT uc_PersonID UNIQUE (P_Id,LastName)

### ALTER with PRIMARY KEY ###

===MySQL / SQL Server / Oracle / MS Access:===

To create a PRIMARY KEY constraint on the “P_Id” column when the table is already created, use the following SQL:

ALTER TABLE Persons

ADD PRIMARY KEY (P_Id)

===MySQL / SQL Server / Oracle / MS Access:===

To allow naming of a PRIMARY KEY constraint, and for defining a PRIMARY KEY constraint on multiple columns, use the following SQL syntax:

ALTER TABLE Persons

ADD CONSTRAINT pk_PersonID PRIMARY KEY (P_Id,LastName)

Note: If you use the ALTER TABLE statement to add a primary key, the primary key column(s) must already have been declared to not contain NULL values (when the table was first created).

### ALTER with FOREIGN KEY ###

===MySQL / SQL Server / Oracle / MS Access:===

To create a FOREIGN KEY constraint on the “P_Id” column when the “Orders” table is already created, use the following SQL:

ALTER TABLE Orders

ADD FOREIGN KEY (P_Id)

REFERENCES Persons(P_Id)

===MySQL / SQL Server / Oracle / MS Access:===

To allow naming of a FOREIGN KEY constraint, and for defining a FOREIGN KEY constraint on multiple columns, use the following SQL syntax:

ALTER TABLE Orders

ADD CONSTRAINT fk_PerOrders

FOREIGN KEY (P_Id)

REFERENCES Persons(P_Id)

### ALTER with CHECK ###

===MySQL / SQL Server / Oracle / MS Access:===

To create a CHECK constraint on the “P_Id” column when the table is already created, use the following SQL:

ALTER TABLE Persons

ADD CHECK (P_Id>0)

===MySQL / SQL Server / Oracle / MS Access:===

To allow naming of a CHECK constraint, and for defining a CHECK constraint on multiple columns, use the following SQL syntax:

ALTER TABLE Persons

ADD CONSTRAINT chk_Person CHECK (P_Id>0 AND City='Sandnes')

### ALTER with DEFAULT ###

To create a DEFAULT constraint on the “City” column when the table is already created, use the following SQL:

===MySQL:===

ALTER TABLE Persons

ALTER City SET DEFAULT 'SANDNES'

===SQL Server / MS Access:===

ALTER TABLE Persons

ALTER COLUMN City SET DEFAULT 'SANDNES'

===Oracle:===

ALTER TABLE Persons

MODIFY City DEFAULT 'SANDNES'

##################################

DROP: Removing things from tables:

##################################

Indexes, tables, and databases can easily be deleted/removed with the DROP statement.

===The DROP INDEX Statement===

The DROP INDEX statement is used to delete an index in a table.

—DROP INDEX Syntax for MS Access:—

DROP INDEX index_name ON table_name

—DROP INDEX Syntax for MS SQL Server:—

DROP INDEX table_name.index_name

—DROP INDEX Syntax for DB2/Oracle:—

DROP INDEX index_name

—DROP INDEX Syntax for MySQL:—

ALTER TABLE table_name DROP INDEX index_name

===The DROP TABLE Statement===

The DROP TABLE statement is used to delete a table.

DROP TABLE table_name

===The DROP DATABASE Statement===

The DROP DATABASE statement is used to delete a database.

DROP DATABASE database_name

===The TRUNCATE TABLE Statement===

What if we only want to delete the data inside the table, and not the table itself?

Then, use the TRUNCATE TABLE statement:

TRUNCATE TABLE table_name

#### DROP with NOT NULL ###

NA

#### DROP with UNIQUE ###

To drop a UNIQUE constraint, use the following SQL:

===MySQL:===

ALTER TABLE Persons

DROP INDEX uc_PersonID

===SQL Server / Oracle / MS Access:===

ALTER TABLE Persons

DROP CONSTRAINT uc_PersonID

#### DROP with PRIMARY KEY ###

To drop a PRIMARY KEY constraint, use the following SQL:

===MySQL:===

ALTER TABLE Persons

DROP PRIMARY KEY

===SQL Server / Oracle / MS Access:===

ALTER TABLE Persons

DROP CONSTRAINT pk_PersonID

#### DROP with FOREIGN KEY ###

To drop a FOREIGN KEY constraint, use the following SQL:

===MySQL:===

ALTER TABLE Orders

DROP FOREIGN KEY fk_PerOrders

===SQL Server / Oracle / MS Access:===

ALTER TABLE Orders

DROP CONSTRAINT fk_PerOrders

#### DROP with CHECK ###

< NOTE WHEN I MADE THIS THE MySQL was at the bottom and the SQL SERVER/ORACLE/MS ACCESS were at the top, I moved the sections around to be consistent with the other DROP CONSTRAINTS >

To drop a CHECK constraint, use the following SQL:

===MySQL:===

ALTER TABLE Persons

DROP CHECK chk_Person

===SQL Server / Oracle / MS Access:===

ALTER TABLE Persons

DROP CONSTRAINT chk_Person

#### DROP with DEFAULT ###

To drop a DEFAULT constraint, use the following SQL:

===MySQL:===

ALTER TABLE Persons

ALTER City DROP DEFAULT

===SQL Server / Oracle / MS Access:===

ALTER TABLE Persons

ALTER COLUMN City DROP DEFAULT

#############################

INDEXES: Speeding up Queries:

#############################

The CREATE INDEX statement is used to create indexes in tables.

Indexes allow the database application to find data quickly, without reading the whole table.

### Indexes ###

An index can be created in a table to find data more quickly and efficiently.

The users cannot see the indexes, they are just used to speed up searches/queries.

Note: Updating a table with indexes takes more time than updating a table without (because the indexes also need an update). So you should only create indexes on columns (and tables) that will be frequently searched against.

===SQL CREATE INDEX Syntax:===

Creates an index on a table. Duplicate values are allowed:

CREATE INDEX index_name

ON table_name (column_name)

===SQL CREATE UNIQUE INDEX Syntax:===

Creates a unique index on a table. Duplicate values are not allowed:

CREATE UNIQUE INDEX index_name

ON table_name (column_name)

Note: The syntax for creating indexes varies amongst different databases. Therefore: Check the syntax for creating indexes in your database.

### CREATE INDEX Example ###

The SQL statement below creates an index named “PIndex” on the “LastName” column in the “Persons” table:

CREATE INDEX PIndex

ON Persons (LastName)

If you want to create an index on a combination of columns, you can list the column names within the parentheses, separated by commas:

CREATE INDEX PIndex

ON Persons (LastName, FirstName)

##########

FUNCTIONS:

##########

SQL has many built-in functions for performing calculations on data.

### SQL Aggregate Functions ###

SQL aggregate functions return a single value, calculated from values in a column.

Useful aggregate functions:

AVG() – Returns the average value

COUNT() – Returns the number of rows

FIRST() – Returns the first value

LAST() – Returns the last value

MAX() – Returns the largest value

MIN() – Returns the smallest value

SUM() – Returns the sum

###SQL Scalar functions ###

They change the way a value looks, rather than aggregating rows

SQL scalar functions return a single value, based on the input value.

Useful scalar functions:

UCASE() – Converts a field to upper case

LCASE() – Converts a field to lower case

MID() – Extract characters from a text field

LEN() – Returns the length of a text field

ROUND() – Rounds a numeric field to the number of decimals specified

NOW() – Returns the current system date and time

FORMAT() – Formats how a field is to be displayed

Tip: The aggregate functions and the scalar functions will be explained in detail in the next chapters.
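Before then, for a quick taste, a rough example using the “Persons” table from earlier (note: LEN() is called LENGTH() in MySQL, so adjust per database):

SELECT MID(City,1,4) AS SmallCity, LEN(City) AS LengthOfCity FROM Persons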

EXAMPLES OF SOME OF THE FUNCTIONS

#################################

### The AVG() Function ###

The AVG() function returns the average value of a numeric column.

SQL AVG() Syntax

SELECT AVG(column_name) FROM table_name

===SQL AVG() Example===

We have the following “Orders” table:

O_Id OrderDate  OrderPrice Customer

1    2008/11/12 1000 Hansen

2    2008/10/23 1600 Nilsen

3    2008/09/02 700  Hansen

4    2008/09/03 300  Hansen

5    2008/08/30 2000 Jensen

6    2008/10/04 100  Nilsen

Now we want to find the average value of the “OrderPrice” fields.

We use the following SQL statement:

SELECT AVG(OrderPrice) AS OrderAverage FROM Orders

The result-set will look like this:

OrderAverage

950

Now we want to find the customers that have an OrderPrice value higher than the average OrderPrice value.

We use the following SQL statement:

SELECT Customer FROM Orders

WHERE OrderPrice>(SELECT AVG(OrderPrice) FROM Orders)

The result-set will look like this:

Customer

Hansen

Nilsen

Jensen

### SQL COUNT() Function ###

===SQL COUNT(column_name) Syntax===

The COUNT(column_name) function returns the number of values (NULL values will not be counted) of the specified column:

SELECT COUNT(column_name) FROM table_name

===SQL COUNT(*) Syntax===

The COUNT(*) function returns the number of records in a table:

SELECT COUNT(*) FROM table_name

SQL COUNT(DISTINCT column_name) Syntax

The COUNT(DISTINCT column_name) function returns the number of distinct values of the specified column:

SELECT COUNT(DISTINCT column_name) FROM table_name

Note: COUNT(DISTINCT) works with ORACLE and Microsoft SQL Server, but not with Microsoft Access.

SQL COUNT(column_name) Example

We have the following “Orders” table:

O_Id OrderDate  OrderPrice Customer

1    2008/11/12 1000 Hansen

2    2008/10/23 1600 Nilsen

3    2008/09/02 700  Hansen

4    2008/09/03 300  Hansen

5    2008/08/30 2000 Jensen

6    2008/10/04 100  Nilsen

Now we want to count the number of orders from “Customer Nilsen”.

We use the following SQL statement:

SELECT COUNT(Customer) AS CustomerNilsen FROM Orders

WHERE Customer='Nilsen'

The result of the SQL statement above will be 2, because the customer Nilsen has made 2 orders in total:

CustomerNilsen

2

===SQL COUNT(*) Example===

If we omit the WHERE clause, like this:

SELECT COUNT(*) AS NumberOfOrders FROM Orders

The result-set will look like this:

NumberOfOrders

6

which is the total number of rows in the table.

SQL COUNT(DISTINCT column_name) Example

Now we want to count the number of unique customers in the “Orders” table.

We use the following SQL statement:

SELECT COUNT(DISTINCT Customer) AS NumberOfCustomers FROM Orders

The result-set will look like this:

NumberOfCustomers

3

which is the number of unique customers (Hansen, Nilsen, and Jensen) in the “Orders” table.

### The UCASE() Function ###

The UCASE() function converts the value of a field to uppercase.

SQL UCASE() Syntax

SELECT UCASE(column_name) FROM table_name

===Syntax for SQL Server===

SELECT UPPER(column_name) FROM table_name

===SQL UCASE() Example===

We have the following “Persons” table:

P_Id LastName   FirstName  Address    City

1    Hansen     Ola  Timoteivn 10    Sandnes

2    Svendson   Tove Borgvn 23  Sandnes

3    Pettersen  Kari Storgt 20  Stavanger

Now we want to select the content of the “LastName” and “FirstName” columns above, and convert the “LastName” column to uppercase.

We use the following SELECT statement:

SELECT UCASE(LastName) as LastName,FirstName FROM Persons

The result-set will look like this:

LastName   FirstName

HANSEN     Ola

SVENDSON   Tove

PETTERSEN  Kari

### The ROUND() Function ###

The ROUND() function is used to round a numeric field to the number of decimals specified.

===SQL ROUND() Syntax===

SELECT ROUND(column_name,decimals) FROM table_name

Parameter  Description

column_name     Required. The field to round.

decimals   Required. Specifies the number of decimals to be returned.

SQL ROUND() Example

We have the following “Products” table:

Prod_Id    ProductName     Unit UnitPrice

1    Jarlsberg  1000 g     10.45

2    Mascarpone 1000 g     32.56

3    Gorgonzola 1000 g     15.67

Now we want to display the product name and the price rounded to the nearest integer.

We use the following SELECT statement:

SELECT ProductName, ROUND(UnitPrice,0) as UnitPrice FROM Products

The result-set will look like this:

ProductName     UnitPrice

Jarlsberg  10

Mascarpone 33

Gorgonzola 16

### The NOW() Function ###

The NOW() function returns the current system date and time.

===SQL NOW() Syntax===

SELECT NOW() FROM table_name

===SQL NOW() Example===

We have the following “Products” table:

Prod_Id    ProductName     Unit UnitPrice

1    Jarlsberg  1000 g     10.45

2    Mascarpone 1000 g     32.56

3    Gorgonzola 1000 g     15.67

Now we want to display the products and prices per today’s date.

We use the following SELECT statement:

SELECT ProductName, UnitPrice, Now() as PerDate FROM Products

The result-set will look like this:

ProductName     UnitPrice  PerDate

Jarlsberg  10.45 10/7/2008 11:25:02 AM

Mascarpone 32.56 10/7/2008 11:25:02 AM

Gorgonzola 15.67 10/7/2008 11:25:02 AM

### The FORMAT() Function ###

The FORMAT() function is used to format how a field is to be displayed.

===SQL FORMAT() Syntax===

SELECT FORMAT(column_name,format) FROM table_name

Parameter  Description

column_name     Required. The field to be formatted.

format     Required. Specifies the format.

===SQL FORMAT() Example===

We have the following “Products” table:

Prod_Id    ProductName     Unit UnitPrice

1    Jarlsberg  1000 g     10.45

2    Mascarpone 1000 g     32.56

3    Gorgonzola 1000 g     15.67

Now we want to display the products and prices per today’s date (with today’s date displayed in the following format “YYYY-MM-DD”).

We use the following SELECT statement:

SELECT ProductName, UnitPrice, FORMAT(Now(),'YYYY-MM-DD') as PerDate

FROM Products

The result-set will look like this:

ProductName     UnitPrice  PerDate

Jarlsberg  10.45 2008-10-07

Mascarpone 32.56 2008-10-07

Gorgonzola 15.67 2008-10-07

###################################

SQL Quick Reference From W3Schools:

###################################

### AND / OR ###

SELECT column_name(s)

FROM table_name

WHERE condition

AND|OR condition

### ALTER TABLE ###

ALTER TABLE table_name

ADD column_name datatype

 

or

 

ALTER TABLE table_name

DROP COLUMN column_name

### AS (alias) ###

SELECT column_name AS column_alias

FROM table_name

 

or

 

SELECT column_name

FROM table_name  AS table_alias

### BETWEEN ###

SELECT column_name(s)

FROM table_name

WHERE column_name

BETWEEN value1 AND value2

### CREATE DATABASE ###

CREATE DATABASE database_name

### CREATE TABLE ###

 

CREATE TABLE table_name

(

column_name1 data_type,

column_name2 data_type,

column_name3 data_type

)
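For instance, the “Persons” table used throughout this reference could be created like this (the column types are illustrative, not from the original):

CREATE TABLE Persons

(

P_Id int,

LastName varchar(255),

FirstName varchar(255),

Address varchar(255),

City varchar(255)

)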

### CREATE INDEX ###

CREATE INDEX index_name

ON table_name (column_name)

or

 

CREATE UNIQUE INDEX index_name

ON table_name (column_name)

### CREATE VIEW ###

CREATE VIEW view_name AS

SELECT column_name(s)

FROM table_name

WHERE condition

### DELETE ###

      DELETE FROM table_name

WHERE some_column=some_value

 

or

 

DELETE FROM table_name

(Note: Deletes all rows in the table!!)

 

DELETE * FROM table_name

(Note: Deletes all rows in the table!!)

### DROP DATABASE ###

DROP DATABASE database_name

### DROP INDEX ###  

 

DROP INDEX table_name.index_name (SQL Server)

DROP INDEX index_name ON table_name (MS Access)

DROP INDEX index_name (DB2/Oracle)

ALTER TABLE table_name

DROP INDEX index_name (MySQL)

### DROP TABLE ###

     DROP TABLE table_name

### GROUP BY ###

      SELECT column_name, aggregate_function(column_name)

FROM table_name

WHERE column_name operator value

GROUP BY column_name

### HAVING ###

SELECT column_name, aggregate_function(column_name)

FROM table_name

WHERE column_name operator value

GROUP BY column_name

HAVING aggregate_function(column_name) operator value

### IN ###

SELECT column_name(s)

FROM table_name

WHERE column_name

IN (value1,value2,..)

### INSERT INTO ###

 

INSERT INTO table_name

VALUES (value1, value2, value3,….)

 

or

 

INSERT INTO table_name

(column1, column2, column3,…)

VALUES (value1, value2, value3,….)

### INNER JOIN ###

SELECT column_name(s)

FROM table_name1

INNER JOIN table_name2

ON table_name1.column_name=table_name2.column_name

### LEFT JOIN ###

SELECT column_name(s)

FROM table_name1

LEFT JOIN table_name2

ON table_name1.column_name=table_name2.column_name

### RIGHT JOIN ###

SELECT column_name(s)

FROM table_name1

RIGHT JOIN table_name2

ON table_name1.column_name=table_name2.column_name

### FULL JOIN ###

SELECT column_name(s)

FROM table_name1

FULL JOIN table_name2

ON table_name1.column_name=table_name2.column_name

### LIKE ###

SELECT column_name(s)

FROM table_name

WHERE column_name LIKE pattern

### ORDER BY ###

SELECT column_name(s)

FROM table_name

ORDER BY column_name [ASC|DESC]

### SELECT ###

SELECT column_name(s)

FROM table_name

### SELECT * ###

 

SELECT *

FROM table_name

### SELECT DISTINCT ###

SELECT DISTINCT column_name(s)

FROM table_name

### SELECT INTO ###

SELECT *

INTO new_table_name [IN externaldatabase]

FROM old_table_name

 

or

 

SELECT column_name(s)

INTO new_table_name [IN externaldatabase]

FROM old_table_name

### SELECT TOP ###

SELECT TOP number|percent column_name(s)

FROM table_name

### TRUNCATE TABLE ###

TRUNCATE TABLE table_name

### UNION ###

SELECT column_name(s) FROM table_name1

UNION

SELECT column_name(s) FROM table_name2

### UNION ALL ###

SELECT column_name(s) FROM table_name1

UNION ALL

SELECT column_name(s) FROM table_name2

### UPDATE ###

 

UPDATE table_name

SET column1=value, column2=value,…

WHERE some_column=some_value

### WHERE ###

      SELECT column_name(s)

FROM table_name

WHERE column_name operator value

14Jan/14

LINUX CHEAT SHEET OF DOOM

Open STD IN in VIM

command | vim -

Or:

vim <(command)

Examples:

echo "stuff" | vim -

vim <(df)

It says open file is stdin (instead of filename)

NOTE: can use -R for read only: command | vim -R -

OR: vim -R <(command)

When you save it, you get to pick a file name

:w /some/save/location/text.txt

 

VIM: Save file by appending _BACKUP (or anyword) to current name

:w %:p_BACKUP

Note: in vim %:p means current filename (and path)

To help remember: think ‘p’ like ‘path’ and ‘%’ means ‘full’ in vim, so %:p full path

 

VIM Tricks

gg  Go to Top

G Go to Bottom (Shift-g)

dgg to delete to the top

dG to delete to the bottom

 

Remove Empty Lines

command | egrep -v "^$"

 

LINUX EXIT CODES:

Exit status 0 is good (the command completed with no problem, AKA SUCCESS), anything else is an error (look the code up in the command's man page – some commands build their exit codes by OR-ing bit flags together)

command

echo "COMMANDS exit code is: $?"

Or can combine

command && echo "command worked" || echo "command didn't work, exit code was $?"

 

BASH

command1 && command2 – this runs command2 only if command1 is successful (useful for success messages or continuing to the next part of a program)

command1 || command2 – this runs command2 only if command1 failed (useful for failure messages or exiting if something goes wrong)

command1 && command2 || command3 – this will run command1, and if it fails it will run command3; if command1 is successful it will run command2. Note that command3 will also run if command2 fails.

Usually command2 and command3 are chosen so that doesn't happen, because we use “echo” commands for those, and echo always exits with status 0 (success)
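A quick demonstration of that gotcha, using true and false as stand-in commands (true always succeeds, false always fails):

# true && false || echo "the fail-branch ran even though command1 succeeded"

Here true (command1) succeeds, so false (command2) runs; false fails, so the || echo (command3) fires anyway.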

 

Replace every from-word to to-word

Command-with-output | sed "s/from-word/to-word/g"

echo "Your face is a from-word" | sed 's/from-word/to-word/g'

 

To Lower

command | tr '[:upper:]' '[:lower:]'

tr '[:upper:]' '[:lower:]' < input.txt > output.txt

y="THIS IS a TeSt"

echo "${y,,}"

dd if=input.txt of=output.txt conv=lcase

awk '{ print tolower($0) }' input.txt > output.txt

perl -pe '$_= lc($_)' input.txt > output.txt

sed -e 's/\(.*\)/\L\1/' input.txt > output.txt

 

To Upper

command | tr '[:lower:]' '[:upper:]'

tr '[:lower:]' '[:upper:]' < input.txt > output.txt

y="this Is A test"

echo "${y^^}"

dd if=input.txt of=output.txt conv=ucase

awk '{ print toupper($0) }' input.txt > output.txt

perl -pe '$_= uc($_)' input.txt > output.txt

sed -e 's/\(.*\)/\U\1/' input.txt > output.txt

 

LS SORT

ls -lisah (regular ls – easy to remember lisa with h, think simpsons)

ls -lisahSt (easy to remember think lisah street – S for sort and t for time – so sorts on time)

ls -lisahS (easy to remember think lisahs – like many lisas from simpsons – so it will sort because of the S by filesize or weight because so many lisas are around)

NOTE: I know I am ridiculous

 

iftop best way to run

iftop -nNBP -i eth0

Then hit Shift-T

hitting T – will show totals column

n – won't resolve hostnames, N – won't resolve port names, B – everything in bytes instead of bits, P – shows the ports

Columns on the right side of iftop output/interface are then:

TOTALS,2 sec, 10 sec, 40 sec speeds

 

PS and top

ps

ps -ef –forest (remember forest is 1 r and 1 s)

ps awwfux  (think ahhh F-word)

ps aux

ps ax

Last 3 are good to see every command

top -c

top -c -n1 -b OR top -cbn1 OR top -cn1b

-c shows all arguments, n1 runs top once so it's good for shell & pipes, b will not print colors, good for piping

Order doesn't matter and you can combine the args; just make sure after n there is a 1 (run once). You can put 2 but then it will run 2 times, so it's not good for shell and piping
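For example, to grab a one-shot process snapshot in a pipeline (the head count of 12 is arbitrary, just to trim the output):

# top -cbn1 | head -n 12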

 

In Depth with Head and Tail

SHOW TOP 5 LINES: head -n 5 OR head -n +5 OR head -5

SHOW BOTTOM 5 LINES: tail -n 5 OR tail -n -5 OR tail -5

OMIT TOP 5 LINES: tail -n +6 (THINK OF IT LIKE: START AT THE SIXTH LINE AND GO TO THE BOTTOM. SO IN A 7 LINE DOC YOU WILL ONLY SEE THE 6TH AND 7TH LINE – THE BOTTOM 2 LINES)

OMIT BOTTOM 5 LINES: head -n -5 (THINK OF IT LIKE, I WILL ERASE THE BOTTOM 5 LINES. SO IN A 7 LINE DOC YOU WILL ONLY SEE THE TOP 2 LINES)
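YOU CAN VERIFY THESE ON A FAKE 7 LINE DOC USING seq (seq 7 prints the numbers 1 to 7, one per line):

# seq 7 | tail -n +6   (PRINTS 6 AND 7)

# seq 7 | head -n -5   (PRINTS 1 AND 2)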

 

Print Columns

diff -y file1 file2

pr -m -tT file1 file2

pr  -W200 -m -tT -help file1 file2

pr  -W200 -m -tT -help <(df) <(df -h)

Note W and w do the same thing, page width (I'm sure there is a difference, but not one we notice); if using both W and w then the last one set (right-most) is the one that will be read

 

Asking for file to read, but want commandline output

command <(command1) <(command2)

diff -y <(df -h) <(df)

 

BASH 3 and BASH 4 IO Redirections:

* fd0-stdin: in other words, File Descriptor 0 is stdin

* fd1-stdout: in other words, File Descriptor 1 is stdout

* fd2-stderr: in other words, File Descriptor 2 is stderr

* Makes file 0 length, clears file or makes one (like touch)

: > filename

* Same as above: Makes file 0 length, clears file or makes one (like touch)

> filename   

* Redirect stdout to file “filename”

1>filename

* Redirect and append stdout to file “filename”

1>>filename

* Redirect stderr to file “filename”

2>filename

* Redirect and append stderr to file “filename”

2>>filename

* Redirect both stdout and stderr to file “filename”

&>filename

* This operator is now functional, as of Bash 4, final release.

In BASH3:

cmd > file.txt 2> file.txt (this opens the file twice, so the two streams can clobber each other – the next form is safer)

OR:

cmd > file.txt 2>&1

* Redirect and append stderr and stdout to file “filename”

&>>filename

In Bash3 you would have to do:

cmd >>file.txt 2>&1

UNDERSTANDING LAST ONE – COMMENT 1: Redirection statements are evaluated, as always, from left to right. “>>file” means redirect STDOUT to file, with append, short for “1>>file”. “2>&1” means redirect STDERR to “where stdout goes”. NOTE: The interpretation “redirect STDERR to STDOUT” is wrong.

UNDERSTANDING LAST ONE – COMMENT 2: It says “append output (stdout, file descriptor 1) onto file.txt and send stderr (file descriptor 2) to the same place as fd1”
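A tiny sketch showing why the order matters (assuming file.txt starts out empty):

# { echo out; echo err 1>&2; } >> file.txt 2>&1     (both lines land in file.txt)

# { echo out; echo err 1>&2; } 2>&1 >> file.txt     (err prints to the terminal, only out lands in file.txt)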

 

Pipes (only redirect STDOUT of one process to STDIN of another – so STDERR goes away)

Redirect STDOUT and STDERR to the pipe (they will both turn into STDIN):

# command 2>&1 | command1

# command |& command1 (Bash 4 shorthand)

* So STDERR is not piped thru to next command. Here is an example showing how to combine them (first it shows you the problem how stderr is not piped):

* First we need to generate some output that has text in both stdout (fd1) and stderr (fd2): # { echo "stdout"; echo "stderr" 1>&2; }

Inside the braces, “1>&2” sends the second echo's output to stderr, e.g.:

# { echo "stdout123"; echo "stderr123" 1>&2; } | grep -v std

OUTPUT: stderr123

EXPLANATION: “stdout123” goes to stdout (fd1), “stderr123” goes to stderr (fd2). grep only sees “stdout123” (and we tell grep to not write anything that has “std” in it), hence “stderr123” prints to the terminal.

* To combine stdout (fd1) and stderr (fd2) you would redirect stderr (fd2) to stdout (fd1) using 2>&1.

On the other hand:

# { echo "stdout123"; echo "stderr123" 1>&2; } 2>&1 | grep -v std

OUTPUT is blank (meaning both stdout123 and stderr123 were piped to grep). stdout123 was on fd1 (stdout) and stderr123 on fd2 (stderr); they got combined (by merging/redirecting fd2 into fd1), then piped to grep, and since a pipe only carries fd1 (stdout), grep saw both.

After writing to both stdout and stderr, 2>&1 redirects stderr back to stdout and grep sees both strings on stdin, thus filters out both.

 

Command Substitution

* Use output of ls as arguments of echo:

echo $(ls -lrt)

OUTPUT: total 20 -rw-r--r-- 1 root root 11153 Jan 4 12:07 everyl3 -rw-r--r-- 1 root root 1667 Jan 4 12:33 everyl3org -rw-r--r-- 1 root root 781 Jan 4 12:37 everyl3email

* Same thing, why not simply use backticks:

# echo `ls -lrt`

OUTPUT: total 20 -rw-r--r-- 1 root root 11153 Jan 4 12:07 everyl3 -rw-r--r-- 1 root root 1667 Jan 4 12:33 everyl3org -rw-r--r-- 1 root root 781 Jan 4 12:37 everyl3email

* Or much safer – quote it so the output is a single argument and newlines survive (useful with commands other than echo):

# echo "`ls -lrt`"

OUTPUT:

total 20

-rw-r--r-- 1 root root 11153 Jan  4 12:07 everyl3

-rw-r--r-- 1 root root  1667 Jan  4 12:33 everyl3org

-rw-r--r-- 1 root root   781 Jan  4 12:37 everyl3email

* Or with escaped quotes – quotes are printed around the output:

# echo \"`ls -lrt`\"

OUTPUT: "total 20 -rw-r--r-- 1 root root 11153 Jan 4 12:07 everyl3 -rw-r--r-- 1 root root 1667 Jan 4 12:33 everyl3org -rw-r--r-- 1 root root 781 Jan 4 12:37 everyl3email"

 

FIND EXEC – run command on a found file

You can use XARGS, or you can use finds own -exec argument. The syntax is:

find [find options] -exec command1 {} \;

find . -type f -exec echo "THIS FILE: {}" \;

find . -type f -exec ls -i {} \;

Everywhere you see a {} is where the filename goes. command1 is run once per found file.

Every exec statement ends with a \;. It tells where the command ends. Since ; ends full commands in bash, we need to escape it as \;, so -exec knows its command is done (and not that the whole find command is done)

You can use whatever find options you need, put the exec at the end.

Don't combine exec with xargs (the output of the exec command would be used as the arguments of xargs, which we don't want – command outputs could be huge – and we just want the filename in xargs). If you need to use the output of one command in another then just use 2 xargs (or if you want pain you could use find . -exec comm1 {} \; -print0 | xargs -0 -I {} nextcommand {})

Final note: you don't need to use {}:

find . -type f -exec date \;

That will list the date as many times as it finds a file.

 

MULTIPLE FIND EXECS – run multiple commands on a found file

find [find options] -exec command1 {} \; -exec command2 {} \;

THIS WILL LIST THE FILE NAME AND TIME IT FOUND IT AT:

find [find options] -exec echo “FOUND THIS {} FILE AT:” \; -exec date \;

THIS WILL LIST THE FILE AND THEN DELETE IT (there is a better option with xargs that shows the result of the delete; it is in the xargs section):

find [find options] -exec echo "DELETING: {}" \; -exec rm -rf {} \;

This doesn't use the output of command1 in command2; it simply runs 2 commands, since we can't tie commands together like “-exec command1 && command2 \;”

 

XARGS

NOTE: this shows that you should test every command (use echos instead of rms or cps or mvs) before running the command that will touch data

Use output of one command as the arguments of the next command (next command after the pipe)

SIMPLE EXAMPLE:

# command1

OUTPUT:

thing1

thing2

thing3

thing4

THEN RUNNING:

# command1 | xargs command2

Is the equivalent of running these:

command2 thing1

command2 thing2

command2 thing3

command2 thing4

Xargs will run the listed command with the given arguments (the output of the previous command thats piped to xargs) as many times as the first command has lines. Each line becomes an argument for xargs.

XARGS is used with find a lot: find lists file names one by one, and then xargs can run commands on them one by one (more efficient for bulk operations – more on that later)

IF USING WITH FIND USE -print0 ON FIND and -0 ON XARGS – TREATS SPACES & SPECIAL CHARS – OR ELSE ERRORS

“-0” If there are blank spaces or special characters (including newlines) many commands will not work. This option takes care of file names with blank spaces.

“-print0” in find does the same thing -0 does in xargs

PROBLEM and ERROR IF YOU RUN LIKE THIS: find -type f -iname "*core*" | xargs ls -lisah

NO PROBLEMS:

find -type f -name "*core*" -print0 | xargs -0 ls -lisah

 

XARGS PUTS ARGUMENT AT THE END – WHAT IF NEED IT ELSE WHERE or REUSE IT

-I makes occurrences of the input arguments go where you tell them. First specify what the replace-string is; it can be more than 1 char, like here it's {}. Since we specified -I {}, everywhere you see {} is where the command arguments go instead of at the end.

EXAMPLE1:

# find . -print0 | xargs -0 -I {} echo THIS FILE {} IS EQUAL TO THIS FILE {}

EXAMPLE2:

# find . -name “*.bak” -print0 | xargs -0 -I {} mv {} ~/old.files

{} as the argument list marker

 

NOTE: SOMETIMES IT'S BETTER TO USE -I {} THAN TO LET THE ARGUMENT FALL AT THE END

DOESNT WORK – BAD:

# find . -type f -print0 | xargs -0 echo "WOW"

WOW ./everyl3 ./everyl3org ./everyl3email

WORKS – GOOD:

# find . -type f -print0 | xargs -0 -I {} echo "WOW {}"

WOW ./everyl3

WOW ./everyl3org

WOW ./everyl3email

# find . -type f -print0 | xargs -0 -I {} echo "WOW" {}

WOW ./everyl3

WOW ./everyl3org

WOW ./everyl3email

 

XARGS multiple commands:

cat a.txt | xargs -I % sh -c 'command1; command2;'

NOTE: Use as many commands as you want; we needed to call upon /bin/sh (/bin/bash would work too) to run multiple commands. The -c argument lets sh take its commands from the command line.

NOTE: You can use % for the arguments in the commands; everywhere there is a %, that's where the a.txt output will act as arguments. It's the same as the -I {} in the previous examples, just -I %. So % has the same effect {} had.

NOTE: This is a useless use of cat; xargs can take arguments from STDIN (you can also put the STDIN redirection at the front or the back of a command, it doesn't matter):

< a.txt xargs -I % sh -c 'command1; command2; ...'

NOTE: AGAIN Yes, the redirection can be at the beginning of the command.
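As a made-up example (the file names and commands are hypothetical), compressing every .log file while announcing each one:

# find . -name "*.log" -print0 | xargs -0 -I % sh -c 'echo "compressing %"; gzip "%"'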

 

 

XARGS BULK OPERATION: Remember I said it's more efficient on certain bulk operations

XARGS FIXING OVERLOADING EXAMPLE1: Avoiding errors and resource hungry problems with xargs and find combo on a mass delete

Imagine you're deleting millions of files, big and small, all sitting in the directory /sucks/

NOTE: you can use any of the methods below; my favorites are the bottom two (they delete one by one, so no memory or overloading issues) and they list each deleted file (and whether the delete failed or succeeded when it finished)

NOTE: The non-xargs methods are the tabbed ones

* Well you could delete em like this:

# rm -rf /sucks/

* However the system could freeze, and you won't know when each file gets deleted; an alternative is to use xargs with find (or just find by itself)

# find /sucks -delete

# find /sucks -exec rm -rf {} \;

* Have it tell you what file it's deleting:

# find /sucks -exec echo "DELETING: {}" \; -exec rm -rf {} \;

* OR first files then empty dirs & everything else:

# find /sucks -type f -exec rm -rf {} \;

# find /sucks -exec rm -rf {} \;

* Or do the same thing and have it tell you what file it's deleting:

# find /sucks  -type f -exec echo "DELETING FILE: {}" \; -exec rm -rf {} \;

# find /sucks -exec echo "DELETING: {}" \; -exec rm -rf {} \;

* Or with XARGS:

# find /sucks -print0 | xargs -0 rm -rf

* Or with Xargs list and delete file:

find /sucks -print0 | xargs -0 -I {} sh -c 'echo -n "DELETING {}:"; rm -rf {}'

* Or with XARGS files first, then everything else:

# find /sucks -type f -print0 | xargs -0 rm -rf

# find /sucks -print0 | xargs -0 rm -rf

* Delete with XARGS, listing each file before the delete and the result of the delete – using sh -c to run multiple commands:

find /sucks -print0 | xargs -0 -I {} sh -c 'echo -n "DELETING {}:"; rm -rf {} && echo " SUCCESS" || echo " FAILED WITH EXIT CODE $?"'

* Same, but delete files first and then everything else:

find /sucks -type f -print0 | xargs -0 -I {} sh -c 'echo -n "DELETING FILE {}:"; rm -rf {} && echo " SUCCESS" || echo " FAILED WITH EXIT CODE $?"'

find /sucks -print0 | xargs -0 -I {} sh -c 'echo -n "DELETING {}:"; rm -rf {} && echo " SUCCESS" || echo " FAILED WITH EXIT CODE $?"'

XARGS FIXING OVERLOADING EXAMPLE2: Avoiding errors and resource hungry problems with xargs and find combo on a mass copy

To copy all media files to another location called /backup/iscsi, you can use cp as follows:

# cp -r -v -p /share/media/mp3/ /backup/iscsi/mp3

However, the cp command may fail if an error occurs, such as when the number of files is too large for cp to handle. xargs in combination with find handles such an operation nicely; xargs is more resource efficient and will not halt with an error:

# find /share/media/mp3/ -type f -name "*.mp3" -print0 | xargs -0 -r -I file cp -v -p file --target-directory=/backup/iscsi/mp3

 

Duplicates

Show Duplicates (output will be sorted):

command | sort | uniq -d

Remove Duplicates (output will be sorted):

command | sort | uniq

Remove Duplicates (not sorted – order preserved):

command | awk '!x[$0]++'

This command is simply telling awk which lines to print. The variable $0 holds the entire contents of a line and square brackets are array access. So, for each line of the file, the node of the array named x is incremented and the line printed if the content of that node was not (!) previously set.
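A quick demonstration with made-up input:

# printf "b\na\nb\nc\na\n" | awk '!x[$0]++'

OUTPUT:

b

a

c

(duplicates dropped, original order kept)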

 

While adding drives watch dmesg and cat /proc/partitions

If you have mdev and no watch command (those go hand in hand):

while true; do echo "-----$(date)----"; mdev -s; (dmesg | tail -n 10; cat /proc/partitions | egrep -v "[0123456789]$"; ) | egrep '^[^$#]'; sleep 3; done;

If you have mdev and you do have watch:

while true; do echo "-----$(date)----"; (dmesg | tail -n 10; cat /proc/partitions | egrep -v "[0123456789]$"; ) | egrep '^[^$#]'; sleep 3; done;

If you have the better udev (basically an automatic mdev that doesn't need to be called) and you have watch:

while true; do echo "-----$(date)----"; (dmesg | tail -n 10; cat /proc/partitions | egrep -v "[0123456789]$"; ) | egrep '^[^$#]'; sleep 3; done;

If you have udev and no watch:

while true; do echo "-----$(date)----"; (dmesg | tail -n 10; cat /proc/partitions | egrep -v "[0123456789]$"; ) | egrep '^[^$#]'; sleep 3; done;

 

How to Pipe STDERR and not STDOUT

Note typically its STDOUT which gets piped.

So we tell STDOUT to go to hell (/dev/null), and tell STDERR to go where STDOUT usually goes

command1 2>&1 > /dev/null | command2

Now command2 will receive command1's stderr

Normal operations are like this:

command1 | command2

Here command2 receives only command1's stdout (not its stderr)

 

Bash Time Diff Script

date11=$(date +”%s”)

<Do something>

date22=$(date +”%s”)

diff11=$(($date22-$date11))

echo “$(($diff11 / 60)) minutes and $(($diff11 % 60)) seconds elapsed.”

 

 

ifstat

Interface bandwidth, continuous scrolling output

(STEP 0) INSTALL

apt-get install ifstat

(STEP 1) COMMAND LINE

ifstat -bat

-b bytes instead of bits

-a every interface

-t time stamps

 

BYOBU (some will work in screen)

CONTROL-a %  (in other words CONTROL-a SHIFT-5) – splits current region vertically and makes a new bash session in it and puts you into the new region

CONTROL-a |  (in other words CONTROL-a SHIFT-\ ) – splits horizontally and makes a new bash session in it and puts you into the new region (in old version this splits | )

CONTROL-a S  (in old version this splits —————)

CONTROL-a TAB  move to next region

CONTROL-a "  select session with arrow keys and enter goes there

CONTROL-a X   closes sessions

CONTROL-a c  creates a new session (new tab) (note a tab can have multiple sessions split in either way)

CONTROL-a A   that's a capital a (in other words CONTROL-a SHIFT-a) – name a tab

CONTROL-a ?  to see help, ENTER TO QUIT out of help

CONTROL-a {  (control -a shift-[) moves current window to another window

CONTROL-a [  copy mode move around with keys scroll essentially this way <– – ENTER without CONTROL a to get out of copy mode — need to repeat full key command to do again

CONTROL-a ]  copy mode move around with keys scroll essentially this way –>- ENTER without CONTROL a to get out of copy mode — need to repeat full key command to do again

 

SCREEN (some will work in BYOBU)

CONTROL-A then c – create new shell

CONTROL-A then S – split window with horizontal line —

CONTROL-A then | – split window with vertical line |

CONTROL-A then TAB – move over to next window

CONTROL-A then " – see all windows

CONTROL-A then [ – copy mode (up down left right, page up page down move around in window)

CONTROL-A then X – close window

CONTROL-A then D – detach (processes still are running)

 

DNS with dig and nslookup

Find this                         with DIG                                   with NSLookup

Generic Syntax                    dig @server type name                      nslookup -q=type name server

A records for somehost            dig somehost.example.com                   nslookup somehost.example.com

MX records for somehost           dig mx somehost.example.com                nslookup -q=mx somehost.example.com

SOA records for somehost          dig soa somehost.example.com               nslookup -q=soa somehost.example.com

PTR records for a.b.c.d           dig -x a.b.c.d                             nslookup a.b.c.d

Any records for somehost          dig any somehost.example.com               nslookup -q=any somehost.example.com

Same from server2                 dig @server2 any somehost.example.com      nslookup -q=any somehost.example.com server2

 

WPA and WPA2 quick connect

SHORT WAY:

wpa_supplicant -B -i [int] -c <(wpa_passphrase [essid] [passphrase])

LONG WAY:

wpa_passphrase [ssidname] [ssidpassword] > /etc/wpa_supplicant/wpa_supplicant.conf

TO CONNECT:

wpa_supplicant -B -i [int] -c /etc/wpa_supplicant/wpa_supplicant.conf

 

Capture Network Traffic At Box1 And Send to Box2 (Current Shell is in Box1)

Box1: localhost, Box2: forge.remotehost.com, Interface name is ppp1 but typical and common is eth0

NOT COMPRESSED AT DESTINATION

tcpdump -i ppp1 -w -  |  ssh forge.remotehost.com -c arcfour,blowfish-cbc -C -p 50005 "cat - > /tmp/result.pcap"

COMPRESS TO GZIP AT DESTINATION

tcpdump -i ppp1 -w -  |  ssh forge.remotehost.com -c arcfour,blowfish-cbc -C -p 50005 "cat - | gzip > /tmp/result.pcap.gz"

NOTE: selected the fastest (crappiest) encryption with “-c arcfour,blowfish-cbc” and compression with “-C” so that ssh and gzip can keep up with the capture. In reality it will probably over-buffer.

YOU CANNOT AUTOMATE THESE tcpdump WITH & UNLESS YOU USE SSH KEYS (INSTEAD OF PASSWORDS)

NOTE: After you ungzip, you might need to strip the top line as it's a header (containing the word gzip or some other garbage)

gzip -d ppp3-to-danny.pcap.gz

tail -n +2 /tmp/ppp3-to-danny.pcap > /tmp/ppp3-to-danny1.pcap

 

PUT THIS IN START SCRIPT LIKE .bash_profile or .bashrc TO GET BEST HISTORY

shopt -s histappend

HISTFILESIZE=1000000

HISTSIZE=1000000

HISTCONTROL=ignoredups

HISTTIMEFORMAT='%F %T '

shopt -s cmdhist

PROMPT_COMMAND='history -a'

 

REMOVE ALL HISTORY

unset PROMPT_COMMAND

rm -f $HISTFILE

unset HISTFILE

 

LOOKING AT EVERY DRIVES (SD or HD) STATS

apt-get install smartmontools

for i in /dev/[sh]d[abcdefghijklmnopqrstuvwxyz]; do echo "===DRIVE: $i==="; smartctl -a $i | egrep -i "serial|model|capacity|reallocated_sec|ata error|power_on"; done;

NOTE: drives keep stats automatically, can do tests while drives running, can also slow down drive to do test, can also stop drive to do other tests – all in man page of smartctl

 

SIMPLE WHILE LOOP

while true; do COMMANDS; done

while true; do cat /proc/mdstat; usleep 1000000; done

while true; do date; cat /proc/mdstat; sleep 10; done

NOTE ON UNITS: usleep is in microseconds (1 million microseconds is 1 second), sleep is in seconds

With usleep – since it's in microseconds – “usleep 1000000” is the same as “sleep 1”

microsecond can be written as us or μs (with the Greek letter mu)

 

QUICK RESTARTABLE RSYNC SCRIPT:

#!/bin/bash

# If rsync fails its exit code is not 0 so it restarts back at the loop

# If exit code is 0 then rsync will stop the script

while [ 1 ]

do

killall rsync

rsync -av --progress --stats --human-readable /c /mnt/dest/nasbackup/

if [ $? = 0 ] ; then

echo

echo "#########################"

echo

echo "RSYNC SUCCESSFUL"

exit

else

echo

echo "#########################"

echo

echo "RSYNC FAILED, RESTARTING IN 180 SECONDS"

echo

sleep 180

fi

done

 

CAT PV and SSH to Transfer Files:

NOTE: Imagine 50505 is the SSH port instead of the regular 22 (just showing it in case you use another port) – big P for SCP, little p for SSH

NOTE: If you use SSH you need to specify the filename as it will save on the destination; with SCP that's optional. With SCP you can just tell it the folder to dump to.

scp -P 50505 source.txt username@destinationserver:/filedst/

OR CAN RENAME AS YOU SAVE: scp -P 50505 source.txt username@destinationserver:/filedst/dest.txt

cat file | ssh -p SSHPORT username@destinationserver "cat - > /filedst/file"
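The heading mentions PV but the commands above don't use it; a sketch with a progress bar (pv reads the file itself, so it knows the total size):

pv file | ssh -p SSHPORT username@destinationserver "cat - > /filedst/file"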

 

 

TAR PV and SSH to Transfer folders

* WITH COMPRESSION/DECOMPRESSION:

VIA SSH WITHOUT PV: # cd /srcfolder; tar -czf - . | ssh -p 50005 root@destination.com "tar -xzvf - -C /dstfolder"

VIA SSH WITH PV – gives progress bar: # cd /srcfolder; tar -czf - . | pv -s `du -sb . | awk '{print $1}'` | ssh -p 50005 root@destination.com "tar -xzvf - -C /dstfolder"

VIA SSH WITH PV – gives progress bar & FASTEST SSH ENCRYPTION: # cd /srcfolder; tar -czf - . | pv -s `du -sb . | awk '{print $1}'` | ssh -c arcfour,blowfish-cbc -p 50005 root@destination.com "tar -xzvf - -C /dstfolder"

NOTE: You could add -C on the ssh; however, there would be no benefit, logically speaking, since we already did all the possible compression at the tar level

NOTE: Also note that this way the destination will have the following folder structure /dstfolder

* WITHOUT COMPRESSION/DECOMPRESSION:

VIA SSH WITHOUT PV: # cd /srcfolder; tar -cf - . | ssh -p 50005 root@destination.com "tar -xvf - -C /dstfolder"

VIA SSH WITH PV – gives progress bar: # cd /srcfolder; tar -cf - . | pv -s `du -sb . | awk '{print $1}'` | ssh -p 50005 root@destination.com "tar -xvf - -C /dstfolder"

VIA SSH WITH PV – gives progress bar & FASTEST SSH ENCRYPTION: # cd /srcfolder; tar -cf - . | pv -s `du -sb . | awk '{print $1}'` | ssh -c arcfour,blowfish-cbc -p 50005 root@destination.com "tar -xvf - -C /dstfolder"

NOTE: Also note that this way the destination will have the following folder structure /dstfolder

* WITHOUT TAR COMPRESSION, WITH COMPRESSION FROM SSH (-C):

VIA SSH WITHOUT PV: # cd /srcfolder; tar -cf - . | ssh -C -p 50005 root@destination.com "tar -xvf - -C /dstfolder"

VIA SSH WITH PV – gives progress bar: # cd /srcfolder; tar -cf - . | pv -s `du -sb . | awk '{print $1}'` | ssh -C -p 50005 root@destination.com "tar -xvf - -C /dstfolder"

VIA SSH WITH PV – gives progress bar & FASTEST SSH ENCRYPTION: # cd /srcfolder; tar -cf - . | pv -s `du -sb . | awk '{print $1}'` | ssh -C -c arcfour,blowfish-cbc -p 50005 root@destination.com "tar -xvf - -C /dstfolder"

NOTE: Also note that this way the destination will have the following folder structure /dstfolder

 

CHROOTING – The mounts before

FIRST MOUNT THE newroot:

mount /dev/sda1 /newroot

HERE ARE THE SYSTEM MOUNTS:

mount -t proc none /newroot/proc

mount -o bind /dev /newroot/dev

mount -t devpts none /newroot/dev/pts

mount -t sysfs none /newroot/sys

SOMETIMES YOU WILL WANT /run:

mkdir /run

mount -t tmpfs tmpfs /run

mount --bind /run /newroot/run

CHROOT TIME (and run bash shell /bin/bash instead of the default oldshell /bin/sh):

chroot /newroot /bin/bash

WHEN YOU EXIT OUT OF CHROOT

umount /newroot/run

umount /newroot/sys

umount /newroot/dev/pts

umount /newroot/dev

umount /newroot/proc

umount /newroot

 

BTRFS MOUNTS ANALYSIS QUICK SCRIPT

Just copy paste it and run it or make it into a script:

#!/bin/bash

echo "================="

echo mounts

echo "================="

echo

mount

echo

echo "========================================"

echo "The following BTRFS volumes are mounted"

echo "========================================"

btrfs filesystem show --all-devices

echo

echo "===OR SIMPLY:==="

btrfs filesystem show --all-devices | egrep "/dev/" | awk '{print $8}'

echo

for i in $(btrfs filesystem show --all-devices | egrep "/dev/" | awk '{print $8}')

do

echo "---$i is mounted---"

echo "df | egrep $i"

df | egrep $i

echo

done

echo "=========================="

echo "btrfs filesystem df <path>"

echo "=========================="

echo

# THIS PART IS AWESOME 1 START

for i in $(btrfs filesystem show --all-devices | egrep "/dev/" | awk '{print $8}')

do

echo "===FILESYSTEM DFs FOR $i==="

echo

df | egrep $i

for z in $(df | egrep $i | awk '{print $6}')

do

echo

echo "---btrfs filesystem df $z---"

echo

btrfs filesystem df $z

echo

done

done

# THIS PART IS AWESOME 1 END

echo "=============================="

echo "btrfs subvolume list -a <path>"

echo "=============================="

# THIS PART IS AWESOME 2 START

for i in $(btrfs filesystem show --all-devices | egrep "/dev/" | awk '{print $8}')

do

echo "===SUBVOLUMES FOR $i==="

echo

df | egrep $i

for z in $(df | egrep $i | awk '{print $6}')

do

echo

echo "---btrfs subvolume list -a $z---"

echo

btrfs subvolume list -a $z

echo

done

done

# THIS PART IS AWESOME 2 END

echo "================="

 

TO SELECT FILES WITHIN DATE RANGE:

* TO SELECT A RANGE:

touch --date "2007-01-01" /tmp/start

touch --date "2008-01-01" /tmp/end

find /data/images -type f -newer /tmp/start -not -newer /tmp/end

 

SUM UP DATA SIZE BY DAY:

* FOR CURRENT FOLDER:

find . -type f -print0 | xargs -0 ls -l --time-style=long-iso | awk '{sum[$6]+= $5}END{for (s in sum){print sum[s],s;}}' | sort -k2 | column -t

* FOR CURRENT FOLDER BUT A CERTAIN DATE RANGE – established above – AND NOT INCLUDING A CERTAIN FILE:

find . -type f -newer /tmp/start -not -newer /tmp/end -not -name "Folder.cfg" -print0 | xargs -0 ls -l --time-style=long-iso | awk '{sum[$6]+= $5}END{for (s in sum){print sum[s],s;}}' | sort -k2 | column -t

 

SUM UP DATA THAT IS SELECTED:

* FOR CURRENT FOLDER:

find . -type f -ls | awk '{total += $7} END {print total}'

* FOR CURRENT FOLDER BUT A CERTAIN DATE RANGE – established above – AND NOT INCLUDING A CERTAIN FILE:

find . -type f -newer /tmp/start -not -newer /tmp/end -not -name "Folder.cfg" -ls | awk '{total += $7} END {print total}'

 

AWESOME SCRIPT TO COUNT UP FILES BY EXTENSION:

find . -type f 2>/dev/null | sed 's|^\./\([^/]*\)/|\1/|; s|/.*/|/|; s|/.*.\.| |p; d' | sort | uniq -ic

BIG VERSION:

find . -type f 2>/dev/null \

    | sed 's|^\./\([^/]*\)/|\1/|; s|/.*/|/|; s|/.*.\.| |p; d' \

    | sort | uniq -ic \

    | sort -b -k2,2 -k1,1rn \

    | awk '

BEGIN{

    sep = "+-------+------+--------+"

    print sep "\n| count | ext  | folder |\n" sep

}

{ printf("| %5d | %-4s | %-6s |\n", $1, $3, $2) }

END{ print sep }'

 

DELETING EVERYTHING IN CERTAIN FOLDER

FIRST MAKE SURE YOU'RE IN THE RIGHT FOLDER: cd /folder_which_will_have_everything_in_it_deleted

Deleting with the following command:

# rm -rf *

This might fail if you have too many files in the folder

It will say “argument list too long” or something like that

Here is an option to delete all the files

# find . -type f -exec echo -n {} \;  -exec rm -rf {} \; -exec echo " DELETED" \;

For every file it lists it, deletes it, and tells you DELETED after

To delete everything not just files

# find . -exec echo -n {} \;  -exec rm -rf {} \; -exec echo " DELETED" \;

Or maybe do it like this files first and then directories and everything else

# find . -type f -exec echo -n {} \;  -exec rm -rf {} \; -exec echo " DELETED" \;

# find . -exec echo -n {} \;  -exec rm -rf {} \; -exec echo " DELETED" \;

 

MORE INFO ON SCSI DEVICES

apt-get install lsscsi

lsscsi -sgdlp

 

MDADM RAID DEFAULT SPEED LIMIT MAX AND MIN (in case you changed them)

echo 200000 > /proc/sys/dev/raid/speed_limit_max

echo 1000 > /proc/sys/dev/raid/speed_limit_min

 

ZFS – Checking Arc Stats

Make sure you have the sunwmdb package, which will enable dynamic reading of ARC statistics:

If you have solaris with debian:

apt-get update

apt-get install sunwmdb

To check Arc:

echo "::arc" | mdb -k

 

ZFS – To Set Arc Meta Limit to bigger value:

Need to have mdb (from package sunwmdb)

8 gig: 0x200000000 = 8 GiB exactly (about 8.59 GB)

echo arc_meta_limit/Z 0x200000000 | mdb -kw

9 gig: 0x240000000 = 9 GiB exactly (about 9.66 GB)

echo arc_meta_limit/Z 0x240000000 | mdb -kw

10 gig: 0x280000000 = 10 GiB exactly (about 10.74 GB)

echo arc_meta_limit/Z 0x280000000 | mdb -kw

13.5 gig: 0x360000000 = 13.5 GiB exactly (about 14.5 GB)

echo arc_meta_limit/Z 0x360000000 | mdb -kw
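To work out the hex value for some other size, the shell can do the math; for example 12 GiB (plain bash arithmetic, nothing to do with mdb itself):

# printf '0x%X\n' $((12 * 1024 * 1024 * 1024))

OUTPUT: 0x300000000

Then pass that value to arc_meta_limit/Z as above.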

 

Tar All Logs and Send to FTP Server

TAR ALL LOGS INTO A FILE IN TMP THAT WILL HAVE DATE:

tar -zcvf /tmp/all-logs-$(date +%F-%T | tr ":" "-") /etc /var/log

FTP SYNTAX: NOTE: can use other methods to transfer not just ftp (rsync, pv, cat, tar, scp, ssh, gzip)

ncftpput -u username ftpiporhostname remotelocation localfile

Remotelocation has to be a folder location that exists (well, / always exists and that's where I will dump)

EXAMPLE:

ncftpput -u bhbmaster ftp.drivehq.com / both.tar.gz

ncftpput -u bhbmaster 66.220.9.50 / both.tar.gz

 

COPYING PARTITION TABLES BETWEEN DRIVES (sfdisk for MBR and sgdisk for GPT)

MBR – sfdisk

To backup an MBR partition table using ‘sfdisk’: # sfdisk -d /dev/sda > sda.table

​To restore an MBR partition table from backup using ‘sfdisk’: # sfdisk /dev/sda < sda.table

Clone with the backup file: # sfdisk /dev/sdb < sda.table

Clone partition from SDA to SDB (copy from SDA to SDB): # sfdisk -d /dev/sda | sfdisk /dev/sdb

Confirm by listing(printing) partition table of source: # sfdisk -l /dev/sda

Confirm by listing(printing) partition table of destination: # sfdisk -l /dev/sdb

NOTE: source and destination partition tables should match after clone (obviously)

NOTE: sfdisk -d is for dump, -l is for list

GPT – sgdisk

To backup a GPT partition table using ‘sgdisk’: # sgdisk -b sdX.gpt /dev/sdX

To restore a GPT partition table from a backup file using ‘sgdisk’: # sgdisk -l sdX.gpt /dev/sdX

To clone a partition table from one drive to another using ‘sgdisk’:# sgdisk -R=Destination Source

NOTE: note the syntax is destination first (not source), unlike the common other way where source is first. So keep that in mind and don't mess up the command

NOTE: sometimes that command doesn't go through, so try with and without the =, and consider the space (sometimes it's best not to include it)

Other likeable forms:

# sgdisk -R=/dev/sdb /dev/sda

# sgdisk -R/dev/sdb /dev/sda

​After cloning GPT tables you will need to randomize the GUID of the destination:​

# sgdisk -G /dev/sdb

Confirm by listing(printing) partition table of source: # sgdisk -p /dev/sda

Confirm by listing(printing) partition table of destination: # sgdisk -p /dev/sdb

NOTE: -R is for replicate (also known as copy or clone), -G is for GUID or randomizing GUID, -p is for print

14Jan/14

GOOD RESOURCES and LINKS

The purpose of this page is to give you some handy resources that I thought were very useful and still do. I go to these sites often, whether I'm troubleshooting, programming, or trying to learn something new.
Programming

With so many tutorials out there for programming, it's sometimes hard to find the right one for you. Well, it took a while, but eventually I found the right one for me.

The New Boston is great; it's a group of people doing tutorials on specific languages. They sound young, but don't let that fool you. Each video is just the right amount of time. They are not redundant, so everything is covered once and beautifully. I pretty much mass-download their whole YouTube channel and watch the videos.
Networking
Eli the Computer Guy on YouTube: his channel is amazing. He covers the general information. Although redundant, he has a good way of making you remember everything he ever said. Props to Eli the Computer Guy.
Netgear
For product datasheets and manuals and new firmware I prefer to go to support.netgear.com. Hit the Business button and type the name of your product in the search bar. Also, if you have a home device, click Home products and type the model of your device in the search bar.
There are a lot of great insights here, and if it wasn't for this site I wouldn't have learned all that I know about Netgear.
Another great Netgear link is simply www.netgear.com… every product is listed here and it has an accurate and useful product spec page for every device, plus related products. So if you have a module and you're looking for a good cable that Netgear might make, find the module and go to the Related Products tab.
For storage products like the READYNAS and the new READYDATA, go to www.readynas.com
For the intrusion-preventing firewalls (UTMs) and security appliances (STMs) go to www.prosecure.com
Either way they will all be listed on netgear.com and support.netgear.com
Other Programs I like and you should download
1. First of all go to ninite.com – not a lot of people know about this site, but it's a mass program “downloader”, perfect for bringing a freshly formatted system up with the newest, most useful apps out there.
2. I recommend everyone use KeePass – it's a password safe. It's great and it's safe; no one will hack it. If you want to be extra sure, make everything accessed with a certificate file or key file.
3. I also recommend the program called Fences – I'm a clean-and-organization OCD freak, so everything must be nicely arranged. All my desktop items sit in these light green opaque boxes. They are like panels for icons: if my “fence”/panel was only big enough for 2 icons and I dropped 10 icons in there, it would put a little scroll bar in it, which again sits on the desktop. Everything is beautifully arranged. Let's say I get sick of seeing all the icons: I can double-click on the desktop and all of a sudden everything disappears, no glitches; just double-click again to make everything reappear, like some programming magic or something.
4. Evernote – with so much info out there on the internet and how it's reaching out to handheld devices (or should I just say reached; we are at the brink of a new amazing era and I love being part of it) – anyway – it's just hard to remember everything – our little human brains are getting too small for all this – I bet our kids will micro-evolve to have more brain hard-drive just to sustain all the information of this new age – anyway, this program is an on-the-go, always-synced and password-protected source for your e-memory. Now all my memory isn't just held in the electrical impulses that go off in my head, but also on the electrical storage interface we call the web.
OneNote
Best app to store notes. You can put the notes anywhere, and all notes have sections they go in. I just have 1 notebook I store everything in. That notebook can be shared with my peers. I can set passwords on sections (that hold pages) so they can't see into them unless they know the password; the passwords actually encrypt the sections. Everything you see on my infotinks also exists in my OneNote.
Other Informative Websites
w3schools.com — has excellent guides on HTML, CSS, XML, JavaScript, SQL, and much more.
github.com — an excellent place to host version-controlled code.
 
Another good app – adds more options to title bar right click
14Jan/14

Good Windows Maintenance Programs

CCLEANER
########
MY ARTICLE ON THIS, URL: http://www.infotinks.com/windows—how-to-clean-it-up—the-real-way
Good crap cleaner: for files (Analyze, then run) and the registry (run, save the backup .reg file for safe keeping, then delete all finds)
PC Decrapifier
##############
MY ARTICLE ON THIS, URL: http://www.infotinks.com/windows—how-to-clean-it-up—the-real-way
http://pcdecrapifier.com/
Can uninstall multiple things at once; good for removing a lot of adware at once
Folder Size
###########
* Calculates folder sizes in the background and can display them alongside Explorer
Speccy
######
* Best computer specs; goes well with CPU-Z and GPU-Z
Defraggler
##########
A good defrag program.
I have 3 disks
Analyze to get fragmentation, then you can run Benchmark Drive to get the RANDOM READ SPEED (RRS)
PRE DEFRAG
==========
C: ssd, so it doesn't recommend defrag as the drive's life will be shortened; 18% fragmentation, RRS: 61.01 MB/s, freespace: 40.1gb (36%)
D: hdd, 30% fragmentation, freespace: 553.1 gb (57%), RRS: 1.96 MB/s, schedule: Tuesday @ 4 AM: command: (C:\Program Files\Defraggler\df64.exe “D:” /ts /user “Kostia” /appPath “C:\Program Files\Defraggler”)
E: hdd, 20% fragmentation, freespace: 943.2 gb (25%), RRS: 1.25 MB/s, schedule: First Wednesday of every Month @ 2AM: command: (C:\Program Files\Defraggler\df64.exe “E:” /ts /user “Kostia” /appPath “C:\Program Files\Defraggler”)
* Note: schedules are set up from the program, and to view or edit a schedule you must delete it from WINDOWS TASK SCHEDULER and set up a new one from the program
POST DEFRAG
===========
D: (took overnight) Defrag complete, fragmentation 0%, freespace: same 553.1 GB (57%), RRS: 2.74 MB/s, 2nd test: 2.77 MB/s
E: (took 3 days) Defrag complete, fragmentation 0%, freespace: a little less, 912 GB (because I put data on it), RRS: 1.81 MB/s