HOW I SETUP MULTINODE CLUSTER WITH 3 READYNAS
#############################################
#############################################
DATE: July 16th 2013
I made this article as an intro to hadoop, and also provided a bunch of great resources in the links section. I show how to implement a multinode cluster across 3 Debian servers (ReadyNAS units), and to make this a quick stop for lots of your questions and basic needs I included the list of all of the properties at the end in the appendix (which takes up about 5,000 lines).
Looks too lengthy?
You honestly don't have to read all 6,000 lines to get good at this; just read the first 1333 lines (they are not novel lines, they are simple short lines). Plus I jump into lots of topics that others don't, like backup and filesystem analysis.
ABOUT CITATION
###############
Many thanks to michael-noll.com for this, as without it I wouldn't have set this up as quickly... or maybe I would have, who knows. I quote a few things on here from the following 3 links:
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/
http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/
At the end of the article, in the appendix, I go into every property - that's why this article is so long:
The properties – the default values:
http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-common/yarn-default.xml
http://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml
TABLE OF CONTENTS:
#################
* ABOUT CITATION
* TABLE OF CONTENTS
* INTRO
– – PreReqs
* SETUP
– – Lets setup slaves first (will312 and rmstest2)
– – ON MASTER (knas312)
– – ABOUT THE PROPERTIES
* CONFIRM
* FORMAT THE FILESYSTEM
* NOW TIME TO LAUNCH THE SYSTEM
* NOTE ABOUT FREE SPACE
* LETS START MAPREDUCE
* RECAP OF RESPONSIBILITIES
* THE WEB INTERFACES
* USING THE SYSTEM
– – GETTING FILES OVER TO THE SYSTEM (HDFS)
– – RUNNING A JAVA MAPREDUCE PROCESS (MAPREDUCE)
– – IN PYTHON?
– – RUNNING A PYTHON MAPREDUCE PROCESS (MAPREDUCE) – SIMPLE
– – RUNNING A PYTHON MAPREDUCE PROCESS (MAPREDUCE) – IMPROVED
* HOW TO STOP HADOOP
– – MY WAY
– – AUTHORS WAY
* HOW TO BACKUP SETTINGS
* HOW TO PREPARE FOR UPGRADE
* IMPORTANT FINAL NOTES THAT I DIDN'T GET A CHANCE TO SAY YET
* THE END
* APPENDIX
* BEST PRACTICE AND GOOD EXCERPTS ON HADOOP IN USE
* GOOD LINKS
* ALL OF THE CONFIGS PRESENTED AGAIN – STRAIGHT FROM THE BACKUP SCRIPT ABOVE
Everything below line #1333 is this section on properties, which takes up about 5,000 lines.
* EVERY PROPERTY FROM THE API PAGE FOR EASY LOOK UP (VERY LONG, OOPS) - AND ITS DEFAULT VALUE
– – yarn-default.xml
– – mapred-default.xml
– – hdfs-default.xml
– – core-default.xml
– – DEPRECATED VALUES AS OF JULY 16th 2013
INTRO
#####
Hadoop implements a distributed filesystem (the filesystem on each box is combined into one big total filesystem, and it sits on top of each box's regular filesystem) and distributed computing (compute jobs are broken down into map and reduce tasks, which is how the parallelization works efficiently). Analogy: you are tasked with counting up the population of your country. Instead of sending one person (or yourself) to count each citizen individually, you send one person per city - that is a "map task" running on a "node in the cluster" - and each person counts the population of their city (the "map output"). The results from every person then come back together and get grouped and sorted (the "shuffle" step, which Hadoop handles for you between map and reduce), and finally the per-city counts are summed up for the end result (the "reduce").
We are going to make a HADOOP cluster with 3 Debian boxes. One will be both a master and a slave, and the other 2 will be slaves only.
Master meaning it holds the metadata and delegates out the parallelization tasks.
pc1 – IP: 10.1.20.54 – hostname: knas312 – this is our master, and will delegate the tasks and it also holds the metadata for all of the data
pc2 – IP: 10.1.20.64 – hostname: will312 – this is our slave, it just holds data, this is a datanode
pc3 – IP: 10.1.20.80 – hostname: rmstest2 – this is our slave, it just holds data, this is a datanode
PreReqs
=======
First make sure the following packages are installed
apt-get install openjdk-6-jdk
apt-get install openssh-server
apt-get install python-software-properties
Make sure the java version is 1.6.x; if it's not, google your way out of it ("change java version to 1.6" or "change java version to 6"). To check the java version:
java -version
If you have IPv6, just disable it:
Open /etc/sysctl.conf in the editor of your choice.
Add the following lines to the end of the file:
# disable ipv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
Of course, if it's absolutely necessary to keep IPv6, you can disable it for Hadoop only. To do that, once Hadoop is installed, edit the configuration file conf/hadoop-env.sh (which in this case will be /usr/local/hadoop/conf/hadoop-env.sh) and add the following line: export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
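If you went the sysctl route, the new settings don't take effect until they are reloaded. A quick way to apply and verify them (standard Linux tooling, nothing hadoop-specific):
sysctl -p
cat /proc/sys/net/ipv6/conf/all/disable_ipv6
The first command reloads /etc/sysctl.conf (a reboot works too); the second should print 1 once IPv6 is disabled.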
Make sure each server is on the same broadcast domain and thus all in the same subnet.
Connect each server up and make sure their link speed is 1 Gigabit. You can connect each server to a gigabit switch.
This isn't a strict requirement, it's just for speed. If you chain in other switches, make sure the links between them are at least 8 Gbit/s.
Test connectivity with iperf - "apt-get install iperf". On one pc, call it pc1, run "iperf -s", and on another run "iperf -c pc1 -t 20 -i 2".
Make sure that DNS works between the 3 nodes; each node should know the others by name. Check that the following pings work:
from knas312: ping will312; ping rmstest2;
from will312: ping knas312; ping rmstest2;
from rmstest2: ping knas312; ping will312;
You can accomplish this by setting the hostname on each server to its appropriate name; during name resolution the PCs can then find each other via the last-resort option of a broadcast on the local network (where the resolver just sends a mass broadcast to find the host).
Another way to accomplish the same task is to make entries for the other PCs in each PC's /etc/hosts file - note this is not how I did it, as DNS resolution worked.
And obviously you can also set a dns server in your network – note this is not how I did it, as DNS resolution worked.
As an example of the /etc/hosts method, this is how it could look:
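Based on the IPs and hostnames listed in the intro, each node's /etc/hosts would get entries like these (keep the existing 127.0.0.1 localhost line as-is):
10.1.20.54    knas312
10.1.20.64    will312
10.1.20.80    rmstest2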
To make life easier, I just use the root user; obviously you can make a dedicated hadoop user like hduser instead. Anyway, for this simple setup I'm using root. To switch to another user, just make sure in the end that all folders involved have read and write access for that user.
One more note to make life easier: for now, just set the password on each node to the same password.
On each pc run:
passwd root
and for the sake of this example I just set them all to “password”
Make sure that each host can connect to the others with ssh without a password. If you're prompted for a password, that can't happen. This is done with authorized keys.
Easiest way.
On Knas312 – when making the key dont set a passphrase:
ssh-keygen
ssh-copy-id root@will312
ssh-copy-id root@rmstest2
Test on knas312:
ssh root@will312
exit
ssh root@rmstest2
exit
On will312 – when making the key dont set a passphrase:
ssh-keygen
ssh-copy-id root@knas312
ssh-copy-id root@rmstest2
Test on will312:
ssh root@knas312
exit
ssh root@rmstest2
exit
On rmstest2 – when making the key dont set a passphrase:
ssh-keygen
ssh-copy-id root@knas312
ssh-copy-id root@will312
Test on rmstest2:
ssh root@knas312
exit
ssh root@will312
exit
Setting some final variables:
We need to find out what JAVA_HOME is and set it in the .bashrc file; hadoop uses java, and it knows which java install to use based on this variable.
Run "which java" to see that the java binary is at "/usr/bin/java"; JAVA_HOME is that path with the /bin/java part removed, so: /usr
Update the ~/.bashrc file and add the following lines:
# Set Hadoop-related environment variables
export HADOOP_HOME=/usr/local/hadoop
# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on)
export JAVA_HOME=/usr
# Add Hadoop bin/ directory to PATH, so we can call hadoop commands by their short name rather than the full path
export PATH=$PATH:$HADOOP_HOME/bin
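To make the new variables take effect in your current shell and sanity-check them (hadoop itself isn't extracted yet at this point, so only the variables can be checked):
source ~/.bashrc
echo $JAVA_HOME        # should print /usr
echo $HADOOP_HOME      # should print /usr/local/hadoop
$JAVA_HOME/bin/java -version   # confirms JAVA_HOME really points at a java install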
SETUP
#####
i. install steps are the same on master and slaves
ii. Point conf/hadoop-env.sh to the correct JAVA_HOME on master and slaves
iii. config steps differ between master and slaves
iv. Create the directory where the data will be stored.
Lets setup slaves first (will312 and rmstest2)
==============================================
Repeat these steps for will312 and rmstest2, so that they are done once on each.
i. install steps:
cd /usr/local/
wget http://mirror.tcpdiag.net/apache/hadoop/common/hadoop-1.1.2/hadoop-1.1.2.tar.gz
tar xzvf hadoop-1.1.2.tar.gz
ln -s hadoop-1.1.2 hadoop
ii. JAVA_HOME on conf/hadoop-env.sh: “vi /usr/local/hadoop/conf/hadoop-env.sh” to enter editor
I made sure this line is at the top:
export JAVA_HOME=/usr
iii. config, make sure each config looks exactly like this. notes on the directives later.
Slave --> conf/core-site.xml: "vi /usr/local/hadoop/conf/core-site.xml" to enter editor
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/data/hadoopdata</value>
  <description>A base for other temporary directories.</description>
</property>
<property>
  <name>fs.default.name</name>
  <value>hdfs://knas312:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>
Slave --> conf/mapred-site.xml: "vi /usr/local/hadoop/conf/mapred-site.xml" to enter editor
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
  <name>mapred.job.tracker</name>
  <value>knas312:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>
</configuration>
Slave --> conf/hdfs-site.xml: "vi /usr/local/hadoop/conf/hdfs-site.xml" to enter editor
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
  <name>dfs.replication</name>
  <value>2</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>
</configuration>
iv. make the data directory that we mention in the core-site.xml file
mkdir /data/hadoopdata
Since we are using the root user here we don't have to worry about permissions, but if I were using another user like hduser I would make sure that user had full read/write/execute permissions on this directory.
Don't forget: repeat these steps on both will312 and rmstest2, so that they are done once on each.
ON MASTER (knas312):
====================
i. install steps:
cd /usr/local/
wget http://mirror.tcpdiag.net/apache/hadoop/common/hadoop-1.1.2/hadoop-1.1.2.tar.gz
tar xzvf hadoop-1.1.2.tar.gz
ln -s hadoop-1.1.2 hadoop
ii. JAVA_HOME on conf/hadoop-env.sh: “vi /usr/local/hadoop/conf/hadoop-env.sh” to enter editor
I made sure this line is at the top:
export JAVA_HOME=/usr
iii. config, make sure each config looks exactly like this. notes on the directives later.
Knas312 --> conf/core-site.xml: "vi /usr/local/hadoop/conf/core-site.xml" to enter editor
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/Sally/hadoopdata</value>
  <description>A base for other temporary directories.</description>
</property>
<property>
  <name>fs.default.name</name>
  <value>hdfs://knas312:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>
Knas312 --> conf/mapred-site.xml: "vi /usr/local/hadoop/conf/mapred-site.xml" to enter editor
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
  <name>mapred.job.tracker</name>
  <value>knas312:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>
</configuration>
Knas312 --> conf/hdfs-site.xml: "vi /usr/local/hadoop/conf/hdfs-site.xml" to enter editor
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
  <name>dfs.replication</name>
  <value>2</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>
</configuration>
iv. make the data directory that we mention in the core-site.xml file
mkdir /Sally/hadoopdata
Since we are using the root user here we don't have to worry about permissions, but if I were using another user like hduser I would make sure that user had full read/write/execute permissions on this directory.
ABOUT THE PROPERTIES:
=====================
ABOUT THE PROPERTIES WE JUST SAW:
* hadoop.tmp.dir: this is where you pick the location to store the HDFS data; since HDFS sits atop an already prepared filesystem, I just picked a location with a lot of space available. On knas312 I put it on the btrfs volume which had 3 TB. On will312 and rmstest2 I put it on the btrfs volume which had 200 GB.
* fs.default.name: this points to the master, the one that houses the namespace, and the port that will be used by HDFS.
* mapred.job.tracker: this points to the master, the one that delegates the jobs, and the port that will be used by MapReduce.
* dfs.replication: if this is set to 1 then data is not replicated, so if a file is only stored on will312 and will312 crashes, we just lost that file. If this is set to 2, then each file is stored in 2 locations, and if one location crashes the master will delegate another location to receive a copy so that we always have 2 copies. A common production value is 3, but since we are only working with a small 3-node cluster here, 2 is fine. Also, this is just the default setting; as I understand it, it can be overridden per file (see the example commands right below this list).
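As a quick sketch of that per-file override, using the standard fs shell setrep command (the paths here refer to the books we load into /books later in this article - adjust them to whatever you actually have in HDFS):
hadoop dfs -setrep -w 3 /books/20417.txt.utf-8     # raise one file to 3 copies and wait for it
hadoop dfs -setrep -w 2 -R /books                  # set everything under /books back to 2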
MORE INTERESTING PROPERTIES - excerpt:
There are some other configuration options worth studying. The following information is taken from the Hadoop API Overview.
In file conf/mapred-site.xml:
* mapred.local.dir: Determines where temporary MapReduce data is written. It also may be a list of directories.
* mapred.map.tasks: As a rule of thumb, use 10x the number of slaves (i.e., number of TaskTrackers).
* mapred.reduce.tasks: As a rule of thumb, use num_tasktrackers * num_reduce_slots_per_tasktracker * 0.99. If num_tasktrackers is small (as in the case of this tutorial), use (num_tasktrackers – 1) * num_reduce_slots_per_tasktracker.
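To make those rules of thumb concrete for this little cluster (assuming the Hadoop 1.x default of 2 reduce slots per TaskTracker, i.e. mapred.tasktracker.reduce.tasks.maximum=2): we have 3 TaskTrackers, so mapred.map.tasks would be around 10 * 3 = 30, and since 3 TaskTrackers is small, mapred.reduce.tasks would be around (3 - 1) * 2 = 4.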
CONFIRM
#######
Just confirm that the following can happen:
1. DNS works everywhere, so every node can ping the others by hostname
2. passwordless ssh logins work between each other
FORMAT THE FILESYSTEM
#####################
Warning: Do not format a running cluster because this will erase all existing data in the HDFS filesystem!
On the master, knas312, do this:
hadoop namenode -format
Note: this is installed in /usr/local/hadoop and the hadoop binary is in /usr/local/hadoop/bin. The reason I can run it by short name is that we altered the PATH in the ~/.bashrc file. If the short name doesn't work, re-log in or just run the PATH command really quick - again, this is so you don't have to give full pathnames to the programs we use most often: "export PATH=$PATH:/usr/local/hadoop/bin"
NOW TIME TO LAUNCH THE SYSTEM
#############################
PRELAUNCH:
Before launching, look at what each system looks like before we run anything; you can run the java process status command: "jps"
It should look like this when we are not running anything:
root@Knas312:~# jps
3283 Main
24063 Jps
The numbers will be different, as those are process IDs assigned by the OS; the output on the slaves will look the same.
LAUNCH:
On master
start-dfs.sh
CONFIRM:
start-dfs.sh launches the HDFS filesystem here and across the slaves; it communicates via ssh to have the slaves launch their HDFS java processes. On the master we run the NameNode, which is the head unit of HDFS (it holds the metadata etc.), and we also launch a DataNode on the master. On the slaves we only launch DataNodes. start-dfs.sh looks at the masters and slaves files ONLY on the master unit, knas312. The NameNode runs on the machine where you run start-dfs.sh (knas312); the masters file actually controls where the SecondaryNameNode runs (more on that near the end). For slaves we listed knas312, will312, and rmstest2, so each unit got a DataNode, meaning each unit can house data, and the volume space on each of those units is summed up for the total capacity of the cluster. (When thinking of free space, here is the formula: TOTAL CAPACITY - DATA SIZE * AVERAGE REPLICATION FACTOR = FREE SPACE.)
This is how the master will look:
root@Knas312:~# jps
4646 DataNode
4767 SecondaryNameNode
3283 Main
4528 NameNode
24063 Jps
This is how the slave will look:
root@will312:/data/hadoopdata# jps
8012 Main
9049 DataNode
26042 Jps
What if they don't look like that?
Then something bad happened - look at the hadoop logs on the slave and on the master.
On a slave, you can examine the success or failure by inspecting the log file logs/hadoop-<user>-datanode-<hostname>.log (with this root-based setup, something like hadoop-root-datanode-will312.log),
or check via the web UI at http://knas312:50070
NOTE ABOUT FREE SPACE:
######################
* Capacity: This is the sum of the total space of each node. In our case that is 3 TB from knas312 plus 200 GB each from will312 and rmstest2, for a total HDFS capacity of about 3.4 TB.
* Data size: This is the space the actual unique data takes up prior to replication. Pretend we put a 40 GB file on the cluster: with a replication factor of 2 this is still 40 GB, and even if the replication factor were 100 it would still be 40 GB. Note this is not the actual disk space used, but as far as the end user is concerned it is all that is stored.
* Average replication factor: since the replication factor can vary between files (it just defaults to the number we set earlier), there is a concept of an average - otherwise this would simply say "replication factor". In our case this should be 2, since we are not changing it per file.
* DFS space used: This is the actual space used by the data and its replicas. So storing a 40 GB file with a replication factor of 2 uses 80 GB of DFS space - it literally uses 80 GB of disk, because of the replication. Of course, if the replication factor of a file is 1, the DFS space used by that file equals its data size.
* Non-DFS space used: Since we put these volumes on filesystems that already exist, those filesystems may house other files and folders that take up space. Take knas312: it is a NAS unit with a 3 TB drive and one of the mounts on the system is /Sally/, and we put our hadoop data into /Sally/hadoopdata/. But there may also be folders like /Sally/Videos and /Sally/Music that have nothing at all to do with hadoop; the files that go there still affect the final free space and, in the end, whether we can fit a file on the disk.
* Free space: This is the free space of the cluster, but instead of subtracting the virtual "data size" number, we subtract the DFS space used (plus the non-DFS space used), as that is the actual disk space consumed. A worked example follows below.
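A quick worked example using the numbers from this cluster (the non-hadoop usage figure is made up for illustration): capacity is about 3.4 TB; store a single 40 GB file at replication factor 2, so data size = 40 GB and DFS used = 40 GB * 2 = 80 GB; say the underlying volumes already hold 400 GB of videos/music, so non-DFS used = 400 GB; free space is then roughly 3.4 TB - 80 GB - 400 GB, or about 2.9 TB.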
CHECK OUT THESE COMMANDS:
hadoop dfsadmin -report
hadoop fsck /
USEFUL FOR A QUICK COMPARISON - THESE DON'T SHOW STATS PER DATANODE:
hadoop dfsadmin -report | egrep -i “capacity|DFS Remaining|Datanodes|Name|used” | head -n6
hadoop fsck / | egrep -i “size|replication”
LETS START MAPREDUCE:
#####################
Well, great - HDFS is started and we have a distributed filesystem, but what if we also want to parallelize compute work?
Run bin/start-mapred.sh on the machine you want the JobTracker to run on (the master), so in this case that's knas312.
bin/start-mapred.sh
This will bring up the MapReduce cluster with the JobTracker running on the machine you ran the previous command on, and TaskTrackers on the machines listed in the conf/slaves file.
So JPS will look like this:
This is how the master will look:
root@Knas312:~# jps
16017 Jps
14799 NameNode
15686 TaskTracker
14880 DataNode
15596 JobTracker
14977 SecondaryNameNode
This is how the slave will look – on will312 and rmstest2:
root@will312:/data/hadoopdata# jps
15183 DataNode
15897 TaskTracker
16284 Jps
* Note: I hope you are ignoring the PIDs, as I am doing a lot of copy-pasting and they might have gotten mixed up - but the process names did not, I can assure you of that.
RECAP OF RESPONSIBILITIES
########################
Jps = the java process status tool; it lists the running java processes (including itself)
NameNode = this is the head of HDFS
TaskTracker = this runs the map and reduce tasks assigned to a node
DataNode = this stores data blocks on a node
JobTracker = this is the head of MapReduce
SecondaryNameNode = despite the name, this is not a standby NameNode; it periodically merges the NameNode's fsimage and edits log (more on this later)
Main = a java process that was already running before we launched hadoop (you can see it in the pre-launch jps output above; it is not part of hadoop)
THE WEB INTERFACES
##################
These can be used to scope out the filesystem, check out running jobs etc.
http://knas312:50070/ – web UI of the NameNode daemon
http://knas312:50030/ – web UI of the JobTracker daemon
http://knas312:50060/ – web UI of the TaskTracker daemon
USING THE SYSTEM
################
I'm going to present an example of using the hadoop system to do the typical wordcount example, as it gets across how to use the filesystem portion of hadoop and also how to use the mapreduce portion (definitely not how to program it, but how to use the programs).
GETTING FILES OVER TO THE SYSTEM (HDFS)
=======================================
Before we get started check out the empty filesystem.
hadoop dfs -ls /
Also check out how this empty FS is stored on the different boxes locally:
on knas312: find /Sally/hadoopdata -type f
on will312: find /data/hadoopdata -type f
on rmstest2: find /data/hadoopdata -type f
Let's download some books to word-count: some famous free texts in UTF-8, meaning they are just simple text files.
mkdir /tmp/books
cd /tmp/books
wget http://www.gutenberg.org/ebooks/20417.txt.utf-8
wget http://www.gutenberg.org/ebooks/5000.txt.utf-8
wget http://www.gutenberg.org/ebooks/4300.txt.utf-8
wget http://www.gutenberg.org/ebooks/132.txt.utf-8
wget http://www.gutenberg.org/ebooks/1661.txt.utf-8
wget http://www.gutenberg.org/ebooks/972.txt.utf-8
wget http://www.gutenberg.org/ebooks/19699.txt.utf-8
Copying it over – Note the /books folder is automatically made in the hdfs
hadoop dfs -copyFromLocal /tmp/books /books
Check it out
hadoop dfs -ls /
hadoop dfs -ls /books
Also check out how its stored on the different boxes – note how from knas312 I remotely run a command:
on knas312 from knas312: find /Sally/hadoopdata -type f
on will312 from knas312: ssh root@will312 find /data/hadoopdata -type f
on rmstest2 from knas312: ssh root@rmstest2 find /data/hadoopdata -type f
RUNNING A JAVA MAPREDUCE PROCESS (MAPREDUCE)
============================================
Let's do the typical wordcount example, as it gets the idea across; the example code is already in place.
On my install of hadoop-1.1.2, the example jar we need to use is here: /usr/local/hadoop/hadoop-examples-1.1.2.jar
hadoop jar hadoop-examples-1.1.2.jar wordcount
Note: if you ever get this error: Exception in thread "main" java.io.IOException: Error opening job jar: hadoop-examples-1.1.2.jar - then just give the full path to the jar, like this:
hadoop jar /usr/local/hadoop/hadoop-examples-1.1.2.jar wordcount
That spits out:
Usage: wordcount <in> <out>
So we know we implemented it correctly.
The correct way to run it - note that /books-out will hold the results and is created automatically (don't worry, it will make the folder by itself):
hadoop jar /usr/local/hadoop/hadoop-examples-1.1.2.jar wordcount /books /books-out
While the job runs you will see the map % and then the reduce % increment; usually map finishes first but sometimes they overlap. You can also track the progress from the web UI. I usually time the commands; this one usually takes about 1 min 35 sec on my cluster: time hadoop jar /usr/local/hadoop/hadoop-examples-1.1.2.jar wordcount /books /books-out
This brings up an important point: some things hadoop is slower at than a single system running one small task without distributed computing - the overhead only pays off at scale.
Now copy the results out of HDFS to the local filesystem:
hadoop dfs -copyToLocal /books-out /tmp/books-out
Another way:
hadoop dfs -getmerge /books-out /tmp/books-out1
So now the wordcount results are in the local system, lets check em out:
cd /tmp/books-out
ls -lisah
root@Knas312:/tmp/books-out# ls -lisah
total 1.4M
1369999    0 drwxr-xr-x 1 root root   50 Jul 16 21:51 .
   2487    0 drwxrwxrwt 1 root root 1.3K Jul 16 21:51 ..
1370001    0 drwxr-xr-x 1 root root   14 Jul 16 21:51 _logs
1370005 1.4M -rw-r–r– 1 root root 1.4M Jul 16 21:51 part-r-00000
1370000    0 -rw-r–r– 1 root root    0 Jul 16 21:51 _SUCCESS
Notice how part-r-00000 is the biggest file - that's our result!
head part-r-00000
OUTPUT:
"       34
"'A     1
"'About 1
"'Absolute      1
"'Ah!'  2
"'Ah,   2
"'Ample.'       1
"'And   10
"'Arthur!'      1
"'As    1
Looks like it's sorted alphabetically. Notice that wordcount is not smart enough to realize that " and ' are not letters; stripping punctuation like that would be up to the programmer.
Let's sort by occurrence: the count is the second column, so we numerically reverse-sort on column 2.
sort -nrk 2 part-r-00000 > sorted
head sorted
OUTPUT:
the     79766
of      48560
and     33095
to      24484
in      23546
a       23206
is      14045
that    10501
with    8430
was     8130
The Java code that made this possible, from: http://wiki.apache.org/hadoop/WordCount
package org.myorg;
import java.io.IOException;
import java.util.*;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
public class WordCount {
 public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();
    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            context.write(word, one);
        }
    }
 }
 public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        context.write(key, new IntWritable(sum));
    }
 }
 public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "wordcount");
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    job.setMapperClass(Map.class);
    job.setReducerClass(Reduce.class);
    job.setInputFormatClass(TextInputFormat.class);
    job.setOutputFormatClass(TextOutputFormat.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    job.waitForCompletion(true);
 }
}
IN PYTHON?
==========
Why python? It's easier and you don't have to use java. Here I show you 2 ways to do it; there is a 3rd way involving Jython, which is annoying and takes away from the python feel.
RUNNING A PYTHON MAPREDUCE PROCESS (MAPREDUCE) – SIMPLE
=======================================================
cd /usr/local/hadoop/
mkdir py
cd py
Make the following scripts:
vi mapper.py
PUT THIS IN:
#!/usr/bin/env python
import sys
# input comes from STDIN (standard input)
for line in sys.stdin:
    # remove leading and trailing whitespace
    line = line.strip()
    # split the line into words
    words = line.split()
    # increase counters
    for word in words:
        # write the results to STDOUT (standard output);
        # what we output here will be the input for the
        # Reduce step, i.e. the input for reducer.py
        #
        # tab-delimited; the trivial word count is 1
        print '%s\t%s' % (word, 1)
vi reducer.py
PUT THIS IN:
#!/usr/bin/env python
from operator import itemgetter
import sys
current_word = None
current_count = 0
word = None
# input comes from STDIN
for line in sys.stdin:
    # remove leading and trailing whitespace
    line = line.strip()
    # parse the input we got from mapper.py
    word, count = line.split('\t', 1)
    # convert count (currently a string) to int
    try:
        count = int(count)
    except ValueError:
        # count was not a number, so silently
        # ignore/discard this line
        continue
    # this IF-switch only works because Hadoop sorts map output
    # by key (here: word) before it is passed to the reducer
    if current_word == word:
        current_count += count
    else:
        if current_word:
            # write result to STDOUT
            print '%s\t%s' % (current_word, current_count)
        current_count = count
        current_word = word
# do not forget to output the last word if needed!
if current_word == word:
    print '%s\t%s' % (current_word, current_count)
Make em both executable:
chmod +x *
Lets run it on the books we still have in there:
cd /usr/local/hadoop/py
time hadoop jar /usr/local/hadoop/contrib/streaming/hadoop-streaming-1.1.2.jar -file mapper.py -mapper mapper.py -file reducer.py -reducer reducer.py -input /books -output /books-out-py-simple
Note: for -file/-mapper and -file/-reducer, use the full path to the mapper and reducer files, or just the bare filename if they are in the current working directory. The reason each script is named twice is that -file ships the script out to the cluster nodes as part of the job, while -mapper/-reducer tells streaming which command to actually run.
Here is a note sandwich for you:
With this simple implementation you can actually run it locally for a test.
cd /usr/local/hadoop/py
echo "here i am to win here i am to be awesome" | ./mapper.py
echo "here i am to win here i am to be awesome" | ./mapper.py | sort -k1,1 | ./reducer.py
echo "here i am to win here i am to be awesome" | ./mapper.py | sort -k1,1 | ./reducer.py | sort -k2
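For reference, the output of the last pipeline should look something like this (word, then a tab, then the count, sorted by the count column):
awesome 1
be      1
win     1
am      2
here    2
i       2
to      2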
RUNNING A PYTHON MAPREDUCE PROCESS (MAPREDUCE) – IMPROVED
=========================================================
Unlike the simple scripts, these rely on Hadoop to group the (key, value) pairs between the map and reduce steps, so they aren't meant to be exercised against the local system the way we did above, i.e. something like this:
echo STUFF | mapper.py | sort -k1,1 | reducer.py
Why it doesn't work - excerpt from the citation:
Precisely, we compute the sum of a word’s occurrences, e.g. (“foo”, 4), only if by chance the same word (foo) appears multiple times in succession. In the majority of cases, however, we let the Hadoop group the (key, value) pairs between the Map and the Reduce step because Hadoop is more efficient in this regard than our simple Python scripts.
Another excerpt from the citation:
Generally speaking, iterators and generators (functions that create iterators, for example with Python’s yield statement) have the advantage that an element of a sequence is not produced until you actually need it. This can help a lot in terms of computational expensiveness or memory consumption depending on the task at hand.
So make the following 2 scripts - note the i for "improved" at the end of the filenames:
cd /usr/local/hadoop/py
vi mapperi.py
MAKE IT HAVE THIS:
#!/usr/bin/env python
"""A more advanced Mapper, using Python iterators and generators."""
import sys
def read_input(file):
    for line in file:
        # split the line into words
        yield line.split()
def main(separator='\t'):
    # input comes from STDIN (standard input)
    data = read_input(sys.stdin)
    for words in data:
        # write the results to STDOUT (standard output);
        # what we output here will be the input for the
        # Reduce step, i.e. the input for reducer.py
        #
        # tab-delimited; the trivial word count is 1
        for word in words:
            print '%s%s%d' % (word, separator, 1)
if __name__ == “__main__”:
    main()
vi reduceri.py
MAKE IT HAVE THIS:
#!/usr/bin/env python
"""A more advanced Reducer, using Python iterators and generators."""
from itertools import groupby
from operator import itemgetter
import sys
def read_mapper_output(file, separator='\t'):
    for line in file:
        yield line.rstrip().split(separator, 1)
def main(separator='\t'):
    # input comes from STDIN (standard input)
    data = read_mapper_output(sys.stdin, separator=separator)
    # groupby groups multiple word-count pairs by word,
    # and creates an iterator that returns consecutive keys and their group:
    #   current_word - string containing a word (the key)
    #   group - iterator yielding all ["<current_word>", "<count>"] items
    for current_word, group in groupby(data, itemgetter(0)):
        try:
            total_count = sum(int(count) for current_word, count in group)
            print "%s%s%d" % (current_word, separator, total_count)
        except ValueError:
            # count was not a number, so silently discard this item
            pass
if __name__ == “__main__”:
    main()
Make em executable:
chmod +x *
cd /usr/local/hadoop/py
time hadoop jar /usr/local/hadoop/contrib/streaming/hadoop-streaming-1.1.2.jar -file mapperi.py -mapper mapperi.py -file reduceri.py -reducer reduceri.py -input /books -output /books-out-py-improved
HOW TO STOP HADOOP
##################
MY WAY:
=======
I just stop it like this and it seems to work great
From master:
stop-all.sh
From slaves – will312 and rmstest2:
stop-all.sh
Then repeat from master and on slaves one more time.
Confirm its all ended with jps.
jps
OUTPUT: should only have Main and JPS
Here is a simple copy-pasteable block to stop everything - since it's meant for copy-pasting I use the full path to the stop-all script, and note how I stop everything twice just in case:
/usr/local/hadoop/bin/stop-all.sh
ssh root@rmstest2 /usr/local/hadoop/bin/stop-all.sh
ssh root@will312 /usr/local/hadoop/bin/stop-all.sh
/usr/local/hadoop/bin/stop-all.sh
ssh root@rmstest2 /usr/local/hadoop/bin/stop-all.sh
ssh root@will312 /usr/local/hadoop/bin/stop-all.sh
jps
ssh root@rmstest2 jps
ssh root@will312 jps
Note: all of these scripts are in /usr/local/hadoop/bin, but since the PATH includes the hadoop bin directory (from the .bashrc file) we can refer to them by short name.
AUTHORS WAY:
============
Technically this is the more correct way: end it all in the reverse order you started it up.
So we started it up like so:
1. start-dfs.sh which started hdfs on the master and then on the slaves
2. start-mapred.sh which started mapreduce on the master and then on the slaves
So the right order to close out will be:
stop-mapred.sh
stop-dfs.sh
I think my way is more thorough, and as far as I can tell there is nothing in those scripts that will hurt anything by running them this way.
HOW TO BACKUP SETTINGS
######################
Well, let's see here. I'm not going to implement a restore function, I will just back up the main settings to text.
Just run this on each machine and save output:
hostname; for i in hadoop-env.sh core-site.xml mapred-site.xml hdfs-site.xml masters slaves; do echo "=====`hostname` --> conf/$i====="; cat /usr/local/hadoop/conf/$i | sed '/^$/d'; echo; done;
Note: the sed part just removes blank lines, as they are annoying and throw off the whitespace formatting. Either way, blank lines or not, they do not affect the running of hadoop.
So here is a little copy pasteable script to get all the backups at once, just run this from the Master – knas312:
Note: this script brings up an important point - I hope you're able to passwordlessly ssh from the master to each node.
touch /usr/bin/backup-hadoop-confs.sh; chmod +x /usr/bin/backup-hadoop-confs.sh;
vi /usr/bin/backup-hadoop-confs.sh
PUT THIS IN IT:
#!/bin/bash
# Dumps the main hadoop config files of the master and of every slave to stdout
SLAVES="rmstest2 will312"
echo DATE `date`
echo ===================================
echo
echo MASTER CONFIG
echo =============
echo
echo "#### CONFIGS OF `hostname`: ####"
(for i in hadoop-env.sh core-site.xml mapred-site.xml hdfs-site.xml masters slaves; do echo "~~~`hostname` --> conf/$i~~~"; cat /usr/local/hadoop/conf/$i | sed '/^$/d'; echo; done;)
echo SLAVE CONFIGS
echo =============
echo
for h in $SLAVES
do
echo "#### CONFIGS OF ${h}: ####"
echo
ssh root@$h '(for i in hadoop-env.sh core-site.xml mapred-site.xml hdfs-site.xml masters slaves; do echo "~~~`hostname` --> conf/$i~~~"; cat /usr/local/hadoop/conf/$i | sed "/^$/d"; echo; done;)'
done
If you want, you can implement it as a cronjob so it runs daily (or on whatever schedule), so you never have to worry about it.
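As a sketch (the output path and schedule here are just examples, adjust to taste), a one-off run and a daily 3 AM cron entry could look like this; note that the % in date has to be escaped inside a crontab:
/usr/bin/backup-hadoop-confs.sh > /root/hadoop-confs-$(date +%F).txt
0 3 * * * /usr/bin/backup-hadoop-confs.sh > /root/hadoop-confs-$(date +\%F).txt 2>&1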
HOW TO PREPARE FOR UPGRADE
##########################
Since we made a symlink from hadoop-1.1.2 to hadoop, when a new stable release comes out, just wget and extract it, re-point the hadoop symlink at the new location, and then copy over the same configs from the old install (or restore them manually from the backup above with good old cat and vi) - assuming those configs still work the same way in the new release. A sketch of that procedure follows.
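A rough sketch of the upgrade-by-resymlink procedure (X.Y.Z is a placeholder for whatever stable release you are upgrading to, and this assumes the new release keeps the same conf/ layout - check its release notes first):
stop-mapred.sh; stop-dfs.sh             # stop the cluster first
cd /usr/local
wget http://mirror.tcpdiag.net/apache/hadoop/common/hadoop-X.Y.Z/hadoop-X.Y.Z.tar.gz
tar xzvf hadoop-X.Y.Z.tar.gz
cp hadoop-1.1.2/conf/hadoop-env.sh hadoop-1.1.2/conf/core-site.xml hadoop-1.1.2/conf/mapred-site.xml hadoop-1.1.2/conf/hdfs-site.xml hadoop-1.1.2/conf/masters hadoop-1.1.2/conf/slaves hadoop-X.Y.Z/conf/
rm hadoop                               # removes only the old symlink, not hadoop-1.1.2 itself
ln -s hadoop-X.Y.Z hadoop
Repeat on every node, then start the cluster back up.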
IMPORTANT FINAL NOTES THAT I DIDN'T GET A CHANCE TO SAY YET
########################################################
These are in random order and are excerpts from the sources cited above.
Masters vs. Slaves:
Typically one machine in the cluster is designated as the NameNode and another machine as the JobTracker, exclusively. These are the actual "master nodes". The rest of the machines in the cluster act as both DataNode and TaskTracker. These are the slaves or "worker nodes".
You can also start an Hadoop daemon manually on a machine via bin/hadoop-daemon.sh start [namenode | secondarynamenode | datanode | jobtracker | tasktracker], which will not take the “conf/masters“ and “conf/slaves“ files into account.
Here are more details regarding the conf/masters file:
Despite its name, the conf/masters file defines on which machines Hadoop will start secondary NameNodes in our multi-node cluster. In our case, this is just the master machine. The primary NameNode and the JobTracker will always be the machines on which you run the bin/start-dfs.sh and bin/start-mapred.sh scripts, respectively (the primary NameNode and the JobTracker will be started on the same machine if you run bin/start-all.sh).
The secondary NameNode merges the fsimage and the edits log files periodically and keeps edits log size within a limit. It is usually run on a different machine than the primary NameNode since its memory requirements are on the same order as the primary NameNode. The secondary NameNode is started by “bin/start-dfs.sh“ on the nodes specified in “conf/masters“ file.
The conf/slaves file on master is used only by the scripts like bin/start-dfs.sh or bin/stop-dfs.sh. For example, if you want to add DataNodes on the fly (which is not described in this tutorial yet), you can “manually” start the DataNode daemon on a new slave machine via bin/hadoop-daemon.sh start datanode. Using the conf/slaves file on the master simply helps you to make “full” cluster restarts easier.
ON MASTER:
conf/masters:
knas312
conf/slaves:
knas312
will312
rmstest2
ON SLAVES - these still point to localhost, as these files are not used on the slaves:
conf/masters:
localhost
conf/slaves:
localhost
The job will read all the files in the HDFS directory /user/hduser/gutenberg, process it, and store the results in the HDFS directory /user/hduser/gutenberg-output. In general Hadoop will create one output file per reducer; in our case however it will only create a single file because the input files are very small.
If you want to modify some Hadoop settings on the fly like increasing the number of Reduce tasks, you can use the “-D” option: hadoop jar hadoop*examples*.jar wordcount -D mapred.reduce.tasks=16 /user/hduser/gutenberg /user/hduser/gutenberg-output
An important note about mapred.map.tasks: Hadoop does not honor mapred.map.tasks beyond considering it a hint. But it accepts the user specified mapred.reduce.tasks and doesn’t manipulate that. You cannot force mapred.map.tasks but you can specify mapred.reduce.tasks
How good mapreduce goes while process runs from master:
hduser@ubuntu:/usr/local/hadoop$ bin/hadoop jar hadoop-examples-1.0.3.jar wordcount /user/hduser/gutenberg /user/hduser/gutenberg-output
… INFO mapred.FileInputFormat: Total input paths to process : 7
… INFO mapred.JobClient: Running job: job_0001
… INFO mapred.JobClient:  map 0% reduce 0%
… INFO mapred.JobClient:  map 28% reduce 0%
… INFO mapred.JobClient:  map 57% reduce 0%
… INFO mapred.JobClient:  map 71% reduce 0%
… INFO mapred.JobClient:  map 100% reduce 9%
… INFO mapred.JobClient:  map 100% reduce 68%
… INFO mapred.JobClient:  map 100% reduce 100%
…. INFO mapred.JobClient: Job complete: job_0001
… INFO mapred.JobClient: Counters: 11
… INFO mapred.JobClient:   org.apache.hadoop.examples.WordCount$Counter
… INFO mapred.JobClient:     WORDS=1173099
… INFO mapred.JobClient:     VALUES=1368295
… INFO mapred.JobClient:   Map-Reduce Framework
… INFO mapred.JobClient:     Map input records=136582
… INFO mapred.JobClient:     Map output records=1173099
… INFO mapred.JobClient:     Map input bytes=6925391
… INFO mapred.JobClient:     Map output bytes=11403568
… INFO mapred.JobClient:     Combine input records=1173099
… INFO mapred.JobClient:     Combine output records=195196
… INFO mapred.JobClient:     Reduce input groups=131275
… INFO mapred.JobClient:     Reduce input records=195196
… INFO mapred.JobClient:     Reduce output records=131275
How good mapreduce goes while process runs from slave:
…and on slave for its TaskTracker daemon.
logs/hadoop-hduser-tasktracker-slave.log (on slave)
… INFO org.apache.hadoop.mapred.TaskTracker: LaunchTaskAction: task_0001_m_000000_0
… INFO org.apache.hadoop.mapred.TaskTracker: LaunchTaskAction: task_0001_m_000001_0
… task_0001_m_000001_0 0.08362164% hdfs://master:54310/user/hduser/gutenberg/ulyss12.txt:0+1561677
… task_0001_m_000000_0 0.07951202% hdfs://master:54310/user/hduser/gutenberg/19699.txt:0+1945731
<snipp>
… task_0001_m_000001_0 0.35611463% hdfs://master:54310/user/hduser/gutenberg/ulyss12.txt:0+1561677
… Task task_0001_m_000001_0 is done.
… task_0001_m_000000_0 1.0% hdfs://master:54310/user/hduser/gutenberg/19699.txt:0+1945731
… LaunchTaskAction: task_0001_m_000006_0
… LaunchTaskAction: task_0001_r_000000_0
… task_0001_m_000000_0 1.0% hdfs://master:54310/user/hduser/gutenberg/19699.txt:0+1945731
… Task task_0001_m_000000_0 is done.
… task_0001_m_000006_0 0.6844295% hdfs://master:54310/user/hduser/gutenberg/132.txt:0+343695
… task_0001_r_000000_0 0.095238104% reduce > copy (2 of 7 at 1.68 MB/s) >
… task_0001_m_000006_0 1.0% hdfs://master:54310/user/hduser/gutenberg/132.txt:0+343695
… Task task_0001_m_000006_0 is done.
… task_0001_r_000000_0 0.14285716% reduce > copy (3 of 7 at 1.02 MB/s) >
<snipp>
… task_0001_r_000000_0 0.14285716% reduce > copy (3 of 7 at 1.02 MB/s) >
… task_0001_r_000000_0 0.23809525% reduce > copy (5 of 7 at 0.32 MB/s) >
… task_0001_r_000000_0 0.6859089% reduce > reduce
… task_0001_r_000000_0 0.7897389% reduce > reduce
… task_0001_r_000000_0 0.86783284% reduce > reduce
… Task task_0001_r_000000_0 is done.
… Received ‘KillJobAction’ for job: job_0001
… task_0001_r_000000_0 done; removing files.
… task_0001_m_000000_0 done; removing files.
… task_0001_m_000006_0 done; removing files.
… task_0001_m_000001_0 done; removing files.
THE END
#######
Enjoi
########
APPENDIX
########
BEST PRACTICE AND GOOD EXERPTS ON HADOOP IN USE
###############################################
Yahoo uses EXT3 on their clusters.
JBOD or RAID? No RAID - use JBOD. RAID slows things down: RAID 0 will go at the speed of the slowest drive, and HDFS replication already takes care of any failures RAID could recover from anyway, so just use JBOD.
Best filesystem: ext3, ext4, xfs
Cool filesystems? No zfs or btrfs, snapshots or things that protect data are not needed and slow everything down.
Whats JBOD?
JBOD meaning each disk is mounted to a separate folder, not a JBOD that combines the disks into one volume (some consider LVM or mdadm in linear mode to be JBOD, but not here).
How to mount EXT3?
mount filesystem with noatime option
disk1 = /dev/sdc -> /dfs/1
disk2 = /dev/sdd -> /dfs/2
Considering /dev/sda and /dev/sdb are in raid1 for the OS of course.
The master unit, where the NameNode runs, is a single point of failure.
The NameNode is the heart and brain of HDFS. This is a single point of failure for HDFS, so make sure the NameNode is fully redundant by all means.
For os use RAID1.
Mount your data disks with noatime (e.g. /dev/sdc1 /mnt/disk3 ext3 defaults,noatime 1 2 which btw. implies nodiratime)
What's noatime?
 Linux has a special mount option for file systems called noatime that can be added to each line that addresses one file system in the /etc/fstab file. If a file system has been mounted with this option, reading accesses to the file system will no longer result in an update to the atime information associated with the file like we have explained above. The importance of the noatime setting is that it eliminates the need by the system to make writes to the file system for files which are simply being read. Since writes can be somewhat expensive, this can result in measurable performance gains. Note that the write time information to a file will continue to be updated anytime the file is written to. In our example below, we will set the noatime option to our /chroot file system.
Hi,
We at Yahoo did some Hadoop benchmarking experiments on clusters with JBOD
and RAID0. We found that under heavy loads (such as gridmix), JBOD cluster
performed better.
Gridmix tests:
Load: gridmix2
Cluster size: 190 nodes
Test results:
RAID0: 75 minutes
JBOD:  67 minutes
Difference: 10%
Tests on HDFS writes performances
We ran map only jobs writing data to dfs concurrently on different clusters.
The overall dfs write throughputs on the jbod cluster are 30% (with a 58
nodes cluster) and 50% (with an 18 nodes cluster) better than that on the
raid0 cluster, respectively.
To understand why, we did some file level benchmarking on both clusters.
We found that the file write throughput on a JBOD machine is 30% higher than
that on a comparable machine with RAID0. This performance difference may be
explained by the fact that the throughputs of different disks can vary 30%
to 50%. With such variations, the overall throughput of a raid0 system may
be bottlenecked by the slowest disk.
Scaling Hadoop to 4000 nodes at Yahoo!
By aanand – Tue, Sep 30, 2008 10:04 AM EDT
We recently ran Hadoop on what we believe is the single largest Hadoop installation, ever:
• 4000 nodes
• 2 quad core Xeons @ 2.5ghz per node
• 4x1TB SATA disks per node
• 8G RAM per node
• 1 gigabit ethernet on each node
• 40 nodes per rack
• 4 gigabit ethernet uplinks from each rack to the core (unfortunately a misconfiguration, we usually do 8 uplinks)
• Red Hat Enterprise Linux AS release 4 (Nahant Update 5)
• Sun Java JDK 1.6.0_05-b13
• So that’s well over 30,000 cores with nearly 16PB of raw disk!
The exercise was primarily an effort to see how Hadoop works at this scale and gauge areas for improvements as we continue to push the envelope. We ran Hadoop trunk (post Hadoop 0.18.0) for these experiments.
Scaling has been a constant theme for Hadoop: we, at Yahoo!, ran a modestly sized Hadoop cluster of 20 nodes in early 2006; currently Yahoo! has several clusters around the 2000 node mark.
HDFS
The scaling issues have always been the main focus in designing any HDFS feature. Despite these efforts, attempts to scale the cluster up in the past sometimes resulted in some unpredictable effects. One of the most memorable examples was the cascading crash described in HADOOP-572, when failure of just a handful of data-nodes made the whole cluster completely dysfunctional in a matter of minutes.
This time the testing went smoothly and we observed quite decent file system performance. We did not see any startup problems; the name-node did not drown in self-serving heartbeats and block reports. Note, that heartbeat and block intervals were configured with the default values of 3 seconds and 1 hour respectively.
We ran a series of standard DFSIO benchmarks on the experimental cluster. The main purpose of this was to test how HDFS handles load of 14,000 clients performing writes or reads simultaneously.
HDFS Cluster Statistics
Capacity  : 14.25 PB
DFS Remaining : 10.61 PB
DFS Used : 233.44 TB
DFS Used% : 1.6 %
Live Nodes : 4049
Dead Nodes : 226
Map-Reduce Cluster Statistics
Nodes: 3561
Map Slots: 4 slots per node
Reduce Slots: 4 slots per node
DFSIO benchmark is a map-reduce job where each map task opens a file and writes to or reads from it, closes it, and measures the i/o time. There is only one reduce task, which aggregates and averages individual times and sizes. The result is the average throughput of a single i/o that is how many bytes per second was written or read on average by a single client.
In the test performed each of the 14,000 map tasks writes (reads) 360 MB (about 3 blocks) of data into a single file with a total of 5.04 TB for the whole job.
The table below compares the 4,000-node cluster performance with one of our 500-node clusters.
Table 1. Throughput
                          500-node cluster       4000-node cluster
                          write       read       write       read
number of files           990         990        14,000      14,000
file size (MB)            320         320        360         360
total MB processed        316,800     316,800    5,040,000   5,040,000
tasks per node            2           2          4           4
avg. throughput (MB/s)    5.8         18         40          66
The 4000-node cluster throughput was 7 times better than 500’s for writes and 3.6 times better for reads even though the bigger cluster carried more (4 v/s 2 tasks) per node load than the smaller one.
Map-Reduce
The primary area of concern was the JobTracker and how it would react to this scale (we had never subjected the JobTracker to heartbeats flowing in from 4000 tasktrackers since it isn’t a very common use-case when we use HoD). We were also concerned about the JobTracker’s memory usage as it serviced thousands of user-jobs.
The initial results were slightly worrisome – GridMix, the standard benchmark, took nearly 2 hours to complete and we lost a fairly large number of tasktrackers since the JobTracker couldn’t handle them. For good measure, we couldn’t run a 6TB sort either; we kept losing tasktrackers. (We routinely run sort benchmarks which sort 1TB, 5TB and 9TB of data.)
Of course, brand-new hardware didn’t help since we kept losing disks, neither did the fact that we had a misconfigured network which let us use only 4 out of the 8 uplinks available from each rack to the backbone (effectively cutting the available bandwidth in half). On the bright side memory usage didn’t seem to be a problem and the JobTracker stood up to thousands of user-jobs without problems.
We then went in armed with the YourKit(TM) profiler – we needed to peek into the JobTracker’s guts while it was faltering. This basically meant going through the CPU/Memory/Monitors profiles of the JobTracker with a fine-toothed comb. To cut a long story short, here are some of the curative actions we took based those observations:
• HADOOP-3863 – Fixed a bug which caused extreme contention for a single, global lock during serialization of Java strings for Hadoop RPCs.
• HADOOP-3848 – Cut down wasteful RPCs during task-initialization.
• HADOOP-3864 – Fixed locks in the JobTracker during job-initialization to prevent starvation of tasktrackers’ heartbeats which caused the huge number of ‘lost tasktrackers’.
• HADOOP-3875 – Fixed tasktrackers to gracefully scale heartbeat intervals when the JobTracker is under duress.
• HADOOP-3136 – Assign multiple tasks to the tasktrackers during each heartbeat, this significantly cuts down the number of heartbeats in the system.
The result of these improvements (sans HADOOP-3136 which wasn’t ready in time):
1. GridMix came through slightly under an hour – a significant improvement from where we started.
2. The sort of 6TB of data completed in 37 minutes.
3. We had the cluster run more than 5000 Map-Reduce jobs in a window of around 6 hours and the JobTracker came through without any issues.
Overall, the results are very reassuring with respect to the ability of Hadoop to scale out. Of course we have only scratched the surface and have miles to go!
Konstantin V Shvachko
Arun C Murthy
Yahoo!
Dual Quad Core with 32 GB of RAM and 4 or more disks, dont bother with RAID
OpenLogic's setup:
Dual quad-core or dual hex-core Dell boxes, 32-64 GB ECC RAM (recommended by Google), six 2 TB enterprise hard drives, RAID1 on 2 drives for the OS / Hadoop binaries / HBase / Solr / NFS mounts, and the Hadoop datanode gets the remaining drives. Use redundant enterprise switches and dual / quad gigabit NICs.
Costs if do it over cloud:
100 TB * $0.1/GB/Month = 120K /year
Double Extra Large instances
13 EC2 compute units 34.2 GB ram
20 instances at $1 an hour at 8760 hours per year = $175k per year
3 year reserved instances
20 * 4k = $80k upfront to reserve
20 instances * 34 cents per hour * 8760 hours per year for 3 years = $86k/year to operate
Totals for 20 virtual machines:
1st year: 120k+80k+86k = 286k
2nd and 3rd year = 120k+86k = 206k
Average: (286k+206k+206k) /3 =  232k per year
Solution? Buy your own
20 Dell servers with 12 CPU cores with 32 GB RAM and 5 TB disk = $160k
Over 33 EC2 compute units each
Total 53k / year (amortized over 3 years)
So EC2 is more expensive, but we don't have to host or maintain it ourselves, so the cost is worth it? Not quite - don't forget:
System administration, monitoring, debugging and support.
What fails most often? Power supplies, hard drives, kernel panics, zombie processes, dropped packets, hadoop data nodes, hbase regionservers, stargate servers, solr servers; and the mapreduce code can get strange results, leading to program failures.
One alternative to disk raid, is hdfs raid:
HDFS RAID
Overview:
The HDFS RAID module provides a DistributedRaidFileSystem (DRFS) that is used along with an instance of the Hadoop DistributedFileSystem (DFS). A file stored in the DRFS (the source file) is divided into stripes consisting of several blocks. For each stripe, a number of parity blocks are stored in the parity file corresponding to this source file. This makes it possible to recompute blocks in the source file or parity file when they are lost or corrupted.
The main benefit of the DRFS is the increased protection against data corruption it provides. Because of this increased protection, replication levels can be lowered while maintaining the same availability guarantees, which results in significant storage space savings.
Architecture and implementation:
HDFS Raid consists of several software components:
the DRFS client, which provides application access to the files in the DRFS and transparently recovers any corrupt or missing blocks encountered when reading a file,
the RaidNode, a daemon that creates and maintains parity files for all data files stored in the DRFS,
the BlockFixer, which periodically recomputes blocks that have been lost or corrupted,
the RaidShell utility, which allows the administrator to manually trigger the recomputation of missing or corrupt blocks and to check for files that have become irrecoverably corrupted.
the ErasureCode, which provides the encoding and decoding of the bytes in blocks.
Why not RAID-0, and why JBOD instead?
Hadoop prefers a set of separate disks to the same set managed as a RAID-0 disk array. Read speeds are particularly important to the performance of a Hadoop cluster, and in his post, Steve makes the point that since drive speeds vary, and RAID-0 reads occur at the speed of the slowest disk in the array, a RAID-0 configuration may well be slower than a non-RAID configuration. The bigger issue, in my opinion, is reliability. If a set of disks is configured as a RAID-0 array, then one disk failure in that array will take that entire volume down, and if all the disks in a node are configured as a single RAID-0 array, then a single disk failure will take all the node’s data down. By configuring multiple disks in a RAID-0 array, you magnify the probability of that volume going offline due to a single disk failure and you maximize the amount of data that goes offline when that single failure occurs.
Space needed: take the amount of data, multiply by 3 (replication), and factor in the expected reduction from compression. Then add about 30% for Hadoop operating space and overhead.
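A quick worked example with made-up numbers, just to show the arithmetic:
10 TB of raw data * 3 replicas = 30 TB
assume compression saves about a third: 30 TB * 0.67 ≈ 20 TB
add ~30% operating space/overhead: 20 TB * 1.3 ≈ 26 TB of raw disk across the cluster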
Hadoop utilizes replication to protect against data loss and unavailability during disk or node outages. Thus, data node disks should be configured as JBOD. There is no need for RAID configurations for Hadoop cluster data nodes as redundancy is achieved through block replication.
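To make the JBOD part concrete, here is a hedged sketch for Hadoop 1.x (the version used in this article): each disk's mount point is listed separately in hdfs-site.xml instead of striping the disks together. The /data/disk* paths below are hypothetical; my NAS boxes only have a single data directory.
<property>
  <name>dfs.data.dir</name>
  <!-- one entry per physical disk; the datanode spreads blocks across them, no RAID needed -->
  <value>/data/disk1/hdfs,/data/disk2/hdfs,/data/disk3/hdfs</value>
</property>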
For the Name Node, the recommendation is to design redundancy into the hardware configuration to protect from the server going down. Thus configuring NameNode disks as RAID is a good idea.
When scaling out hard drive space, you will have the option to add more disks to each node or to add more nodes themselves. With Hadoop, it is recommended to scale out by adding nodes than to scale out by making each machine more powerful. Adding more nodes means that replicas will be further spread out and thus will increase read/write performance, as the average network hops to get to a block in HDFS will decrease.
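A related how-to, sketched for Hadoop 1.x: after adding a new slave node, existing blocks will not move onto it by themselves, but you can run the balancer from the master to spread them out.
# rebalance HDFS; -threshold is the allowed spread in per-node disk usage, in percent
bin/hadoop balancer -threshold 10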
SSD – HDFS was not designed to take advantage of SSD I/O speeds. In most cases the performance bottleneck for a Hadoop cluster is not disk latency but network I/O or the amount of RAM. Larger SSD drives are still expensive, and that expense is not worth the benefit for a Hadoop data node.
When configuring the disks – set noatime to improve read/write performance, and do not use LVM to make the disks appear as one.
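As a sketch of the noatime part (device name and mount point are hypothetical, adjust for your boxes):
# /etc/fstab entry for one data disk, ext4 mounted with noatime
/dev/sdb1   /data/disk1   ext4   defaults,noatime   0   2
# or apply it to an already-mounted disk without rebooting:
mount -o remount,noatime /data/disk1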
GOOD LINKS
##########
These are the links that I used to expand my knowledge on it.
what is hadoop:
It's used by a lot of big timers – Facebook uses it: http://borthakur.com/ftp/hadoopmicrosoft.pdf
http://readwrite.com/2013/05/23/hadoop-what-it-is-and-how-it-works#awesm=~obbAk0BZ81i0h5
http://fcw.com/Articles/2013/03/25/what-is-hadoop.aspx?Page=3
http://en.wikipedia.org/wiki/Hadoop
quick video: http://www.youtube.com/watch?v=1_ly9dZnmWc
quick pdf: http://hadoop.apache.org/docs/r0.18.0/hdfs_design.pdf
http://www.ibm.com/developerworks/library/l-pycon/index.html
used these to make this happen:
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/
http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/
other good articles: http://www.quora.com/HDFS/Is-HDFS-way-behind-GFS-If-yes-how-so
http://www.slideshare.net/YuvalCarmel/gfs-vs-hdfs
http://internetmemory.org/en/index.php/synapse/using_hadoop_for_video_streaming/
http://wiki.apache.org/hadoop/MountableHDFS
http://www.youtube.com/watch?v=HFplUBeBhcM
I asked a good question on stackoverflow for this:
http://stackoverflow.com/questions/17639931/where-does-hadoop-save-a-file-to-in-a-multi-node-setup
On googlefilesystem:
http://www.slideshare.net/tutchiio/gfs-google-file-system
http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/de//archive/gfs-sosp2003.pdf
http://computer.howstuffworks.com/internet/basics/google-file-system.htm
https://www.youtube.com/watch?v=yjPBkvYh-ss&list=PL_7O_WtKQsWAa4fseScy54bicE7YLDtqX
on writing map reduce:
http://answers.oreilly.com/topic/2141-how-mapreduce-works-with-hadoop/
http://hadooptutorial.wikispaces.com/MapReduce
http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/
Config eclipse for map reduce and write a sample word count: http://www.youtube.com/watch?v=TavehEdfNDk
Word count in java: http://www.youtube.com/watch?v=kM76O4cZ5_0
Hadoop MapReduce Example – How good are a city's farmer's markets? http://www.youtube.com/watch?v=KwW7bQRykHI
White paper with graphs: http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/white_paper_c11-690561.html
What's better than Hadoop – Dremel, Pregel, and Percolator? http://gigaom.com/2012/07/07/why-the-days-are-numbered-for-hadoop-as-we-know-it/
ALL OF THE CONFIGS PRESENTED AGAIN – STRAIGHT FROM THE BACKUP SCRIPT ABOVE
##########################################################################
DATE Tue Jul 16 22:45:57 PDT 2013
===================================
MASTER CONFIG
=============
#### CONFIGS OF : ####
~~~Knas312 –> conf/hadoop-env.sh~~~
# Set Hadoop-specific environment variables here.
# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.
# The java implementation to use.  Required.
export JAVA_HOME=/usr
# Extra Java CLASSPATH elements.  Optional.
# export HADOOP_CLASSPATH=
# The maximum amount of heap to use, in MB. Default is 1000.
# export HADOOP_HEAPSIZE=2000
# Extra Java runtime options.  Empty by default.
# export HADOOP_OPTS=-server
# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_SECONDARYNAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_DATANODE_OPTS"
export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_BALANCER_OPTS"
export HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_JOBTRACKER_OPTS"
# export HADOOP_TASKTRACKER_OPTS=
# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
# export HADOOP_CLIENT_OPTS
# Extra ssh options.  Empty by default.
# export HADOOP_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HADOOP_CONF_DIR"
# Where log files are stored.  $HADOOP_HOME/logs by default.
# export HADOOP_LOG_DIR=${HADOOP_HOME}/logs
# File naming remote slave hosts.  $HADOOP_HOME/conf/slaves by default.
# export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves
# host:path where hadoop code should be rsync’d from.  Unset by default.
# export HADOOP_MASTER=master:/home/$USER/src/hadoop
# Seconds to sleep between slave commands.  Unset by default.  This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HADOOP_SLAVE_SLEEP=0.1
# The directory where pid files are stored. /tmp by default.
# export HADOOP_PID_DIR=/var/hadoop/pids
# A string representing this instance of hadoop. $USER by default.
# export HADOOP_IDENT_STRING=$USER
# The scheduling priority for daemon processes.  See ‘man nice’.
# export HADOOP_NICENESS=10
~~~Knas312 –> conf/core-site.xml~~~
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/Sally/hadoopdata</value>
  <description>A base for other temporary directories.</description>
</property>
<property>
  <name>fs.default.name</name>
  <value>hdfs://knas312:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri’s scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri’s authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>
~~~Knas312 –> conf/mapred-site.xml~~~
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
  <name>mapred.job.tracker</name>
  <value>knas312:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If “local”, then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>
</configuration>
~~~Knas312 –> conf/hdfs-site.xml~~~
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
  <name>dfs.replication</name>
  <value>2</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>
</configuration>
~~~Knas312 –> conf/masters~~~
knas312
~~~Knas312 –> conf/slaves~~~
knas312
rmstest2
will312
SLAVE CONFIGS
=============
#### CONFIGS OF rmstest2: ####
~~~rmstest2 –> conf/hadoop-env.sh~~~
# Set Hadoop-specific environment variables here.
# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.
# The java implementation to use.  Required.
export JAVA_HOME=/usr
# Extra Java CLASSPATH elements.  Optional.
# export HADOOP_CLASSPATH=
# The maximum amount of heap to use, in MB. Default is 1000.
# export HADOOP_HEAPSIZE=2000
# Extra Java runtime options.  Empty by default.
# export HADOOP_OPTS=-server
# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_SECONDARYNAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_DATANODE_OPTS"
export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_BALANCER_OPTS"
export HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_JOBTRACKER_OPTS"
# export HADOOP_TASKTRACKER_OPTS=
# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
# export HADOOP_CLIENT_OPTS
# Extra ssh options.  Empty by default.
# export HADOOP_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HADOOP_CONF_DIR"
# Where log files are stored.  $HADOOP_HOME/logs by default.
# export HADOOP_LOG_DIR=${HADOOP_HOME}/logs
# File naming remote slave hosts.  $HADOOP_HOME/conf/slaves by default.
# export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves
# host:path where hadoop code should be rsync’d from.  Unset by default.
# export HADOOP_MASTER=master:/home/$USER/src/hadoop
# Seconds to sleep between slave commands.  Unset by default.  This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HADOOP_SLAVE_SLEEP=0.1
# The directory where pid files are stored. /tmp by default.
# export HADOOP_PID_DIR=/var/hadoop/pids
# A string representing this instance of hadoop. $USER by default.
# export HADOOP_IDENT_STRING=$USER
# The scheduling priority for daemon processes.  See ‘man nice’.
# export HADOOP_NICENESS=10
~~~rmstest2 –> conf/core-site.xml~~~
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/data/hadoopdata</value>
  <description>A base for other temporary directories.</description>
</property>
<property>
  <name>fs.default.name</name>
  <value>hdfs://knas312:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri’s scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri’s authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>
~~~rmstest2 –> conf/mapred-site.xml~~~
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
  <name>mapred.job.tracker</name>
  <value>knas312:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If “local”, then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>
</configuration>
~~~rmstest2 –> conf/hdfs-site.xml~~~
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
  <name>dfs.replication</name>
  <value>2</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>
</configuration>
~~~rmstest2 –> conf/masters~~~
localhost
~~~rmstest2 –> conf/slaves~~~
localhost
#### CONFIGS OF will312: ####
~~~will312 –> conf/hadoop-env.sh~~~
# Set Hadoop-specific environment variables here.
# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.
# The java implementation to use.  Required.
export JAVA_HOME=/usr
# Extra Java CLASSPATH elements.  Optional.
# export HADOOP_CLASSPATH=
# The maximum amount of heap to use, in MB. Default is 1000.
# export HADOOP_HEAPSIZE=2000
# Extra Java runtime options.  Empty by default.
# export HADOOP_OPTS=-server
# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_SECONDARYNAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_DATANODE_OPTS"
export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_BALANCER_OPTS"
export HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_JOBTRACKER_OPTS"
# export HADOOP_TASKTRACKER_OPTS=
# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
# export HADOOP_CLIENT_OPTS
# Extra ssh options.  Empty by default.
# export HADOOP_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HADOOP_CONF_DIR"
# Where log files are stored.  $HADOOP_HOME/logs by default.
# export HADOOP_LOG_DIR=${HADOOP_HOME}/logs
# File naming remote slave hosts.  $HADOOP_HOME/conf/slaves by default.
# export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves
# host:path where hadoop code should be rsync’d from.  Unset by default.
# export HADOOP_MASTER=master:/home/$USER/src/hadoop
# Seconds to sleep between slave commands.  Unset by default.  This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HADOOP_SLAVE_SLEEP=0.1
# The directory where pid files are stored. /tmp by default.
# export HADOOP_PID_DIR=/var/hadoop/pids
# A string representing this instance of hadoop. $USER by default.
# export HADOOP_IDENT_STRING=$USER
# The scheduling priority for daemon processes.  See ‘man nice’.
# export HADOOP_NICENESS=10
~~~will312 –> conf/core-site.xml~~~
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/data/hadoopdata</value>
  <description>A base for other temporary directories.</description>
</property>
<property>
  <name>fs.default.name</name>
  <value>hdfs://knas312:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri’s scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri’s authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>
~~~will312 –> conf/mapred-site.xml~~~
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
  <name>mapred.job.tracker</name>
  <value>knas312:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If “local”, then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>
</configuration>
~~~will312 –> conf/hdfs-site.xml~~~
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
  <name>dfs.replication</name>
  <value>2</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>
</configuration>
~~~will312 –> conf/masters~~~
localhost
~~~will312 –> conf/slaves~~~
localhost
EVERY PROPERTY FROM THE API PAGE FOR EASY LOOK UP (VERY LONG OPPS) – AND ITS DEFAULT VALUE
##########################################################################################
Note how the Apache Hadoop folks keep up with current properties: they publish the current defaults and also a list of the deprecated property names (an example mapping follows the links below).
The properties – the default values:
http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-common/yarn-default.xml
http://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml
List of deprecated properties AS OF July 16th 2013: http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/DeprecatedProperties.html
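For example, two of the property names used in my configs above are on that deprecated list (the old 1.x names still work but map to new ones):
fs.default.name     ->  fs.defaultFS
mapred.job.tracker  ->  mapreduce.jobtracker.address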
yarn-default.xml  – Note: I didn't use YARN in my article.
=================
<?xml version=”1.0″?>
<?xml-stylesheet type=”text/xsl” href=”configuration.xsl”?>
<!–
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the “License”); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at
       http://www.apache.org/licenses/LICENSE-2.0
   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an “AS IS” BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
–>
<!– Do not modify this file directly.  Instead, copy entries that you –>
<!– wish to modify from this file into yarn-site.xml and change them –>
<!– there.  If yarn-site.xml does not already exist, create it.      –>
<configuration>
  <!– IPC Configs –>
  <property>
    <description>Factory to create client IPC classes.</description>
    <name>yarn.ipc.client.factory.class</name>
  </property>
  <property>
    <description>Type of serialization to use.</description>
    <name>yarn.ipc.serializer.type</name>
    <value>protocolbuffers</value>
  </property>
  <property>
    <description>Factory to create server IPC classes.</description>
    <name>yarn.ipc.server.factory.class</name>
  </property>
  <property>
    <description>Factory to create IPC exceptions.</description>
    <name>yarn.ipc.exception.factory.class</name>
  </property>
  <property>
    <description>Factory to create serializeable records.</description>
    <name>yarn.ipc.record.factory.class</name>
  </property>
  <property>
    <description>RPC class implementation</description>
    <name>yarn.ipc.rpc.class</name>
    <value>org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC</value>
  </property>
  <!– Resource Manager Configs –>
  <property>
    <description>The address of the applications manager interface in the RM.</description>
    <name>yarn.resourcemanager.address</name>
    <value>0.0.0.0:8032</value>
  </property>
  <property>
    <description>The number of threads used to handle applications manager requests.</description>
    <name>yarn.resourcemanager.client.thread-count</name>
    <value>50</value>
  </property>
  <property>
    <description>The expiry interval for application master reporting.</description>
    <name>yarn.am.liveness-monitor.expiry-interval-ms</name>
    <value>600000</value>
  </property>
  <property>
    <description>The Kerberos principal for the resource manager.</description>
    <name>yarn.resourcemanager.principal</name>
  </property>
  <property>
    <description>The address of the scheduler interface.</description>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>0.0.0.0:8030</value>
  </property>
  <property>
    <description>Number of threads to handle scheduler interface.</description>
    <name>yarn.resourcemanager.scheduler.client.thread-count</name>
    <value>50</value>
  </property>
  <property>
    <description>The address of the RM web application.</description>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>0.0.0.0:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>0.0.0.0:8031</value>
  </property>
  <property>
    <description>Are acls enabled.</description>
    <name>yarn.acl.enable</name>
    <value>true</value>
  </property>
  <property>
    <description>ACL of who can be admin of the YARN cluster.</description>
    <name>yarn.admin.acl</name>
    <value>*</value>
  </property>
  <property>
    <description>The address of the RM admin interface.</description>
    <name>yarn.resourcemanager.admin.address</name>
    <value>0.0.0.0:8033</value>
  </property>
  <property>
    <description>Number of threads used to handle RM admin interface.</description>
    <name>yarn.resourcemanager.admin.client.thread-count</name>
    <value>1</value>
  </property>
  <property>
    <description>How often should the RM check that the AM is still alive.</description>
    <name>yarn.resourcemanager.amliveliness-monitor.interval-ms</name>
    <value>1000</value>
  </property>
  <property>
    <description>The maximum number of application master retries.</description>
    <name>yarn.resourcemanager.am.max-retries</name>
    <value>1</value>
  </property>
  <property>
    <description>How often to check that containers are still alive. </description>
    <name>yarn.resourcemanager.container.liveness-monitor.interval-ms</name>
    <value>600000</value>
  </property>
  <property>
    <description>The keytab for the resource manager.</description>
    <name>yarn.resourcemanager.keytab</name>
    <value>/etc/krb5.keytab</value>
  </property>
  <property>
    <description>How long to wait until a node manager is considered dead.</description>
    <name>yarn.nm.liveness-monitor.expiry-interval-ms</name>
    <value>600000</value>
  </property>
  <property>
    <description>How often to check that node managers are still alive.</description>
    <name>yarn.resourcemanager.nm.liveness-monitor.interval-ms</name>
    <value>1000</value>
  </property>
  <property>
    <description>Path to file with nodes to include.</description>
    <name>yarn.resourcemanager.nodes.include-path</name>
    <value></value>
  </property>
  <property>
    <description>Path to file with nodes to exclude.</description>
    <name>yarn.resourcemanager.nodes.exclude-path</name>
    <value></value>
  </property>
  <property>
    <description>Number of threads to handle resource tracker calls.</description>
    <name>yarn.resourcemanager.resource-tracker.client.thread-count</name>
    <value>50</value>
  </property>
  <property>
    <description>The class to use as the resource scheduler.</description>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
  </property>
  <property>
    <description>The minimum allocation for every container request at the RM,
    in MBs. Memory requests lower than this won’t take effect,
    and the specified value will get allocated at minimum.</description>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
  </property>
  <property>
    <description>The maximum allocation for every container request at the RM,
    in MBs. Memory requests higher than this won’t take effect,
    and will get capped to this value.</description>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>8192</value>
  </property>
  <property>
    <description>The minimum allocation for every container request at the RM,
    in terms of virtual CPU cores. Requests lower than this won’t take effect,
    and the specified value will get allocated the minimum.</description>
    <name>yarn.scheduler.minimum-allocation-vcores</name>
    <value>1</value>
  </property>
  <property>
    <description>The maximum allocation for every container request at the RM,
    in terms of virtual CPU cores. Requests higher than this won’t take effect,
    and will get capped to this value.</description>
    <name>yarn.scheduler.maximum-allocation-vcores</name>
    <value>32</value>
  </property>
  <property>
    <description>Enable RM to recover state after starting. If true, then
    yarn.resourcemanager.store.class must be specified</description>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>false</value>
  </property>
  <property>
    <description>The class to use as the persistent store.</description>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore</value>
  </property>
  <property>
    <description>URI pointing to the location of the FileSystem path where
    RM state will be stored. This must be supplied when using
    org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
    as the value for yarn.resourcemanager.store.class</description>
    <name>yarn.resourcemanager.fs.rm-state-store.uri</name>
    <value>${hadoop.tmp.dir}/yarn/system/rmstore</value>
    <!–value>hdfs://localhost:9000/rmstore</value–>
  </property>
  <property>
    <description>The maximum number of completed applications RM keeps. </description>
    <name>yarn.resourcemanager.max-completed-applications</name>
    <value>10000</value>
  </property>
  <property>
    <description>Interval at which the delayed token removal thread runs</description>
    <name>yarn.resourcemanager.delayed.delegation-token.removal-interval-ms</name>
    <value>30000</value>
  </property>
  <property>
    <description>Interval for the roll over for the master key used to generate
        application tokens
    </description>
    <name>yarn.resourcemanager.application-tokens.master-key-rolling-interval-secs</name>
    <value>86400</value>
  </property>
  <property>
    <description>Interval for the roll over for the master key used to generate
        container tokens. It is expected to be much greater than
        yarn.nm.liveness-monitor.expiry-interval-ms and
        yarn.rm.container-allocation.expiry-interval-ms. Otherwise the
        behavior is undefined.
    </description>
    <name>yarn.resourcemanager.container-tokens.master-key-rolling-interval-secs</name>
    <value>86400</value>
  </property>
  <!– Node Manager Configs –>
  <property>
    <description>The address of the container manager in the NM.</description>
    <name>yarn.nodemanager.address</name>
    <value>0.0.0.0:0</value>
  </property>
  <property>
    <description>Environment variables that should be forwarded from the NodeManager’s environment to the container’s.</description>
    <name>yarn.nodemanager.admin-env</name>
    <value>MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX</value>
  </property>
  <property>
    <description>Environment variables that containers may override rather than use NodeManager’s default.</description>
    <name>yarn.nodemanager.env-whitelist</name>
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,HADOOP_YARN_HOME</value>
  </property>
  <property>
    <description>who will execute(launch) the containers.</description>
    <name>yarn.nodemanager.container-executor.class</name>
    <value>org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor</value>
<!–<value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>–>
  </property>
  <property>
    <description>Number of threads container manager uses.</description>
    <name>yarn.nodemanager.container-manager.thread-count</name>
    <value>20</value>
  </property>
  <property>
    <description>Number of threads used in cleanup.</description>
    <name>yarn.nodemanager.delete.thread-count</name>
    <value>4</value>
  </property>
  <property>
    <description>
      Number of seconds after an application finishes before the nodemanager’s
      DeletionService will delete the application’s localized file directory
      and log directory.
      To diagnose Yarn application problems, set this property’s value large
      enough (for example, to 600 = 10 minutes) to permit examination of these
      directories. After changing the property’s value, you must restart the
      nodemanager in order for it to have an effect.
      The roots of Yarn applications’ work directories is configurable with
      the yarn.nodemanager.local-dirs property (see below), and the roots
      of the Yarn applications’ log directories is configurable with the
      yarn.nodemanager.log-dirs property (see also below).
    </description>
    <name>yarn.nodemanager.delete.debug-delay-sec</name>
    <value>0</value>
  </property>
  <property>
    <description>Heartbeat interval to RM</description>
    <name>yarn.nodemanager.heartbeat.interval-ms</name>
    <value>1000</value>
  </property>
  <property>
    <description>Keytab for NM.</description>
    <name>yarn.nodemanager.keytab</name>
    <value>/etc/krb5.keytab</value>
  </property>
  <property>
    <description>List of directories to store localized files in. An
      application’s localized file directory will be found in:
      ${yarn.nodemanager.local-dirs}/usercache/${user}/appcache/application_${appid}.
      Individual containers’ work directories, called container_${contid}, will
      be subdirectories of this.
   </description>
    <name>yarn.nodemanager.local-dirs</name>
    <value>${hadoop.tmp.dir}/nm-local-dir</value>
  </property>
  <property>
    <description>Address where the localizer IPC is.</description>
    <name>yarn.nodemanager.localizer.address</name>
    <value>0.0.0.0:8040</value>
  </property>
  <property>
    <description>Interval in between cache cleanups.</description>
    <name>yarn.nodemanager.localizer.cache.cleanup.interval-ms</name>
    <value>600000</value>
  </property>
  <property>
    <description>Target size of localizer cache in MB, per local directory.</description>
    <name>yarn.nodemanager.localizer.cache.target-size-mb</name>
    <value>10240</value>
  </property>
  <property>
    <description>Number of threads to handle localization requests.</description>
    <name>yarn.nodemanager.localizer.client.thread-count</name>
    <value>5</value>
  </property>
  <property>
    <description>Number of threads to use for localization fetching.</description>
    <name>yarn.nodemanager.localizer.fetch.thread-count</name>
    <value>4</value>
  </property>
  <property>
    <description>
      Where to store container logs. An application’s localized log directory
      will be found in ${yarn.nodemanager.log-dirs}/application_${appid}.
      Individual containers’ log directories will be below this, in directories
      named container_{$contid}. Each container directory will contain the files
      stderr, stdin, and syslog generated by that container.
    </description>
    <name>yarn.nodemanager.log-dirs</name>
    <value>${yarn.log.dir}/userlogs</value>
  </property>
  <property>
    <description>Whether to enable log aggregation</description>
    <name>yarn.log-aggregation-enable</name>
    <value>false</value>
  </property>
  <property>
    <description>How long to keep aggregation logs before deleting them.  -1 disables.
    Be careful set this too small and you will spam the name node.</description>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>-1</value>
  </property>
  <property>
    <description>How long to wait between aggregated log retention checks.
    If set to 0 or a negative value then the value is computed as one-tenth
    of the aggregated log retention time. Be careful set this too small and
    you will spam the name node.</description>
    <name>yarn.log-aggregation.retain-check-interval-seconds</name>
    <value>-1</value>
  </property>
  <property>
    <description>Time in seconds to retain user logs. Only applicable if
    log aggregation is disabled
    </description>
    <name>yarn.nodemanager.log.retain-seconds</name>
    <value>10800</value>
  </property>
  <property>
    <description>Where to aggregate logs to.</description>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/tmp/logs</value>
  </property>
  <property>
    <description>The remote log dir will be created at
      {yarn.nodemanager.remote-app-log-dir}/${user}/{thisParam}
    </description>
    <name>yarn.nodemanager.remote-app-log-dir-suffix</name>
    <value>logs</value>
  </property>
  <property>
    <description>Amount of physical memory, in MB, that can be allocated
    for containers.</description>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>8192</value>
  </property>
  <property>
    <description>Whether physical memory limits will be enforced for
    containers.</description>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>true</value>
  </property>
  <property>
    <description>Whether virtual memory limits will be enforced for
    containers.</description>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>true</value>
  </property>
  <property>
    <description>Ratio between virtual memory to physical memory when
    setting memory limits for containers. Container allocations are
    expressed in terms of physical memory, and virtual memory usage
    is allowed to exceed this allocation by this ratio.
    </description>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>2.1</value>
  </property>
  <property>
    <description>Number of CPU cores that can be allocated
    for containers.</description>
    <name>yarn.nodemanager.resource.cpu-cores</name>
    <value>8</value>
  </property>
  <property>
    <description>Ratio between virtual cores to physical cores when
    allocating CPU resources to containers.
    </description>
    <name>yarn.nodemanager.vcores-pcores-ratio</name>
    <value>2</value>
  </property>
  <property>
    <description>NM Webapp address.</description>
    <name>yarn.nodemanager.webapp.address</name>
    <value>0.0.0.0:8042</value>
  </property>
  <property>
    <description>How often to monitor containers.</description>
    <name>yarn.nodemanager.container-monitor.interval-ms</name>
    <value>3000</value>
  </property>
  <property>
    <description>Class that calculates containers current resource utilization.</description>
    <name>yarn.nodemanager.container-monitor.resource-calculator.class</name>
  </property>
  <property>
    <description>Frequency of running node health script.</description>
    <name>yarn.nodemanager.health-checker.interval-ms</name>
    <value>600000</value>
  </property>
  <property>
    <description>Script time out period.</description>
    <name>yarn.nodemanager.health-checker.script.timeout-ms</name>
    <value>1200000</value>
  </property>
  <property>
    <description>The health check script to run.</description>
    <name>yarn.nodemanager.health-checker.script.path</name>
    <value></value>
  </property>
  <property>
    <description>The arguments to pass to the health check script.</description>
    <name>yarn.nodemanager.health-checker.script.opts</name>
    <value></value>
  </property>
  <property>
    <description>Frequency of running disk health checker code.</description>
    <name>yarn.nodemanager.disk-health-checker.interval-ms</name>
    <value>120000</value>
  </property>
  <property>
    <description>The minimum fraction of number of disks to be healthy for the
    nodemanager to launch new containers. This correspond to both
    yarn-nodemanager.local-dirs and yarn.nodemanager.log-dirs. i.e. If there
    are less number of healthy local-dirs (or log-dirs) available, then
    new containers will not be launched on this node.</description>
    <name>yarn.nodemanager.disk-health-checker.min-healthy-disks</name>
    <value>0.25</value>
  </property>
  <property>
    <description>The path to the Linux container executor.</description>
    <name>yarn.nodemanager.linux-container-executor.path</name>
  </property>
  <property>
    <description>The class which should help the LCE handle resources.</description>
    <name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
    <value>org.apache.hadoop.yarn.server.nodemanager.util.DefaultLCEResourcesHandler</value>
    <!– <value>org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler</value> –>
  </property>
  <property>
    <description>The cgroups hierarchy under which to place YARN proccesses (cannot contain commas).
    If yarn.nodemanager.linux-container-executor.cgroups.mount is false (that is, if cgroups have
    been pre-configured), then this cgroups hierarchy must already exist and be writable by the
    NodeManager user, otherwise the NodeManager may fail.
    Only used when the LCE resources handler is set to the CgroupsLCEResourcesHandler.</description>
    <name>yarn.nodemanager.linux-container-executor.cgroups.hierarchy</name>
    <value>/hadoop-yarn</value>
  </property>
  <property>
    <description>Whether the LCE should attempt to mount cgroups if not found.
    Only used when the LCE resources handler is set to the CgroupsLCEResourcesHandler.</description>
    <name>yarn.nodemanager.linux-container-executor.cgroups.mount</name>
    <value>false</value>
  </property>
  <property>
    <description>Where the LCE should attempt to mount cgroups if not found. Common locations
    include /sys/fs/cgroup and /cgroup; the default location can vary depending on the Linux
    distribution in use. This path must exist before the NodeManager is launched.
    Only used when the LCE resources handler is set to the CgroupsLCEResourcesHandler, and
    yarn.nodemanager.linux-container-executor.cgroups.mount is true.</description>
    <name>yarn.nodemanager.linux-container-executor.cgroups.mount-path</name>
  </property>
  <property>
    <description>T-file compression types used to compress aggregated logs.</description>
    <name>yarn.nodemanager.log-aggregation.compression-type</name>
    <value>none</value>
  </property>
  <property>
    <description>The kerberos principal for the node manager.</description>
    <name>yarn.nodemanager.principal</name>
    <value></value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value></value>
    <!– <value>mapreduce.shuffle</value> –>
  </property>
  <property>
    <description>No. of ms to wait between sending a SIGTERM and SIGKILL to a container</description>
    <name>yarn.nodemanager.sleep-delay-before-sigkill.ms</name>
    <value>250</value>
  </property>
  <property>
    <description>Max time to wait for a process to come up when trying to cleanup a container</description>
    <name>yarn.nodemanager.process-kill-wait.ms</name>
    <value>2000</value>
  </property>
  <!–Map Reduce configuration–>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>mapreduce.job.jar</name>
    <value/>
  </property>
  <property>
    <name>mapreduce.job.hdfs-servers</name>
    <value>${fs.defaultFS}</value>
  </property>
  <!– WebAppProxy Configuration–>
  <property>
    <description>The kerberos principal for the proxy, if the proxy is not
    running as part of the RM.</description>
    <name>yarn.web-proxy.principal</name>
    <value/>
  </property>
  <property>
    <description>Keytab for WebAppProxy, if the proxy is not running as part of
    the RM.</description>
    <name>yarn.web-proxy.keytab</name>
  </property>
  <property>
    <description>The address for the web proxy as HOST:PORT, if this is not
     given then the proxy will run as part of the RM</description>
     <name>yarn.web-proxy.address</name>
     <value/>
  </property>
  <!– Applications’ Configuration–>
  <property>
    <description>CLASSPATH for YARN applications. A comma-separated list
    of CLASSPATH entries</description>
     <name>yarn.application.classpath</name>
     <value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$HADOOP_YARN_HOME/share/hadoop/yarn/*,$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*</value>
  </property>
</configuration>
mapred-default.xml
==================
<?xml version=”1.0″?>
<!–
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the “License”); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at
       http://www.apache.org/licenses/LICENSE-2.0
   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an “AS IS” BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
–>
<?xml-stylesheet type=”text/xsl” href=”configuration.xsl”?>
<!– Do not modify this file directly.  Instead, copy entries that you –>
<!– wish to modify from this file into mapred-site.xml and change them –>
<!– there.  If mapred-site.xml does not already exist, create it.      –>
<configuration>
<property>
  <name>mapreduce.jobtracker.jobhistory.location</name>
  <value></value>
  <description> If job tracker is static the history files are stored
  in this single well known place. If No value is set here, by default,
  it is in the local file system at ${hadoop.log.dir}/history.
  </description>
</property>
<property>
  <name>mapreduce.jobtracker.jobhistory.task.numberprogresssplits</name>
  <value>12</value>
  <description> Every task attempt progresses from 0.0 to 1.0 [unless
  it fails or is killed].  We record, for each task attempt, certain
  statistics over each twelfth of the progress range.  You can change
  the number of intervals we divide the entire range of progress into
  by setting this property.  Higher values give more precision to the
  recorded data, but costs more memory in the job tracker at runtime.
  Each increment in this attribute costs 16 bytes per running task.
  </description>
</property>
<property>
  <name>mapreduce.job.userhistorylocation</name>
  <value></value>
  <description> User can specify a location to store the history files of
  a particular job. If nothing is specified, the logs are stored in
  output directory. The files are stored in “_logs/history/” in the directory.
  User can stop logging by giving the value “none”.
  </description>
</property>
<property>
  <name>mapreduce.jobtracker.jobhistory.completed.location</name>
  <value></value>
  <description> The completed job history files are stored at this single well
  known location. If nothing is specified, the files are stored at
  ${mapreduce.jobtracker.jobhistory.location}/done.
  </description>
</property>
<property>
  <name>mapreduce.job.committer.setup.cleanup.needed</name>
  <value>true</value>
  <description> true, if job needs job-setup and job-cleanup.
                false, otherwise
  </description>
</property>
<!– i/o properties –>
<property>
  <name>mapreduce.task.io.sort.factor</name>
  <value>10</value>
  <description>The number of streams to merge at once while sorting
  files.  This determines the number of open file handles.</description>
</property>
<property>
  <name>mapreduce.task.io.sort.mb</name>
  <value>100</value>
  <description>The total amount of buffer memory to use while sorting
  files, in megabytes.  By default, gives each merge stream 1MB, which
  should minimize seeks.</description>
</property>
<property>
  <name>mapreduce.map.sort.spill.percent</name>
  <value>0.80</value>
  <description>The soft limit in the serialization buffer. Once reached, a
  thread will begin to spill the contents to disk in the background. Note that
  collection will not block if this threshold is exceeded while a spill is
  already in progress, so spills may be larger than this threshold when it is
  set to less than .5</description>
</property>
<property>
  <name>mapreduce.jobtracker.address</name>
  <value>local</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If “local”, then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>
<property>
  <name>mapreduce.local.clientfactory.class.name</name>
  <value>org.apache.hadoop.mapred.LocalClientFactory</value>
  <description>This the client factory that is responsible for
  creating local job runner client</description>
</property>
<property>
  <name>mapreduce.jobtracker.http.address</name>
  <value>0.0.0.0:50030</value>
  <description>
    The job tracker http server address and port the server will listen on.
    If the port is 0 then the server will start on a free port.
  </description>
</property>
<property>
  <name>mapreduce.jobtracker.handler.count</name>
  <value>10</value>
  <description>
    The number of server threads for the JobTracker. This should be roughly
    4% of the number of tasktracker nodes.
  </description>
</property>
<property>
  <name>mapreduce.tasktracker.report.address</name>
  <value>127.0.0.1:0</value>
  <description>The interface and port that task tracker server listens on.
  Since it is only connected to by the tasks, it uses the local interface.
  EXPERT ONLY. Should only be changed if your host does not have the loopback
  interface.</description>
</property>
<property>
  <name>mapreduce.cluster.local.dir</name>
  <value>${hadoop.tmp.dir}/mapred/local</value>
  <description>The local directory where MapReduce stores intermediate
  data files.  May be a comma-separated list of
  directories on different devices in order to spread disk i/o.
  Directories that do not exist are ignored.
  </description>
</property>
<property>
  <name>mapreduce.jobtracker.system.dir</name>
  <value>${hadoop.tmp.dir}/mapred/system</value>
  <description>The directory where MapReduce stores control files.
  </description>
</property>
<property>
  <name>mapreduce.jobtracker.staging.root.dir</name>
  <value>${hadoop.tmp.dir}/mapred/staging</value>
  <description>The root of the staging area for users’ job files
  In practice, this should be the directory where users’ home
  directories are located (usually /user)
  </description>
</property>
<property>
  <name>mapreduce.cluster.temp.dir</name>
  <value>${hadoop.tmp.dir}/mapred/temp</value>
  <description>A shared directory for temporary files.
  </description>
</property>
<property>
  <name>mapreduce.tasktracker.local.dir.minspacestart</name>
  <value>0</value>
  <description>If the space in mapreduce.cluster.local.dir drops under this,
  do not ask for more tasks.
  Value in bytes.
  </description>
</property>
<property>
  <name>mapreduce.tasktracker.local.dir.minspacekill</name>
  <value>0</value>
  <description>If the space in mapreduce.cluster.local.dir drops under this,
    do not ask more tasks until all the current ones have finished and
    cleaned up. Also, to save the rest of the tasks we have running,
    kill one of them, to clean up some space. Start with the reduce tasks,
    then go with the ones that have finished the least.
    Value in bytes.
  </description>
</property>
<property>
  <name>mapreduce.jobtracker.expire.trackers.interval</name>
  <value>600000</value>
  <description>Expert: The time-interval, in miliseconds, after which
  a tasktracker is declared ‘lost’ if it doesn’t send heartbeats.
  </description>
</property>
<property>
  <name>mapreduce.tasktracker.instrumentation</name>
  <value>org.apache.hadoop.mapred.TaskTrackerMetricsInst</value>
  <description>Expert: The instrumentation class to associate with each TaskTracker.
  </description>
</property>
<property>
  <name>mapreduce.tasktracker.resourcecalculatorplugin</name>
  <value></value>
  <description>
   Name of the class whose instance will be used to query resource information
   on the tasktracker.
   The class must be an instance of
   org.apache.hadoop.util.ResourceCalculatorPlugin. If the value is null, the
   tasktracker attempts to use a class appropriate to the platform.
   Currently, the only platform supported is Linux.
  </description>
</property>
<property>
  <name>mapreduce.tasktracker.taskmemorymanager.monitoringinterval</name>
  <value>5000</value>
  <description>The interval, in milliseconds, for which the tasktracker waits
   between two cycles of monitoring its tasks’ memory usage. Used only if
   tasks’ memory management is enabled via mapred.tasktracker.tasks.maxmemory.
   </description>
</property>
<property>
  <name>mapreduce.tasktracker.tasks.sleeptimebeforesigkill</name>
  <value>5000</value>
  <description>The time, in milliseconds, the tasktracker waits for sending a
  SIGKILL to a task, after it has been sent a SIGTERM. This is currently
  not used on WINDOWS where tasks are just sent a SIGTERM.
  </description>
</property>
<property>
  <name>mapreduce.job.maps</name>
  <value>2</value>
  <description>The default number of map tasks per job.
  Ignored when mapreduce.jobtracker.address is “local”.
  </description>
</property>
<property>
  <name>mapreduce.job.reduces</name>
  <value>1</value>
  <description>The default number of reduce tasks per job. Typically set to 99%
  of the cluster’s reduce capacity, so that if a node fails the reduces can
  still be executed in a single wave.
  Ignored when mapreduce.jobtracker.address is “local”.
  </description>
</property>
<property>
  <name>mapreduce.jobtracker.restart.recover</name>
  <value>false</value>
  <description>”true” to enable (job) recovery upon restart,
               “false” to start afresh
  </description>
</property>
<property>
  <name>mapreduce.jobtracker.jobhistory.block.size</name>
  <value>3145728</value>
  <description>The block size of the job history file. Since the job recovery
               uses job history, its important to dump job history to disk as
               soon as possible. Note that this is an expert level parameter.
               The default value is set to 3 MB.
  </description>
</property>
<property>
  <name>mapreduce.jobtracker.taskscheduler</name>
  <value>org.apache.hadoop.mapred.JobQueueTaskScheduler</value>
  <description>The class responsible for scheduling the tasks.</description>
</property>
<property>
  <name>mapreduce.job.split.metainfo.maxsize</name>
  <value>10000000</value>
  <description>The maximum permissible size of the split metainfo file.
  The JobTracker won’t attempt to read split metainfo files bigger than
  the configured value.
  No limits if set to -1.
  </description>
</property>
<property>
  <name>mapreduce.jobtracker.taskscheduler.maxrunningtasks.perjob</name>
  <value></value>
  <description>The maximum number of running tasks for a job before
  it gets preempted. No limits if undefined.
  </description>
</property>
<property>
  <name>mapreduce.map.maxattempts</name>
  <value>4</value>
  <description>Expert: The maximum number of attempts per map task.
  In other words, framework will try to execute a map task these many number
  of times before giving up on it.
  </description>
</property>
<property>
  <name>mapreduce.reduce.maxattempts</name>
  <value>4</value>
  <description>Expert: The maximum number of attempts per reduce task.
  In other words, framework will try to execute a reduce task these many number
  of times before giving up on it.
  </description>
</property>
<property>
  <name>mapreduce.reduce.shuffle.retry-delay.max.ms</name>
  <value>60000</value>
  <description>The maximum number of ms the reducer will delay before retrying
  to download map data.
  </description>
</property>
<property>
  <name>mapreduce.reduce.shuffle.parallelcopies</name>
  <value>5</value>
  <description>The default number of parallel transfers run by reduce
  during the copy(shuffle) phase.
  </description>
</property>
<property>
  <name>mapreduce.reduce.shuffle.connect.timeout</name>
  <value>180000</value>
  <description>Expert: The maximum amount of time (in milli seconds) reduce
  task spends in trying to connect to a tasktracker for getting map output.
  </description>
</property>
<property>
  <name>mapreduce.reduce.shuffle.read.timeout</name>
  <value>180000</value>
  <description>Expert: The maximum amount of time (in milli seconds) reduce
  task waits for map output data to be available for reading after obtaining
  connection.
  </description>
</property>
<property>
  <name>mapreduce.task.timeout</name>
  <value>600000</value>
  <description>The number of milliseconds before a task will be
  terminated if it neither reads an input, writes an output, nor
  updates its status string.  A value of 0 disables the timeout.
  </description>
</property>
<property>
  <name>mapreduce.tasktracker.map.tasks.maximum</name>
  <value>2</value>
  <description>The maximum number of map tasks that will be run
  simultaneously by a task tracker.
  </description>
</property>
<property>
  <name>mapreduce.tasktracker.reduce.tasks.maximum</name>
  <value>2</value>
  <description>The maximum number of reduce tasks that will be run
  simultaneously by a task tracker.
  </description>
</property>
<property>
  <name>mapreduce.jobtracker.retiredjobs.cache.size</name>
  <value>1000</value>
  <description>The number of retired job status to keep in the cache.
  </description>
</property>
<property>
  <name>mapreduce.tasktracker.outofband.heartbeat</name>
  <value>false</value>
  <description>Expert: Set this to true to let the tasktracker send an
  out-of-band heartbeat on task-completion for better latency.
  </description>
</property>
<property>
  <name>mapreduce.jobtracker.jobhistory.lru.cache.size</name>
  <value>5</value>
  <description>The number of job history files loaded in memory. The jobs are
  loaded when they are first accessed. The cache is cleared based on LRU.
  </description>
</property>
<property>
  <name>mapreduce.jobtracker.instrumentation</name>
  <value>org.apache.hadoop.mapred.JobTrackerMetricsInst</value>
  <description>Expert: The instrumentation class to associate with each JobTracker.
  </description>
</property>
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx200m</value>
  <description>Java opts for the task tracker child processes.
  The following symbol, if present, will be interpolated: @taskid@ is replaced
  by current TaskID. Any other occurrences of ‘@’ will go unchanged.
  For example, to enable verbose gc logging to a file named for the taskid in
  /tmp and to set the heap maximum to be a gigabyte, pass a ‘value’ of:
        -Xmx1024m -verbose:gc -Xloggc:/tmp/@taskid@.gc
  Usage of -Djava.library.path can cause programs to no longer function if
  hadoop native libraries are used. These values should instead be set as part
  of LD_LIBRARY_PATH in the map / reduce JVM env using the mapreduce.map.env and
  mapreduce.reduce.env config settings.
  </description>
</property>
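Quick aside: 200m of heap per task is tiny, so if your maps or reduces die with OutOfMemoryError this is the first knob to turn. A sketch for mapred-site.xml (the -Xmx number here is just an example, size it to the free RAM on each node, and on yarn keep it inside the container memory):
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m</value>
</property>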
<property>
  <name>mapred.child.env</name>
  <value></value>
  <description>User added environment variables for the task tracker child
  processes. Example :
  1) A=foo  This will set the env variable A to foo
  2) B=$B:c This will inherit the tasktracker's B env variable.
  </description>
</property>
<property>
  <name>mapreduce.admin.user.env</name>
  <value>LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native</value>
  <description>Expert: Additional execution environment entries for
  map and reduce task processes. This is not an additive property.
  You must preserve the original value if you want your map and
  reduce tasks to have access to native libraries (compression, etc).
  </description>
</property>
<property>
  <name>mapreduce.task.tmp.dir</name>
  <value>./tmp</value>
  <description> To set the value of tmp directory for map and reduce tasks.
  If the value is an absolute path, it is directly assigned. Otherwise, it is
  prepended with task’s working directory. The java tasks are executed with
  option -Djava.io.tmpdir=’the absolute path of the tmp dir’. Pipes and
  streaming are set with environment variable,
   TMPDIR=’the absolute path of the tmp dir’
  </description>
</property>
<property>
  <name>mapreduce.map.log.level</name>
  <value>INFO</value>
  <description>The logging level for the map task. The allowed levels are:
  OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE and ALL.
  </description>
</property>
<property>
  <name>mapreduce.reduce.log.level</name>
  <value>INFO</value>
  <description>The logging level for the reduce task. The allowed levels are:
  OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE and ALL.
  </description>
</property>
<property>
  <name>mapreduce.map.cpu.vcores</name>
  <value>1</value>
  <description>
      The number of virtual cores required for each map task.
  </description>
</property>
<property>
  <name>mapreduce.reduce.cpu.vcores</name>
  <value>1</value>
  <description>
      The number of virtual cores required for each reduce task.
  </description>
</property>
<property>
  <name>mapreduce.reduce.merge.inmem.threshold</name>
  <value>1000</value>
  <description>The threshold, in terms of the number of files
  for the in-memory merge process. When we accumulate threshold number of files
  we initiate the in-memory merge and spill to disk. A value of 0 or less
  means there is no threshold, and the merge is triggered only by the ramfs's
  memory consumption.
  </description>
</property>
<property>
  <name>mapreduce.reduce.shuffle.merge.percent</name>
  <value>0.66</value>
  <description>The usage threshold at which an in-memory merge will be
  initiated, expressed as a percentage of the total memory allocated to
  storing in-memory map outputs, as defined by
  mapreduce.reduce.shuffle.input.buffer.percent.
  </description>
</property>
<property>
  <name>mapreduce.reduce.shuffle.input.buffer.percent</name>
  <value>0.70</value>
  <description>The percentage of memory to be allocated from the maximum heap
  size to storing map outputs during the shuffle.
  </description>
</property>
<property>
  <name>mapreduce.reduce.input.buffer.percent</name>
  <value>0.0</value>
  <description>The percentage of memory- relative to the maximum heap size- to
  retain map outputs during the reduce. When the shuffle is concluded, any
  remaining map outputs in memory must consume less than this threshold before
  the reduce can begin.
  </description>
</property>
<property>
  <name>mapreduce.reduce.shuffle.memory.limit.percent</name>
  <value>0.25</value>
  <description>Expert: Maximum percentage of the in-memory limit that a
  single shuffle can consume</description>
</property>
<property>
  <name>mapreduce.shuffle.ssl.enabled</name>
  <value>false</value>
  <description>
    Whether to use SSL for for the Shuffle HTTP endpoints.
  </description>
</property>
<property>
  <name>mapreduce.shuffle.ssl.file.buffer.size</name>
  <value>65536</value>
  <description>Buffer size for reading spills from file when using SSL.
  </description>
</property>
<property>
  <name>mapreduce.shuffle.max.connections</name>
  <value>0</value>
  <description>Max allowed connections for the shuffle.  Set to 0 (zero)
               to indicate no limit on the number of connections.
  </description>
</property>
<property>
  <name>mapreduce.reduce.markreset.buffer.percent</name>
  <value>0.0</value>
  <description>The percentage of memory -relative to the maximum heap size- to
  be used for caching values when using the mark-reset functionality.
  </description>
</property>
<property>
  <name>mapreduce.map.speculative</name>
  <value>true</value>
  <description>If true, then multiple instances of some map tasks
               may be executed in parallel.</description>
</property>
<property>
  <name>mapreduce.reduce.speculative</name>
  <value>true</value>
  <description>If true, then multiple instances of some reduce tasks
               may be executed in parallel.</description>
</property>
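Side note: on a 3 node cluster speculative execution mostly just burns the few slots you have, so you can switch it off per job with the generic -D options (this works for jobs that go through ToolRunner/GenericOptionsParser, like the bundled examples; the jar path and the in/out dirs below are placeholders for your own):
hadoop jar hadoop-mapreduce-examples.jar wordcount \
  -D mapreduce.map.speculative=false \
  -D mapreduce.reduce.speculative=false \
  in_dir out_dir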
<property>
  <name>mapreduce.job.speculative.speculativecap</name>
  <value>0.1</value>
  <description>The max percent (0-1) of running tasks that
  can be speculatively re-executed at any time.</description>
</property>
<property>
  <name>mapreduce.job.speculative.slowtaskthreshold</name>
  <value>1.0</value>
  <description>The number of standard deviations by which a task's
  ave progress-rates must be lower than the average of all running tasks'
  for the task to be considered too slow.
  </description>
</property>
<property>
  <name>mapreduce.job.speculative.slownodethreshold</name>
  <value>1.0</value>
  <description>The number of standard deviations by which a Task
  Tracker’s ave map and reduce progress-rates (finishTime-dispatchTime)
  must be lower than the average of all successful map/reduce task’s for
  the TT to be considered too slow to give a speculative task to.
  </description>
</property>
<property>
  <name>mapreduce.job.jvm.numtasks</name>
  <value>1</value>
  <description>How many tasks to run per jvm. If set to -1, there is
  no limit.
  </description>
</property>
<property>
  <name>mapreduce.job.ubertask.enable</name>
  <value>false</value>
  <description>Whether to enable the small-jobs “ubertask” optimization,
  which runs “sufficiently small” jobs sequentially within a single JVM.
  “Small” is defined by the following maxmaps, maxreduces, and maxbytes
  settings.  Users may override this value.
  </description>
</property>
<property>
  <name>mapreduce.job.ubertask.maxmaps</name>
  <value>9</value>
  <description>Threshold for number of maps, beyond which job is considered
  too big for the ubertasking optimization.  Users may override this value,
  but only downward.
  </description>
</property>
<property>
  <name>mapreduce.job.ubertask.maxreduces</name>
  <value>1</value>
  <description>Threshold for number of reduces, beyond which job is considered
  too big for the ubertasking optimization.  CURRENTLY THE CODE CANNOT SUPPORT
  MORE THAN ONE REDUCE and will ignore larger values.  (Zero is a valid max,
  however.)  Users may override this value, but only downward.
  </description>
</property>
<property>
  <name>mapreduce.job.ubertask.maxbytes</name>
  <value></value>
  <description>Threshold for number of input bytes, beyond which job is
  considered too big for the ubertasking optimization.  If no value is
  specified, dfs.block.size is used as a default.  Be sure to specify a
  default value in mapred-site.xml if the underlying filesystem is not HDFS.
  Users may override this value, but only downward.
  </description>
</property>
<property>
  <name>mapreduce.input.fileinputformat.split.minsize</name>
  <value>0</value>
  <description>The minimum size chunk that map input should be split
  into.  Note that some file formats may have minimum split sizes that
  take priority over this setting.</description>
</property>
<property>
  <name>mapreduce.jobtracker.maxtasks.perjob</name>
  <value>-1</value>
  <description>The maximum number of tasks for a single job.
  A value of -1 indicates that there is no maximum.  </description>
</property>
<property>
  <name>mapreduce.client.submit.file.replication</name>
  <value>10</value>
  <description>The replication level for submitted job files.  This
  should be around the square root of the number of nodes.
  </description>
</property>
<property>
  <name>mapreduce.tasktracker.dns.interface</name>
  <value>default</value>
  <description>The name of the Network Interface from which a task
  tracker should report its IP address.
  </description>
 </property>
<property>
  <name>mapreduce.tasktracker.dns.nameserver</name>
  <value>default</value>
  <description>The host name or IP address of the name server (DNS)
  which a TaskTracker should use to determine the host name used by
  the JobTracker for communication and display purposes.
  </description>
 </property>
<property>
  <name>mapreduce.tasktracker.http.threads</name>
  <value>40</value>
  <description>The number of worker threads for the http server. This is
               used for map output fetching.
  </description>
</property>
<property>
  <name>mapreduce.tasktracker.http.address</name>
  <value>0.0.0.0:50060</value>
  <description>
    The task tracker http server address and port.
    If the port is 0 then the server will start on a free port.
  </description>
</property>
<property>
  <name>mapreduce.task.files.preserve.failedtasks</name>
  <value>false</value>
  <description>Should the files for failed tasks be kept. This should only be
               used on jobs that are failing, because the storage is never
               reclaimed. It also prevents the map outputs from being erased
               from the reduce directory as they are consumed.</description>
</property>
<!--
  <property>
  <name>mapreduce.task.files.preserve.filepattern</name>
  <value>.*_m_123456_0</value>
  <description>Keep all files from tasks whose task names match the given
               regular expression. Defaults to none.</description>
  </property>
-->
<property>
  <name>mapreduce.output.fileoutputformat.compress</name>
  <value>false</value>
  <description>Should the job outputs be compressed?
  </description>
</property>
<property>
  <name>mapreduce.output.fileoutputformat.compress.type</name>
  <value>RECORD</value>
  <description>If the job outputs are to be compressed as SequenceFiles, how should
               they be compressed? Should be one of NONE, RECORD or BLOCK.
  </description>
</property>
<property>
  <name>mapreduce.output.fileoutputformat.compress.codec</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec</value>
  <description>If the job outputs are compressed, how should they be compressed?
  </description>
</property>
<property>
  <name>mapreduce.map.output.compress</name>
  <value>false</value>
  <description>Should the outputs of the maps be compressed before being
               sent across the network. Uses SequenceFile compression.
  </description>
</property>
<property>
  <name>mapreduce.map.output.compress.codec</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec</value>
  <description>If the map outputs are compressed, how should they be
               compressed?
  </description>
</property>
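Side note: compressing the map output is usually a cheap win when the shuffle has to cross a slow home LAN. A sketch for mapred-site.xml (DefaultCodec is zlib; only swap in another codec if you know its native libs are actually present on your nodes):
<property>
  <name>mapreduce.map.output.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.map.output.compress.codec</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec</value>
</property>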
<property>
  <name>map.sort.class</name>
  <value>org.apache.hadoop.util.QuickSort</value>
  <description>The default sort class for sorting keys.
  </description>
</property>
<property>
  <name>mapreduce.task.userlog.limit.kb</name>
  <value>0</value>
  <description>The maximum size of user-logs of each task in KB. 0 disables the cap.
  </description>
</property>
<property>
  <name>mapreduce.job.userlog.retain.hours</name>
  <value>24</value>
  <description>The maximum time, in hours, for which the user-logs are to be
               retained after the job completion.
  </description>
</property>
<property>
  <name>mapreduce.jobtracker.hosts.filename</name>
  <value></value>
  <description>Names a file that contains the list of nodes that may
  connect to the jobtracker.  If the value is empty, all hosts are
  permitted.</description>
</property>
<property>
  <name>mapreduce.jobtracker.hosts.exclude.filename</name>
  <value></value>
  <description>Names a file that contains the list of hosts that
  should be excluded by the jobtracker.  If the value is empty, no
  hosts are excluded.</description>
</property>
<property>
  <name>mapreduce.jobtracker.heartbeats.in.second</name>
  <value>100</value>
  <description>Expert: Approximate number of heart-beats that could arrive
               at JobTracker in a second. Assuming each RPC can be processed
               in 10msec, the default value is made 100 RPCs in a second.
  </description>
</property>
<property>
  <name>mapreduce.jobtracker.tasktracker.maxblacklists</name>
  <value>4</value>
  <description>The number of blacklistings of a tasktracker by various jobs
               after which the tasktracker could be blacklisted across
               all jobs. The tracker will be given tasks again later
               (after a day). The tracker will become a healthy
               tracker after a restart.
  </description>
</property>
<property>
  <name>mapreduce.job.maxtaskfailures.per.tracker</name>
  <value>3</value>
  <description>The number of task-failures on a tasktracker of a given job
               after which new tasks of that job aren’t assigned to it. It
               MUST be less than mapreduce.map.maxattempts and
               mapreduce.reduce.maxattempts otherwise the failed task will
               never be tried on a different node.
  </description>
</property>
<property>
  <name>mapreduce.client.output.filter</name>
  <value>FAILED</value>
  <description>The filter for controlling the output of the task’s userlogs sent
               to the console of the JobClient.
               The permissible options are: NONE, KILLED, FAILED, SUCCEEDED and
               ALL.
  </description>
</property>
  <property>
    <name>mapreduce.client.completion.pollinterval</name>
    <value>5000</value>
    <description>The interval (in milliseconds) between which the JobClient
    polls the JobTracker for updates about job status. You may want to set this
    to a lower value to make tests run faster on a single node system. Adjusting
    this value in production may lead to unwanted client-server traffic.
    </description>
  </property>
  <property>
    <name>mapreduce.client.progressmonitor.pollinterval</name>
    <value>1000</value>
    <description>The interval (in milliseconds) between which the JobClient
    reports status to the console and checks for job completion. You may want to set this
    to a lower value to make tests run faster on a single node system. Adjusting
    this value in production may lead to unwanted client-server traffic.
    </description>
  </property>
  <property>
    <name>mapreduce.jobtracker.persist.jobstatus.active</name>
    <value>true</value>
    <description>Indicates if persistency of job status information is
      active or not.
    </description>
  </property>
  <property>
  <name>mapreduce.jobtracker.persist.jobstatus.hours</name>
  <value>1</value>
  <description>The number of hours job status information is persisted in DFS.
    The job status information will be available after it drops off the memory
    queue and between jobtracker restarts. With a zero value the job status
    information is not persisted at all in DFS.
  </description>
</property>
  <property>
    <name>mapreduce.jobtracker.persist.jobstatus.dir</name>
    <value>/jobtracker/jobsInfo</value>
    <description>The directory where the job status information is persisted
      in a file system to be available after it drops off the memory queue and
      between jobtracker restarts.
    </description>
  </property>
  <property>
    <name>mapreduce.task.profile</name>
    <value>false</value>
    <description>Whether the system should collect profiler
     information for some of the tasks in this job. The information is stored
     in the user log directory. The value is “true” if task profiling
     is enabled.</description>
  </property>
  <property>
    <name>mapreduce.task.profile.maps</name>
    <value>0-2</value>
    <description> To set the ranges of map tasks to profile.
    mapreduce.task.profile has to be set to true for the value to be accounted.
    </description>
  </property>
  <property>
    <name>mapreduce.task.profile.reduces</name>
    <value>0-2</value>
    <description> To set the ranges of reduce tasks to profile.
    mapreduce.task.profile has to be set to true for the value to be accounted.
    </description>
  </property>
  <property>
    <name>mapreduce.task.skip.start.attempts</name>
    <value>2</value>
    <description> The number of Task attempts AFTER which skip mode
    will be kicked off. When skip mode is kicked off, the
    tasks reports the range of records which it will process
    next, to the TaskTracker. So that on failures, TT knows which
    ones are possibly the bad records. On further executions,
    those are skipped.
    </description>
  </property>
  <property>
    <name>mapreduce.map.skip.proc.count.autoincr</name>
    <value>true</value>
    <description> The flag which if set to true,
    SkipBadRecords.COUNTER_MAP_PROCESSED_RECORDS is incremented
    by MapRunner after invoking the map function. This value must be set to
    false for applications which process the records asynchronously
    or buffer the input records. For example streaming.
    In such cases applications should increment this counter on their own.
    </description>
  </property>
  <property>
    <name>mapreduce.reduce.skip.proc.count.autoincr</name>
    <value>true</value>
    <description> The flag which if set to true,
    SkipBadRecords.COUNTER_REDUCE_PROCESSED_GROUPS is incremented
    by framework after invoking the reduce function. This value must be set to
    false for applications which process the records asynchronously
    or buffer the input records. For example streaming.
    In such cases applications should increment this counter on their own.
    </description>
  </property>
  <property>
    <name>mapreduce.job.skip.outdir</name>
    <value></value>
    <description> If no value is specified here, the skipped records are
    written to the output directory at _logs/skip.
    User can stop writing skipped records by giving the value “none”.
    </description>
  </property>
  <property>
    <name>mapreduce.map.skip.maxrecords</name>
    <value>0</value>
    <description> The number of acceptable skip records surrounding the bad
    record PER bad record in mapper. The number includes the bad record as well.
    To turn the feature of detection/skipping of bad records off, set the
    value to 0.
    The framework tries to narrow down the skipped range by retrying
    until this threshold is met OR all attempts get exhausted for this task.
    Set the value to Long.MAX_VALUE to indicate that framework need not try to
    narrow down. Whatever records(depends on application) get skipped are
    acceptable.
    </description>
  </property>
  <property>
    <name>mapreduce.reduce.skip.maxgroups</name>
    <value>0</value>
    <description> The number of acceptable skip groups surrounding the bad
    group PER bad group in reducer. The number includes the bad group as well.
    To turn the feature of detection/skipping of bad groups off, set the
    value to 0.
    The framework tries to narrow down the skipped range by retrying
    until this threshold is met OR all attempts get exhausted for this task.
    Set the value to Long.MAX_VALUE to indicate that framework need not try to
    narrow down. Whatever groups(depends on application) get skipped are
    acceptable.
    </description>
  </property>
  <property>
    <name>mapreduce.ifile.readahead</name>
    <value>true</value>
    <description>Configuration key to enable/disable IFile readahead.
    </description>
  </property>
  <property>
    <name>mapreduce.ifile.readahead.bytes</name>
    <value>4194304</value>
    <description>Configuration key to set the IFile readahead length in bytes.
    </description>
  </property>
<!-- Proxy Configuration -->
<property>
  <name>mapreduce.jobtracker.taskcache.levels</name>
  <value>2</value>
  <description> This is the max level of the task cache. For example, if
    the level is 2, the tasks cached are at the host level and at the rack
    level.
  </description>
</property>
<property>
  <name>mapreduce.job.queuename</name>
  <value>default</value>
  <description> Queue to which a job is submitted. This must match one of the
    queues defined in mapred-queues.xml for the system. Also, the ACL setup
    for the queue must allow the current user to submit a job to the queue.
    Before specifying a queue, ensure that the system is configured with
    the queue, and access is allowed for submitting jobs to the queue.
  </description>
</property>
<property>
  <name>mapreduce.cluster.acls.enabled</name>
  <value>false</value>
  <description> Specifies whether ACLs should be checked
    for authorization of users for doing various queue and job level operations.
    ACLs are disabled by default. If enabled, access control checks are made by
    JobTracker and TaskTracker when requests are made by users for queue
    operations like submit job to a queue and kill a job in the queue and job
    operations like viewing the job-details (See mapreduce.job.acl-view-job)
    or for modifying the job (See mapreduce.job.acl-modify-job) using
    Map/Reduce APIs, RPCs or via the console and web user interfaces.
    To enable this flag (mapreduce.cluster.acls.enabled), it must be set
    to true in mapred-site.xml on the JobTracker node and on all TaskTracker nodes.
  </description>
</property>
<property>
  <name>mapreduce.job.acl-modify-job</name>
  <value> </value>
  <description> Job specific access-control list for ‘modifying’ the job. It
    is only used if authorization is enabled in Map/Reduce by setting the
    configuration property mapreduce.cluster.acls.enabled to true.
    This specifies the list of users and/or groups who can do modification
    operations on the job. For specifying a list of users and groups the
    format to use is “user1,user2 group1,group”. If set to ‘*’, it allows all
    users/groups to modify this job. If set to ‘ ‘(i.e. space), it allows
    none. This configuration is used to guard all the modifications with respect
    to this job and takes care of all the following operations:
      o killing this job
      o killing a task of this job, failing a task of this job
      o setting the priority of this job
    Each of these operations are also protected by the per-queue level ACL
    “acl-administer-jobs” configured via mapred-queues.xml. So a caller should
    have the authorization to satisfy either the queue-level ACL or the
    job-level ACL.
    Irrespective of this ACL configuration, (a) job-owner, (b) the user who
    started the cluster, (c) members of an admin configured supergroup
    configured via mapreduce.cluster.permissions.supergroup and (d) queue
    administrators of the queue to which this job was submitted to configured
    via acl-administer-jobs for the specific queue in mapred-queues.xml can
    do all the modification operations on a job.
    By default, nobody else besides job-owner, the user who started the cluster,
    members of supergroup and queue administrators can perform modification
    operations on a job.
  </description>
</property>
<property>
  <name>mapreduce.job.acl-view-job</name>
  <value> </value>
  <description> Job specific access-control list for ‘viewing’ the job. It is
    only used if authorization is enabled in Map/Reduce by setting the
    configuration property mapreduce.cluster.acls.enabled to true.
    This specifies the list of users and/or groups who can view private details
    about the job. For specifying a list of users and groups the
    format to use is “user1,user2 group1,group”. If set to ‘*’, it allows all
    users/groups to view this job. If set to ‘ ‘(i.e. space), it allows
    none. This configuration is used to guard some of the job-views and at
    present only protects APIs that can return possibly sensitive information
    of the job-owner like
      o job-level counters
      o task-level counters
      o tasks’ diagnostic information
      o task-logs displayed on the TaskTracker web-UI and
      o job.xml showed by the JobTracker’s web-UI
    Every other piece of information of jobs is still accessible by any other
    user, for e.g., JobStatus, JobProfile, list of jobs in the queue, etc.
    Irrespective of this ACL configuration, (a) job-owner, (b) the user who
    started the cluster, (c) members of an admin configured supergroup
    configured via mapreduce.cluster.permissions.supergroup and (d) queue
    administrators of the queue to which this job was submitted to configured
    via acl-administer-jobs for the specific queue in mapred-queues.xml can
    do all the view operations on a job.
    By default, nobody else besides job-owner, the user who started the
    cluster, members of supergroup and queue administrators can perform
    view operations on a job.
  </description>
</property>
<property>
  <name>mapreduce.tasktracker.indexcache.mb</name>
  <value>10</value>
  <description> The maximum memory that a task tracker allows for the
    index cache that is used when serving map outputs to reducers.
  </description>
</property>
<property>
  <name>mapreduce.task.merge.progress.records</name>
  <value>10000</value>
  <description> The number of records to process during merge before
   sending a progress notification to the TaskTracker.
  </description>
</property>
<property>
  <name>mapreduce.job.reduce.slowstart.completedmaps</name>
  <value>0.05</value>
  <description>Fraction of the number of maps in the job which should be
  complete before reduces are scheduled for the job.
  </description>
</property>
<property>
<name>mapreduce.job.complete.cancel.delegation.tokens</name>
  <value>true</value>
  <description> If false, do not unregister/cancel delegation tokens from
    renewal, because the same tokens may be used by spawned jobs.
  </description>
</property>
<property>
  <name>mapreduce.tasktracker.taskcontroller</name>
  <value>org.apache.hadoop.mapred.DefaultTaskController</value>
  <description>TaskController which is used to launch and manage task execution
  </description>
</property>
<property>
  <name>mapreduce.tasktracker.group</name>
  <value></value>
  <description>Expert: Group to which TaskTracker belongs. If
   LinuxTaskController is configured via mapreduce.tasktracker.taskcontroller,
   the group owner of the task-controller binary should be same as this group.
  </description>
</property>
<property>
  <name>mapreduce.shuffle.port</name>
  <value>8080</value>
  <description>Default port that the ShuffleHandler will run on. ShuffleHandler
   is a service run at the NodeManager to facilitate transfers of intermediate
   Map outputs to requesting Reducers.
  </description>
</property>
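Quick aside: 8080 is a popular port and a NAS often has its own web admin app sitting on it, so if the ShuffleHandler refuses to start check for a clash. You could move it in mapred-site.xml on every node that runs a nodemanager (then restart yarn); 13562 below is just an arbitrary free port I picked, use whatever is open on your boxes:
<property>
  <name>mapreduce.shuffle.port</name>
  <value>13562</value>
</property>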
<property>
  <name>mapreduce.job.reduce.shuffle.consumer.plugin.class</name>
  <value>org.apache.hadoop.mapreduce.task.reduce.Shuffle</value>
  <description>
  Name of the class whose instance will be used
  to send shuffle requests by reducetasks of this job.
  The class must be an instance of org.apache.hadoop.mapred.ShuffleConsumerPlugin.
  </description>
</property>
<!--  Node health script variables -->
<property>
  <name>mapreduce.tasktracker.healthchecker.script.path</name>
  <value></value>
  <description>Absolute path to the script which is
  periodically run by the node health monitoring service to determine if
  the node is healthy or not. If the value of this key is empty or the
  file does not exist in the location configured here, the node health
  monitoring service is not started.</description>
</property>
<property>
  <name>mapreduce.tasktracker.healthchecker.interval</name>
  <value>60000</value>
  <description>Frequency of the node health script to be run,
  in milliseconds</description>
</property>
<property>
  <name>mapreduce.tasktracker.healthchecker.script.timeout</name>
  <value>600000</value>
  <description>Time after which the node health script will be killed if it is
  unresponsive and the script considered to have failed.</description>
</property>
<property>
  <name>mapreduce.tasktracker.healthchecker.script.args</name>
  <value></value>
  <description>Comma-separated list of arguments which are to be passed to
  the node health script when it is launched.
  </description>
</property>
<!--  end of node health script variables -->
<!-- MR YARN Application properties -->
<property>
 <name>mapreduce.job.counters.limit</name>
  <value>120</value>
  <description>Limit on the number of user counters allowed per job.
  </description>
</property>
<property>
  <name>mapreduce.framework.name</name>
  <value>local</value>
  <description>The runtime framework for executing MapReduce jobs.
  Can be one of local, classic or yarn.
  </description>
</property>
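Worth pointing out: "local" means everything runs in-process on one machine, which is not what we want for a cluster like this one. For the yarn setup in this article you would point it at yarn in mapred-site.xml:
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>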
<property>
  <name>yarn.app.mapreduce.am.staging-dir</name>
  <value>/tmp/hadoop-yarn/staging</value>
  <description>The staging dir used while submitting jobs.
  </description>
</property>
<!-- Job Notification Configuration -->
<property>
 <name>mapreduce.job.end-notification.url</name>
 <!--<value>http://localhost:8080/jobstatus.php?jobId=$jobId&amp;jobStatus=$jobStatus</value>-->
 <description>Indicates the URL which will be called on completion of a job to
              report its end status.
              The user can give at most 2 variables in the URI: $jobId and $jobStatus.
              If they are present in URI, then they will be replaced by their
              respective values.
</description>
</property>
<property>
  <name>mapreduce.job.end-notification.retry.attempts</name>
  <value>0</value>
  <description>The number of times the submitter of the job wants to retry job
    end notification if it fails. This is capped by
    mapreduce.job.end-notification.max.attempts</description>
</property>
<property>
  <name>mapreduce.job.end-notification.retry.interval</name>
  <value>1000</value>
  <description>The number of milliseconds the submitter of the job wants to
    wait before job end notification is retried if it fails. This is capped by
    mapreduce.job.end-notification.max.retry.interval</description>
</property>
<property>
  <name>mapreduce.job.end-notification.max.attempts</name>
  <value>5</value>
  <final>true</final>
  <description>The maximum number of times a URL will be read for providing job
    end notification. Cluster administrators can set this to limit how long
    after end of a job, the Application Master waits before exiting. Must be
    marked as final to prevent users from overriding this.
  </description>
</property>
<property>
  <name>mapreduce.job.end-notification.max.retry.interval</name>
  <value>5000</value>
  <final>true</final>
  <description>The maximum amount of time (in milliseconds) to wait before
     retrying job end notification. Cluster administrators can set this to
     limit how long the Application Master waits before exiting. Must be marked
     as final to prevent users from overriding this.</description>
</property>
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value></value>
  <description>User added environment variables for the MR App Master
  processes. Example :
  1) A=foo  This will set the env variable A to foo
  2) B=$B:c This will inherit the tasktracker's B env variable.
  </description>
</property>
<property>
  <name>yarn.app.mapreduce.am.admin.user.env</name>
  <value></value>
  <description> Environment variables for the MR App Master
  processes for admin purposes. These values are set first and can be
  overridden by the user env (yarn.app.mapreduce.am.env) Example :
  1) A=foo  This will set the env variable A to foo
  2) B=$B:c This will inherit the app master's B env variable.
  </description>
</property>
<property>
  <name>yarn.app.mapreduce.am.command-opts</name>
  <value>-Xmx1024m</value>
  <description>Java opts for the MR App Master processes.
  The following symbol, if present, will be interpolated: @taskid@ is replaced
  by current TaskID. Any other occurrences of ‘@’ will go unchanged.
  For example, to enable verbose gc logging to a file named for the taskid in
  /tmp and to set the heap maximum to be a gigabyte, pass a ‘value’ of:
        -Xmx1024m -verbose:gc -Xloggc:/tmp/@taskid@.gc
  Usage of -Djava.library.path can cause programs to no longer function if
  hadoop native libraries are used. These values should instead be set as part
  of LD_LIBRARY_PATH in the map / reduce JVM env using the mapreduce.map.env and
  mapreduce.reduce.env config settings.
  </description>
</property>
<property>
  <name>yarn.app.mapreduce.am.admin-command-opts</name>
  <value></value>
  <description>Java opts for the MR App Master processes for admin purposes.
  It will appear before the opts set by yarn.app.mapreduce.am.command-opts and
  thus its options can be overridden by the user.
  Usage of -Djava.library.path can cause programs to no longer function if
  hadoop native libraries are used. These values should instead be set as part
  of LD_LIBRARY_PATH in the map / reduce JVM env using the mapreduce.map.env and
  mapreduce.reduce.env config settings.
  </description>
</property>
<property>
  <name>yarn.app.mapreduce.am.job.task.listener.thread-count</name>
  <value>30</value>
  <description>The number of threads used to handle RPC calls in the
    MR AppMaster from remote tasks</description>
</property>
<property>
  <name>yarn.app.mapreduce.am.job.client.port-range</name>
  <value></value>
  <description>Range of ports that the MapReduce AM can use when binding.
    Leave blank if you want all possible ports.
    For example 50000-50050,50100-50200</description>
</property>
<property>
  <name>yarn.app.mapreduce.am.job.committer.cancel-timeout</name>
  <value>60000</value>
  <description>The amount of time in milliseconds to wait for the output
    committer to cancel an operation if the job is killed</description>
</property>
<property>
  <name>yarn.app.mapreduce.am.job.committer.commit-window</name>
  <value>10000</value>
  <description>Defines a time window in milliseconds for output commit
  operations.  If contact with the RM has occurred within this window then
  commits are allowed, otherwise the AM will not allow output commits until
  contact with the RM has been re-established.</description>
</property>
<property>
  <name>yarn.app.mapreduce.am.scheduler.heartbeat.interval-ms</name>
  <value>1000</value>
  <description>The interval in ms at which the MR AppMaster should send
    heartbeats to the ResourceManager</description>
</property>
<property>
  <name>yarn.app.mapreduce.client-am.ipc.max-retries</name>
  <value>1</value>
  <description>The number of client retries to the AM – before reconnecting
    to the RM to fetch Application Status.</description>
</property>
<property>
  <name>yarn.app.mapreduce.client.max-retries</name>
  <value>3</value>
  <description>The number of client retries to the RM/HS/AM before
    throwing exception. This is a layer above the ipc.</description>
</property>
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>1536</value>
  <description>The amount of memory the MR AppMaster needs.</description>
</property>
<property>
  <name>yarn.app.mapreduce.am.resource.cpu-vcores</name>
  <value>1</value>
  <description>
      The number of virtual CPU cores the MR AppMaster needs.
  </description>
</property>
<property>
  <description>CLASSPATH for MR applications. A comma-separated list
  of CLASSPATH entries</description>
   <name>mapreduce.application.classpath</name>
   <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
</property>
<property>
   <name>mapreduce.job.classloader</name>
   <value>false</value>
  <description>Whether to use a separate (isolated) classloader for
    user classes in the task JVM.</description>
</property>
<property>
   <name>mapreduce.job.classloader.system.classes</name>
   <value>java.,javax.,org.apache.commons.logging.,org.apache.log4j.,org.apache.hadoop.</value>
  <description>A comma-separated list of classes that should be loaded from the
    system classpath, not the user-supplied JARs, when mapreduce.job.classloader
    is enabled. Names ending in ‘.’ (period) are treated as package names,
    and names starting with a ‘-‘ are treated as negative matches.
  </description>
</property>
<!-- jobhistory properties -->
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>0.0.0.0:10020</value>
  <description>MapReduce JobHistory Server IPC host:port</description>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>0.0.0.0:19888</value>
  <description>MapReduce JobHistory Server Web UI host:port</description>
</property>
<property>
  <name>mapreduce.jobhistory.keytab</name>
  <description>
    Location of the kerberos keytab file for the MapReduce
    JobHistory Server.
  </description>
  <value>/etc/security/keytab/jhs.service.keytab</value>
</property>
<property>
  <name>mapreduce.jobhistory.principal</name>
  <description>
    Kerberos principal name for the MapReduce JobHistory Server.
  </description>
  <value>jhs/_HOST@REALM.TLD</value>
</property>
<property>
  <name>mapreduce.job.map.output.collector.class</name>
  <value>org.apache.hadoop.mapred.MapTask$MapOutputBuffer</value>
  <description>
    It defines the MapOutputCollector implementation to use.
  </description>
</property>
</configuration>
hdfs-default.xml
================
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at
       http://www.apache.org/licenses/LICENSE-2.0
   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
-->
<!-- Do not modify this file directly.  Instead, copy entries that you -->
<!-- wish to modify from this file into hdfs-site.xml and change them -->
<!-- there.  If hdfs-site.xml does not already exist, create it.      -->
<configuration>
<property>
  <name>hadoop.hdfs.configuration.version</name>
  <value>1</value>
  <description>version of this configuration file</description>
</property>
<property>
  <name>dfs.namenode.logging.level</name>
  <value>info</value>
  <description>
    The logging level for dfs namenode. Other values are “dir” (trace
    namespace mutations), “block” (trace block under/over replications
    and block creations/deletions), or “all”.
  </description>
</property>
<property>
  <name>dfs.namenode.rpc-address</name>
  <value></value>
  <description>
    RPC address that handles all client requests. In the case of HA/Federation where multiple namenodes exist,
    the name service id is added to the name e.g. dfs.namenode.rpc-address.ns1
    dfs.namenode.rpc-address.EXAMPLENAMESERVICE
    The value of this property will take the form of hdfs://nn-host1:rpc-port.
  </description>
</property>
<property>
  <name>dfs.namenode.servicerpc-address</name>
  <value></value>
  <description>
    RPC address for HDFS Services communication. BackupNode, Datanodes and all other services should be
    connecting to this address if it is configured. In the case of HA/Federation where multiple namenodes exist,
    the name service id is added to the name e.g. dfs.namenode.servicerpc-address.ns1
    dfs.namenode.rpc-address.EXAMPLENAMESERVICE
    The value of this property will take the form of hdfs://nn-host1:rpc-port.
    If the value of this property is unset the value of dfs.namenode.rpc-address will be used as the default.
  </description>
</property>
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>0.0.0.0:50090</value>
  <description>
    The secondary namenode http server address and port.
  </description>
</property>
<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:50010</value>
  <description>
    The datanode server address and port for data transfer.
  </description>
</property>
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:50075</value>
  <description>
    The datanode http server address and port.
  </description>
</property>
<property>
  <name>dfs.datanode.ipc.address</name>
  <value>0.0.0.0:50020</value>
  <description>
    The datanode ipc server address and port.
  </description>
</property>
<property>
  <name>dfs.datanode.handler.count</name>
  <value>10</value>
  <description>The number of server threads for the datanode.</description>
</property>
<property>
  <name>dfs.namenode.http-address</name>
  <value>0.0.0.0:50070</value>
  <description>
    The address and the base port where the dfs namenode web ui will listen on.
  </description>
</property>
<property>
  <name>dfs.https.enable</name>
  <value>false</value>
  <description>Decide if HTTPS(SSL) is supported on HDFS
  </description>
</property>
<property>
  <name>dfs.client.https.need-auth</name>
  <value>false</value>
  <description>Whether SSL client certificate authentication is required
  </description>
</property>
<property>
  <name>dfs.https.server.keystore.resource</name>
  <value>ssl-server.xml</value>
  <description>Resource file from which ssl server keystore
  information will be extracted
  </description>
</property>
<property>
  <name>dfs.client.https.keystore.resource</name>
  <value>ssl-client.xml</value>
  <description>Resource file from which ssl client keystore
  information will be extracted
  </description>
</property>
<property>
  <name>dfs.datanode.https.address</name>
  <value>0.0.0.0:50475</value>
  <description>The datanode secure http server address and port.</description>
</property>
<property>
  <name>dfs.namenode.https-address</name>
  <value>0.0.0.0:50470</value>
  <description>The namenode secure http server address and port.</description>
</property>
 <property>
  <name>dfs.datanode.dns.interface</name>
  <value>default</value>
  <description>The name of the Network Interface from which a data node should
  report its IP address.
  </description>
 </property>
<property>
  <name>dfs.datanode.dns.nameserver</name>
  <value>default</value>
  <description>The host name or IP address of the name server (DNS)
  which a DataNode should use to determine the host name used by the
  NameNode for communication and display purposes.
  </description>
 </property>
 <property>
  <name>dfs.namenode.backup.address</name>
  <value>0.0.0.0:50100</value>
  <description>
    The backup node server address and port.
    If the port is 0 then the server will start on a free port.
  </description>
</property>
 <property>
  <name>dfs.namenode.backup.http-address</name>
  <value>0.0.0.0:50105</value>
  <description>
    The backup node http server address and port.
    If the port is 0 then the server will start on a free port.
  </description>
</property>
<property>
  <name>dfs.namenode.replication.considerLoad</name>
  <value>true</value>
  <description>Decide if chooseTarget considers the target’s load or not
  </description>
</property>
<property>
  <name>dfs.default.chunk.view.size</name>
  <value>32768</value>
  <description>The number of bytes to view for a file on the browser.
  </description>
</property>
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>0</value>
  <description>Reserved space in bytes per volume. Always leave this much space free for non dfs use.
  </description>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file://${hadoop.tmp.dir}/dfs/name</value>
  <description>Determines where on the local filesystem the DFS name node
      should store the name table(fsimage).  If this is a comma-delimited list
      of directories then the name table is replicated in all of the
      directories, for redundancy. </description>
</property>
<property>
  <name>dfs.namenode.name.dir.restore</name>
  <value>false</value>
  <description>Set to true to enable NameNode to attempt recovering a
      previously failed dfs.namenode.name.dir. When enabled, a recovery of any
      failed directory is attempted during checkpoint.</description>
</property>
<property>
  <name>dfs.namenode.fs-limits.max-component-length</name>
  <value>0</value>
  <description>Defines the maximum number of characters in each component
      of a path.  A value of 0 will disable the check.</description>
</property>
<property>
  <name>dfs.namenode.fs-limits.max-directory-items</name>
  <value>0</value>
  <description>Defines the maximum number of items that a directory may
      contain.  A value of 0 will disable the check.</description>
</property>
<property>
  <name>dfs.namenode.edits.dir</name>
  <value>${dfs.namenode.name.dir}</value>
  <description>Determines where on the local filesystem the DFS name node
      should store the transaction (edits) file. If this is a comma-delimited list
      of directories then the transaction file is replicated in all of the
      directories, for redundancy. Default value is same as dfs.namenode.name.dir
  </description>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value></value>
  <description>A directory on shared storage between the multiple namenodes
  in an HA cluster. This directory will be written by the active and read
  by the standby in order to keep the namespaces synchronized. This directory
  does not need to be listed in dfs.namenode.edits.dir above. It should be
  left empty in a non-HA cluster.
  </description>
</property>
<property>
  <name>dfs.namenode.edits.journal-plugin.qjournal</name>
  <value>org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager</value>
</property>
<property>
  <name>dfs.permissions.enabled</name>
  <value>true</value>
  <description>
    If “true”, enable permission checking in HDFS.
    If “false”, permission checking is turned off,
    but all other behavior is unchanged.
    Switching from one parameter value to the other does not change the mode,
    owner or group of files or directories.
  </description>
</property>
<property>
  <name>dfs.permissions.superusergroup</name>
  <value>supergroup</value>
  <description>The name of the group of super-users.</description>
</property>
<!--
<property>
   <name>dfs.cluster.administrators</name>
   <value>ACL for the admins</value>
   <description>This configuration is used to control who can access the
                default servlets in the namenode, etc.
   </description>
</property>
-->
<property>
  <name>dfs.block.access.token.enable</name>
  <value>false</value>
  <description>
    If “true”, access tokens are used as capabilities for accessing datanodes.
    If “false”, no access tokens are checked on accessing datanodes.
  </description>
</property>
<property>
  <name>dfs.block.access.key.update.interval</name>
  <value>600</value>
  <description>
    Interval in minutes at which namenode updates its access keys.
  </description>
</property>
<property>
  <name>dfs.block.access.token.lifetime</name>
  <value>600</value>
  <description>The lifetime of access tokens in minutes.</description>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file://${hadoop.tmp.dir}/dfs/data</value>
  <description>Determines where on the local filesystem a DFS data node
  should store its blocks.  If this is a comma-delimited
  list of directories, then data will be stored in all named
  directories, typically on different devices.
  Directories that do not exist are ignored.
  </description>
</property>
<property>
  <name>dfs.datanode.data.dir.perm</name>
  <value>700</value>
  <description>Permissions for the directories on the local filesystem where
  the DFS data node stores its blocks. The permissions can either be octal or
  symbolic.</description>
</property>
<property>
  <name>dfs.replication</name>
  <value>3</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>
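Side note: with only 3 datanodes the default of 3 replicas means every block lives on every node, which eats space fast. If you can live with 2 copies, a sketch for hdfs-site.xml (only affects files written after the change) plus a command to re-replicate what is already there:
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
hdfs dfs -setrep -R 2 /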
<property>
  <name>dfs.replication.max</name>
  <value>512</value>
  <description>Maximal block replication.
  </description>
</property>
<property>
  <name>dfs.namenode.replication.min</name>
  <value>1</value>
  <description>Minimal block replication.
  </description>
</property>
<property>
  <name>dfs.blocksize</name>
  <value>67108864</value>
  <description>
      The default block size for new files, in bytes.
      You can use the following suffix (case insensitive):
      k(kilo), m(mega), g(giga), t(tera), p(peta), e(exa) to specify the size (such as 128k, 512m, 1g, etc.),
      Or provide complete size in bytes (such as 134217728 for 128 MB).
  </description>
</property>
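Quick aside: 67108864 bytes is 64 MB. If you mostly store a handful of big files you could bump it for new files in hdfs-site.xml, e.g. 128 MB (files already in HDFS keep whatever block size they were written with):
<property>
  <name>dfs.blocksize</name>
  <value>134217728</value>
</property>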
<property>
  <name>dfs.client.block.write.retries</name>
  <value>3</value>
  <description>The number of retries for writing blocks to the data nodes,
  before we signal failure to the application.
  </description>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
  <description>
    If there is a datanode/network failure in the write pipeline,
    DFSClient will try to remove the failed datanode from the pipeline
    and then continue writing with the remaining datanodes. As a result,
    the number of datanodes in the pipeline is decreased.  The feature is
    to add new datanodes to the pipeline.
    This is a site-wide property to enable/disable the feature.
    When the cluster size is extremely small, e.g. 3 nodes or less, cluster
    administrators may want to set the policy to NEVER in the default
    configuration file or disable this feature.  Otherwise, users may
    experience an unusually high rate of pipeline failures since it is
    impossible to find new datanodes for replacement.
    See also dfs.client.block.write.replace-datanode-on-failure.policy
  </description>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>DEFAULT</value>
  <description>
    This property is used only if the value of
    dfs.client.block.write.replace-datanode-on-failure.enable is true.
    ALWAYS: always add a new datanode when an existing datanode is removed.
    NEVER: never add a new datanode.
    DEFAULT:
      Let r be the replication number.
      Let n be the number of existing datanodes.
      Add a new datanode only if r is greater than or equal to 3 and either
      (1) floor(r/2) is greater than or equal to n; or
      (2) r is greater than n and the block is hflushed/appended.
  </description>
</property>
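Worth knowing: the description above basically spells out our situation, since with only 3 datanodes there is usually nowhere to grab a replacement node from, so pipeline recovery keeps failing. A sketch for hdfs-site.xml on a cluster this small:
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>NEVER</value>
</property>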
<property>
  <name>dfs.blockreport.intervalMsec</name>
  <value>21600000</value>
  <description>Determines block reporting interval in milliseconds.</description>
</property>
<property>
  <name>dfs.blockreport.initialDelay</name>  <value>0</value>
  <description>Delay for first block report in seconds.</description>
</property>
<property>
  <name>dfs.datanode.directoryscan.interval</name>
  <value>21600</value>
  <description>Interval in seconds for Datanode to scan data directories and
  reconcile the difference between blocks in memory and on the disk.
  </description>
</property>
<property>
  <name>dfs.datanode.directoryscan.threads</name>
  <value>1</value>
  <description>How many threads the threadpool used to compile reports
  for volumes in parallel should have.
  </description>
</property>
<property>
  <name>dfs.heartbeat.interval</name>
  <value>3</value>
  <description>Determines datanode heartbeat interval in seconds.</description>
</property>
<property>
  <name>dfs.namenode.handler.count</name>
  <value>10</value>
  <description>The number of server threads for the namenode.</description>
</property>
<property>
  <name>dfs.namenode.safemode.threshold-pct</name>
  <value>0.999f</value>
  <description>
    Specifies the percentage of blocks that should satisfy
    the minimal replication requirement defined by dfs.namenode.replication.min.
    Values less than or equal to 0 mean not to wait for any particular
    percentage of blocks before exiting safemode.
    Values greater than 1 will make safe mode permanent.
  </description>
</property>
<property>
  <name>dfs.namenode.safemode.min.datanodes</name>
  <value>0</value>
  <description>
    Specifies the number of datanodes that must be considered alive
    before the name node exits safemode.
    Values less than or equal to 0 mean not to take the number of live
    datanodes into account when deciding whether to remain in safe mode
    during startup.
    Values greater than the number of datanodes in the cluster
    will make safe mode permanent.
  </description>
</property>
<property>
  <name>dfs.namenode.safemode.extension</name>
  <value>30000</value>
  <description>
    Determines extension of safe mode in milliseconds
    after the threshold level is reached.
  </description>
</property>
<property>
  <name>dfs.datanode.balance.bandwidthPerSec</name>
  <value>1048576</value>
  <description>
        Specifies the maximum amount of bandwidth that each datanode
        can utilize for the balancing purpose in terms of
        the number of bytes per second.
  </description>
</property>
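Quick aside: 1048576 is only 1 MB/s, so the balancer crawls. You can raise it permanently in hdfs-site.xml, or push a temporary value to the live datanodes and then run the balancer (values are bytes per second, 10485760 = 10 MB/s, pick what your LAN can spare):
<property>
  <name>dfs.datanode.balance.bandwidthPerSec</name>
  <value>10485760</value>
</property>
hdfs dfsadmin -setBalancerBandwidth 10485760
hdfs balancer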
<property>
  <name>dfs.hosts</name>
  <value></value>
  <description>Names a file that contains a list of hosts that are
  permitted to connect to the namenode. The full pathname of the file
  must be specified.  If the value is empty, all hosts are
  permitted.</description>
</property>
<property>
  <name>dfs.hosts.exclude</name>
  <value></value>
  <description>Names a file that contains a list of hosts that are
  not permitted to connect to the namenode.  The full pathname of the
  file must be specified.  If the value is empty, no hosts are
  excluded.</description>
</property>
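Side note: this pair is how you retire a datanode cleanly instead of just yanking it. A sketch: point dfs.hosts.exclude at a file in hdfs-site.xml, add the hostname of the node you are decommissioning (say will312), then tell the namenode to re-read it. The file path below is just an example location, use wherever your hadoop conf dir actually is:
<property>
  <name>dfs.hosts.exclude</name>
  <value>/usr/local/hadoop/etc/hadoop/dfs.exclude</value>
</property>
echo will312 >> /usr/local/hadoop/etc/hadoop/dfs.exclude
hdfs dfsadmin -refreshNodes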
<property>
  <name>dfs.namenode.max.objects</name>
  <value>0</value>
  <description>The maximum number of files, directories and blocks
  dfs supports. A value of zero indicates no limit to the number
  of objects that dfs supports.
  </description>
</property>
<property>
  <name>dfs.namenode.decommission.interval</name>
  <value>30</value>
  <description>Namenode periodicity in seconds to check if decommission is
  complete.</description>
</property>
<property>
  <name>dfs.namenode.decommission.nodes.per.interval</name>
  <value>5</value>
  <description>The number of nodes namenode checks if decommission is complete
  in each dfs.namenode.decommission.interval.</description>
</property>
<property>
  <name>dfs.namenode.replication.interval</name>
  <value>3</value>
  <description>The periodicity in seconds with which the namenode computes
  replication work for datanodes. </description>
</property>
<property>
  <name>dfs.namenode.accesstime.precision</name>
  <value>3600000</value>
  <description>The access time for an HDFS file is precise up to this value.
               The default value is 1 hour. Setting a value of 0 disables
               access times for HDFS.
  </description>
</property>
<property>
  <name>dfs.datanode.plugins</name>
  <value></value>
  <description>Comma-separated list of datanode plug-ins to be activated.
  </description>
</property>
<property>
  <name>dfs.namenode.plugins</name>
  <value></value>
  <description>Comma-separated list of namenode plug-ins to be activated.
  </description>
</property>
<property>
  <name>dfs.stream-buffer-size</name>
  <value>4096</value>
  <description>The size of buffer to stream files.
  The size of this buffer should probably be a multiple of hardware
  page size (4096 on Intel x86), and it determines how much data is
  buffered during read and write operations.</description>
</property>
<property>
  <name>dfs.bytes-per-checksum</name>
  <value>512</value>
  <description>The number of bytes per checksum.  Must not be larger than
  dfs.stream-buffer-size</description>
</property>
<property>
  <name>dfs.client-write-packet-size</name>
  <value>65536</value>
  <description>Packet size for clients to write</description>
</property>
<property>
  <name>dfs.namenode.checkpoint.dir</name>
  <value>file://${hadoop.tmp.dir}/dfs/namesecondary</value>
  <description>Determines where on the local filesystem the DFS secondary
      name node should store the temporary images to merge.
      If this is a comma-delimited list of directories then the image is
      replicated in all of the directories for redundancy.
  </description>
</property>
<property>
  <name>dfs.namenode.checkpoint.edits.dir</name>
  <value>${dfs.namenode.checkpoint.dir}</value>
  <description>Determines where on the local filesystem the DFS secondary
      name node should store the temporary edits to merge.
      If this is a comma-delimited list of directories then the edits are
      replicated in all of the directories for redundancy.
      Default value is same as dfs.namenode.checkpoint.dir
  </description>
</property>
<property>
  <name>dfs.namenode.checkpoint.period</name>
  <value>3600</value>
  <description>The number of seconds between two periodic checkpoints.
  </description>
</property>
<property>
  <name>dfs.namenode.checkpoint.txns</name>
  <value>40000</value>
  <description>The Secondary NameNode or CheckpointNode will create a checkpoint
  of the namespace every 'dfs.namenode.checkpoint.txns' transactions, regardless
  of whether 'dfs.namenode.checkpoint.period' has expired.
  </description>
</property>
<property>
  <name>dfs.namenode.checkpoint.check.period</name>
  <value>60</value>
  <description>The SecondaryNameNode and CheckpointNode will poll the NameNode
  every 'dfs.namenode.checkpoint.check.period' seconds to query the number
  of uncheckpointed transactions.
  </description>
</property>
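<!--
  (my note) the checkpoint properties above are why the secondary namenode only merges
  fsimage and edits about once an hour (or every 40000 transactions). if you want a fresh
  fsimage right now, say before a backup or an upgrade, one sketch that works:

    hdfs dfsadmin -safemode enter
    hdfs dfsadmin -saveNamespace    # namenode writes a new fsimage while in safe mode
    hdfs dfsadmin -safemode leave
-->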
<property>
  <name>dfs.namenode.checkpoint.max-retries</name>
  <value>3</value>
  <description>The SecondaryNameNode retries failed checkpointing. If the
  failure occurs while loading fsimage or replaying edits, the number of
  retries is limited by this variable.
  </description>
</property>
<property>
  <name>dfs.namenode.num.checkpoints.retained</name>
  <value>2</value>
  <description>The number of image checkpoint files that will be retained by
  the NameNode and Secondary NameNode in their storage directories. All edit
  logs necessary to recover an up-to-date namespace from the oldest retained
  checkpoint will also be retained.
  </description>
</property>
<property>
  <name>dfs.namenode.num.extra.edits.retained</name>
  <value>1000000</value>
  <description>The number of extra transactions which should be retained
  beyond what is minimally necessary for a NN restart. This can be useful for
  audit purposes or for an HA setup where a remote Standby Node may have
  been offline for some time and need to have a longer backlog of retained
  edits in order to start again.
  Typically each edit is on the order of a few hundred bytes, so the default
  of 1 million edits should be on the order of hundreds of MBs or low GBs.
  NOTE: Fewer extra edits may be retained than value specified for this setting
  if doing so would mean that more segments would be retained than the number
  configured by dfs.namenode.max.extra.edits.segments.retained.
  </description>
</property>
<property>
  <name>dfs.namenode.max.extra.edits.segments.retained</name>
  <value>10000</value>
  <description>The maximum number of extra edit log segments which should be retained
  beyond what is minimally necessary for a NN restart. When used in conjunction with
  dfs.namenode.num.extra.edits.retained, this configuration property serves to cap
  the number of extra edits files to a reasonable value.
  </description>
</property>
<property>
  <name>dfs.namenode.delegation.key.update-interval</name>
  <value>86400000</value>
  <description>The update interval for master key for delegation tokens
       in the namenode in milliseconds.
  </description>
</property>
<property>
  <name>dfs.namenode.delegation.token.max-lifetime</name>
  <value>604800000</value>
  <description>The maximum lifetime in milliseconds for which a delegation
      token is valid.
  </description>
</property>
<property>
  <name>dfs.namenode.delegation.token.renew-interval</name>
  <value>86400000</value>
  <description>The renewal interval for delegation token in milliseconds.
  </description>
</property>
<property>
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <value>0</value>
  <description>The number of volumes that are allowed to
  fail before a datanode stops offering service. By default
  any volume failure will cause a datanode to shutdown.
  </description>
</property>
<property>
  <name>dfs.image.compress</name>
  <value>false</value>
  <description>Should the dfs image be compressed?
  </description>
</property>
<property>
  <name>dfs.image.compression.codec</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec</value>
  <description>If the dfs image is compressed, how should it be compressed?
               This has to be a codec defined in io.compression.codecs.
  </description>
</property>
<property>
  <name>dfs.image.transfer.bandwidthPerSec</name>
  <value>0</value>
  <description>
        Specifies the maximum amount of bandwidth that can be utilized for image
        transfer, in terms of the number of bytes per second.
        A default value of 0 indicates that throttling is disabled.
  </description>
</property>
<property>
  <name>dfs.namenode.support.allow.format</name>
  <value>true</value>
  <description>Does HDFS namenode allow itself to be formatted?
               You may consider setting this to false for any production
               cluster, to avoid any possibility of formatting a running DFS.
  </description>
</property>
<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <value>4096</value>
  <description>
        Specifies the maximum number of threads to use for transferring data
        in and out of the DN.
  </description>
</property>
<property>
  <name>dfs.datanode.readahead.bytes</name>
  <value>4193404</value>
  <description>
        While reading block files, if the Hadoop native libraries are available,
        the datanode can use the posix_fadvise system call to explicitly
        page data into the operating system buffer cache ahead of the current
        reader’s position. This can improve performance especially when
        disks are highly contended.
        This configuration specifies the number of bytes ahead of the current
        read position which the datanode will attempt to read ahead. This
        feature may be disabled by configuring this property to 0.
        If the native libraries are not available, this configuration has no
        effect.
  </description>
</property>
<property>
  <name>dfs.datanode.drop.cache.behind.reads</name>
  <value>false</value>
  <description>
        In some workloads, the data read from HDFS is known to be significantly
        large enough that it is unlikely to be useful to cache it in the
        operating system buffer cache. In this case, the DataNode may be
        configured to automatically purge all data from the buffer cache
        after it is delivered to the client. This behavior is automatically
        disabled for workloads which read only short sections of a block
        (e.g HBase random-IO workloads).
        This may improve performance for some workloads by freeing buffer
        cache space usage for more cacheable data.
        If the Hadoop native libraries are not available, this configuration
        has no effect.
  </description>
</property>
<property>
  <name>dfs.datanode.drop.cache.behind.writes</name>
  <value>false</value>
  <description>
        In some workloads, the data written to HDFS is known to be significantly
        large enough that it is unlikely to be useful to cache it in the
        operating system buffer cache. In this case, the DataNode may be
        configured to automatically purge all data from the buffer cache
        after it is written to disk.
        This may improve performance for some workloads by freeing buffer
        cache space usage for more cacheable data.
        If the Hadoop native libraries are not available, this configuration
        has no effect.
  </description>
</property>
<property>
  <name>dfs.datanode.sync.behind.writes</name>
  <value>false</value>
  <description>
        If this configuration is enabled, the datanode will instruct the
        operating system to enqueue all written data to the disk immediately
        after it is written. This differs from the usual OS policy which
        may wait for up to 30 seconds before triggering writeback.
        This may improve performance for some workloads by smoothing the
        IO profile for data written to disk.
        If the Hadoop native libraries are not available, this configuration
        has no effect.
  </description>
</property>
<property>
  <name>dfs.client.failover.max.attempts</name>
  <value>15</value>
  <description>
    Expert only. The number of client failover attempts that should be
    made before the failover is considered failed.
  </description>
</property>
<property>
  <name>dfs.client.failover.sleep.base.millis</name>
  <value>500</value>
  <description>
    Expert only. The time to wait, in milliseconds, between failover
    attempts increases exponentially as a function of the number of
    attempts made so far, with a random factor of +/- 50%. This option
    specifies the base value used in the failover calculation. The
    first failover will retry immediately. The 2nd failover attempt
    will delay at least dfs.client.failover.sleep.base.millis
    milliseconds. And so on.
  </description>
</property>
<property>
  <name>dfs.client.failover.sleep.max.millis</name>
  <value>15000</value>
  <description>
    Expert only. The time to wait, in milliseconds, between failover
    attempts increases exponentially as a function of the number of
    attempts made so far, with a random factor of +/- 50%. This option
    specifies the maximum value to wait between failovers.
    Specifically, the time between two failover attempts will not
    exceed +/- 50% of dfs.client.failover.sleep.max.millis
    milliseconds.
  </description>
</property>
<property>
  <name>dfs.client.failover.connection.retries</name>
  <value>0</value>
  <description>
    Expert only. Indicates the number of retries a failover IPC client
    will make to establish a server connection.
  </description>
</property>
<property>
  <name>dfs.client.failover.connection.retries.on.timeouts</name>
  <value>0</value>
  <description>
    Expert only. The number of retry attempts a failover IPC client
    will make on socket timeout when establishing a server connection.
  </description>
</property>
<property>
  <name>dfs.nameservices</name>
  <value></value>
  <description>
    Comma-separated list of nameservices.
  </description>
</property>
<property>
  <name>dfs.nameservice.id</name>
  <value></value>
  <description>
    The ID of this nameservice. If the nameservice ID is not
    configured or more than one nameservice is configured for
    dfs.nameservices it is determined automatically by
    matching the local node’s address with the configured address.
  </description>
</property>
<property>
  <name>dfs.ha.namenodes.EXAMPLENAMESERVICE</name>
  <value></value>
  <description>
    The prefix for a given nameservice, contains a comma-separated
    list of namenodes for a given nameservice (eg EXAMPLENAMESERVICE).
  </description>
</property>
<property>
  <name>dfs.ha.namenode.id</name>
  <value></value>
  <description>
    The ID of this namenode. If the namenode ID is not configured it
    is determined automatically by matching the local node’s address
    with the configured address.
  </description>
</property>
<property>
  <name>dfs.ha.log-roll.period</name>
  <value>120</value>
  <description>
    How often, in seconds, the StandbyNode should ask the active to
    roll edit logs. Since the StandbyNode only reads from finalized
    log segments, the StandbyNode will only be as up-to-date as how
    often the logs are rolled. Note that failover triggers a log roll
    so the StandbyNode will be up to date before it becomes active.
  </description>
</property>
<property>
  <name>dfs.ha.tail-edits.period</name>
  <value>60</value>
  <description>
    How often, in seconds, the StandbyNode should check for new
    finalized log segments in the shared edits log.
  </description>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>false</value>
  <description>
    Whether automatic failover is enabled. See the HDFS High
    Availability documentation for details on automatic HA
    configuration.
  </description>
</property>
<property>
  <name>dfs.support.append</name>
  <value>true</value>
  <description>
    Does HDFS allow appends to files?
  </description>
</property>
<property>
  <name>dfs.client.use.datanode.hostname</name>
  <value>false</value>
  <description>Whether clients should use datanode hostnames when
    connecting to datanodes.
  </description>
</property>
<property>
  <name>dfs.datanode.use.datanode.hostname</name>
  <value>false</value>
  <description>Whether datanodes should use datanode hostnames when
    connecting to other datanodes for data transfer.
  </description>
</property>
<property>
  <name>dfs.client.local.interfaces</name>
  <value></value>
  <description>A comma separated list of network interface names to use
    for data transfer between the client and datanodes. When creating
    a connection to read from or write to a datanode, the client
    chooses one of the specified interfaces at random and binds its
    socket to the IP of that interface. Individual names may be
    specified as either an interface name (eg "eth0"), a subinterface
    name (eg "eth0:0"), or an IP address (which may be specified using
    CIDR notation to match a range of IPs).
  </description>
</property>
<property>
  <name>dfs.namenode.kerberos.internal.spnego.principal</name>
  <value>${dfs.web.authentication.kerberos.principal}</value>
</property>
<property>
  <name>dfs.secondary.namenode.kerberos.internal.spnego.principal</name>
  <value>${dfs.web.authentication.kerberos.principal}</value>
</property>
<property>
  <name>dfs.namenode.avoid.read.stale.datanode</name>
  <value>false</value>
  <description>
    Indicate whether or not to avoid reading from &quot;stale&quot; datanodes whose
    heartbeat messages have not been received by the namenode
    for more than a specified time interval. Stale datanodes will be
    moved to the end of the node list returned for reading. See
    dfs.namenode.avoid.write.stale.datanode for a similar setting for writes.
  </description>
</property>
<property>
  <name>dfs.namenode.avoid.write.stale.datanode</name>
  <value>false</value>
  <description>
    Indicate whether or not to avoid writing to &quot;stale&quot; datanodes whose
    heartbeat messages have not been received by the namenode
    for more than a specified time interval. Writes will avoid using
    stale datanodes unless more than a configured ratio
    (dfs.namenode.write.stale.datanode.ratio) of datanodes are marked as
    stale. See dfs.namenode.avoid.read.stale.datanode for a similar setting
    for reads.
  </description>
</property>
<property>
  <name>dfs.namenode.stale.datanode.interval</name>
  <value>30000</value>
  <description>
    Default time interval for marking a datanode as "stale", i.e., if
    the namenode has not received a heartbeat message from a datanode for
    more than this time interval, the datanode will be marked and treated
    as "stale" by default. The stale interval cannot be too small since
    otherwise this may cause too frequent change of stale states.
    We thus set a minimum stale interval value (the default value is 3 times
    of heartbeat interval) and guarantee that the stale interval cannot be less
    than the minimum value.
  </description>
</property>
<property>
  <name>dfs.namenode.write.stale.datanode.ratio</name>
  <value>0.5f</value>
  <description>
    When the ratio of stale datanodes to total datanodes is greater than
    this value, stop avoiding writes to stale nodes, so as to prevent
    causing hotspots.
  </description>
</property>
<property>
  <name>dfs.namenode.invalidate.work.pct.per.iteration</name>
  <value>0.32f</value>
  <description>
    *Note*: Advanced property. Change with caution.
    This determines the percentage amount of block
    invalidations (deletes) to do over a single DN heartbeat
    deletion command. The final deletion count is determined by applying this
    percentage to the number of live nodes in the system.
    The resultant number is the number of blocks from the deletion list
    chosen for proper invalidation over a single heartbeat of a single DN.
    Value should be a positive, non-zero percentage in float notation (X.Yf),
    with 1.0f meaning 100%.
  </description>
</property>
<property>
  <name>dfs.namenode.replication.work.multiplier.per.iteration</name>
  <value>2</value>
  <description>
    *Note*: Advanced property. Change with caution.
    This determines the total amount of block transfers to begin in
    parallel at a DN, for replication, when such a command list is being
    sent over a DN heartbeat by the NN. The actual number is obtained by
    multiplying this multiplier with the total number of live nodes in the
    cluster. The result number is the number of blocks to begin transfers
    immediately for, per DN heartbeat. This number can be any positive,
    non-zero integer.
  </description>
</property>
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>false</value>
  <description>
    Enable WebHDFS (REST API) in Namenodes and Datanodes.
  </description>
</property>
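<!--
  (my note) if you turn dfs.webhdfs.enabled on in hdfs-site.xml you get a plain REST api
  on the namenode web port (50070 by default on this version). quick sketch, the host is
  my master and hduser is just whatever user you run hadoop as:

    curl -i "http://knas312:50070/webhdfs/v1/user/hduser?op=LISTSTATUS"
    curl -i "http://knas312:50070/webhdfs/v1/user/hduser/somefile.txt?op=OPEN"
-->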
<property>
  <name>hadoop.fuse.connection.timeout</name>
  <value>300</value>
  <description>
    The minimum number of seconds that we’ll cache libhdfs connection objects
    in fuse_dfs. Lower values will result in lower memory consumption; higher
    values may speed up access by avoiding the overhead of creating new
    connection objects.
  </description>
</property>
<property>
  <name>hadoop.fuse.timer.period</name>
  <value>5</value>
  <description>
    The number of seconds between cache expiry checks in fuse_dfs. Lower values
    will result in fuse_dfs noticing changes to Kerberos ticket caches more
    quickly.
  </description>
</property>
<property>
  <name>dfs.metrics.percentiles.intervals</name>
  <value></value>
  <description>
    Comma-delimited set of integers denoting the desired rollover intervals
    (in seconds) for percentile latency metrics on the Namenode and Datanode.
    By default, percentile latency metrics are disabled.
  </description>
</property>
<property>
  <name>dfs.encrypt.data.transfer</name>
  <value>false</value>
  <description>
    Whether or not actual block data that is read/written from/to HDFS should
    be encrypted on the wire. This only needs to be set on the NN and DNs,
    clients will deduce this automatically.
  </description>
</property>
<property>
  <name>dfs.encrypt.data.transfer.algorithm</name>
  <value></value>
  <description>
    This value may be set to either "3des" or "rc4". If nothing is set, then
    the configured JCE default on the system is used (usually 3DES.) It is
    widely believed that 3DES is more cryptographically secure, but RC4 is
    substantially faster.
  </description>
</property>
<property>
  <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
  <value>false</value>
  <description>
    Boolean which enables backend datanode-side support for the experimental DistributedFileSystem#getFileBlockStorageLocations API.
  </description>
</property>
<property>
  <name>dfs.client.file-block-storage-locations.num-threads</name>
  <value>10</value>
  <description>
    Number of threads used for making parallel RPCs in DistributedFileSystem#getFileBlockStorageLocations().
  </description>
</property>
<property>
  <name>dfs.client.file-block-storage-locations.timeout</name>
  <value>60</value>
  <description>
    Timeout (in seconds) for the parallel RPCs made in DistributedFileSystem#getFileBlockStorageLocations().
  </description>
</property>
<property>
  <name>dfs.journalnode.rpc-address</name>
  <value>0.0.0.0:8485</value>
  <description>
    The JournalNode RPC server address and port.
  </description>
</property>
<property>
  <name>dfs.journalnode.http-address</name>
  <value>0.0.0.0:8480</value>
  <description>
    The address and port the JournalNode web UI listens on.
    If the port is 0 then the server will start on a free port.
  </description>
</property>
<property>
  <name>dfs.namenode.audit.loggers</name>
  <value>default</value>
  <description>
    List of classes implementing audit loggers that will receive audit events.
    These should be implementations of org.apache.hadoop.hdfs.server.namenode.AuditLogger.
    The special value “default” can be used to reference the default audit
    logger, which uses the configured log system. Installing custom audit loggers
    may affect the performance and stability of the NameNode. Refer to the custom
    logger’s documentation for more details.
  </description>
</property>
</configuration>
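That is the whole hdfs-default.xml. One handy thing before moving on to core-default: you don't have to dig through this dump (or the xml files on disk) to see what value your cluster actually ended up with for a given key, the getconf tool prints the effective value. Just a sketch, any key from above should work on this version:
hdfs getconf -confKey dfs.replication
hdfs getconf -confKey dfs.namenode.checkpoint.period
hdfs getconf -namenodes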
core-default.xml
================
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at
       http://www.apache.org/licenses/LICENSE-2.0
   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
-->
<!-- Do not modify this file directly.  Instead, copy entries that you -->
<!-- wish to modify from this file into core-site.xml and change them -->
<!-- there.  If core-site.xml does not already exist, create it.      -->
<configuration>
<!-- global properties -->
<property>
  <name>hadoop.common.configuration.version</name>
  <value>0.23.0</value>
  <description>version of this configuration file</description>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/tmp/hadoop-${user.name}</value>
  <description>A base for other temporary directories.</description>
</property>
<property>
  <name>io.native.lib.available</name>
  <value>true</value>
  <description>Should native hadoop libraries, if present, be used.</description>
</property>
<property>
  <name>hadoop.http.filter.initializers</name>
  <value>org.apache.hadoop.http.lib.StaticUserWebFilter</value>
  <description>A comma separated list of class names. Each class in the list
  must extend org.apache.hadoop.http.FilterInitializer. The corresponding
  Filter will be initialized. Then, the Filter will be applied to all user
  facing jsp and servlet web pages.  The ordering of the list defines the
  ordering of the filters.</description>
</property>
<!-- security properties -->
<property>
  <name>hadoop.security.authorization</name>
  <value>false</value>
  <description>Is service-level authorization enabled?</description>
</property>
<property>
  <name>hadoop.security.instrumentation.requires.admin</name>
  <value>false</value>
  <description>
    Indicates if administrator ACLs are required to access
    instrumentation servlets (JMX, METRICS, CONF, STACKS).
  </description>
</property>
<property>
  <name>hadoop.security.authentication</name>
  <value>simple</value>
  <description>Possible values are simple (no authentication), and kerberos
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping</name>
  <value>org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback</value>
  <description>
    Class for user to group mapping (get groups for a given user) for ACL.
    The default implementation,
    org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback,
    will determine if the Java Native Interface (JNI) is available. If JNI is
    available the implementation will use the API within hadoop to resolve a
    list of groups for a user. If JNI is not available then the shell
    implementation, ShellBasedUnixGroupsMapping, is used.  This implementation
    shells out to the Linux/Unix environment with the
    <code>bash -c groups</code> command to resolve a list of groups for a user.
  </description>
</property>
<property>
  <name>hadoop.security.groups.cache.secs</name>
  <value>300</value>
  <description>
    This is the config controlling the validity of the entries in the cache
    containing the user->group mapping. When this duration has expired,
    then the implementation of the group mapping provider is invoked to get
    the groups of the user and then cached back.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.url</name>
  <value></value>
  <description>
    The URL of the LDAP server to use for resolving user groups when using
    the LdapGroupsMapping user to group mapping.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.ssl</name>
  <value>false</value>
  <description>
    Whether or not to use SSL when connecting to the LDAP server.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.ssl.keystore</name>
  <value></value>
  <description>
    File path to the SSL keystore that contains the SSL certificate required
    by the LDAP server.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.ssl.keystore.password.file</name>
  <value></value>
  <description>
    The path to a file containing the password of the LDAP SSL keystore.
    IMPORTANT: This file should be readable only by the Unix user running
    the daemons.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.bind.user</name>
  <value></value>
  <description>
    The distinguished name of the user to bind as when connecting to the LDAP
    server. This may be left blank if the LDAP server supports anonymous binds.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.bind.password.file</name>
  <value></value>
  <description>
    The path to a file containing the password of the bind user.
    IMPORTANT: This file should be readable only by the Unix user running
    the daemons.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.base</name>
  <value></value>
  <description>
    The search base for the LDAP connection. This is a distinguished name,
    and will typically be the root of the LDAP directory.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.search.filter.user</name>
  <value>(&amp;(objectClass=user)(sAMAccountName={0}))</value>
  <description>
    An additional filter to use when searching for LDAP users. The default will
    usually be appropriate for Active Directory installations. If connecting to
    an LDAP server with a non-AD schema, this should be replaced with
    (&amp;(objectClass=inetOrgPerson)(uid={0})). {0} is a special string used to
    denote where the username fits into the filter.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.search.filter.group</name>
  <value>(objectClass=group)</value>
  <description>
    An additional filter to use when searching for LDAP groups. This should be
    changed when resolving groups against a non-Active Directory installation.
    posixGroups are currently not a supported group class.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.search.attr.member</name>
  <value>member</value>
  <description>
    The attribute of the group object that identifies the users that are
    members of the group. The default will usually be appropriate for
    any LDAP installation.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.search.attr.group.name</name>
  <value>cn</value>
  <description>
    The attribute of the group object that identifies the group name. The
    default will usually be appropriate for all LDAP systems.
  </description>
</property>
<property>
  <name>hadoop.security.service.user.name.key</name>
  <value></value>
  <description>
    For those cases where the same RPC protocol is implemented by multiple
    servers, this configuration is required for specifying the principal
    name to use for the service when the client wishes to make an RPC call.
  </description>
</property>
<property>
    <name>hadoop.security.uid.cache.secs</name>
    <value>14400</value>
    <description>
        This is the config controlling the validity of the entries in the cache
        containing the userId to userName and groupId to groupName used by
        NativeIO getFstat().
    </description>
</property>
<property>
  <name>hadoop.rpc.protection</name>
  <value>authentication</value>
  <description>This field sets the quality of protection for secured sasl
      connections. Possible values are authentication, integrity and privacy.
      authentication means authentication only and no integrity or privacy;
      integrity implies authentication and integrity are enabled; and privacy
      implies all of authentication, integrity and privacy are enabled.
  </description>
</property>
<property>
  <name>hadoop.work.around.non.threadsafe.getpwuid</name>
  <value>false</value>
  <description>Some operating systems or authentication modules are known to
  have broken implementations of getpwuid_r and getpwgid_r, such that these
  calls are not thread-safe. Symptoms of this problem include JVM crashes
  with a stack trace inside these functions. If your system exhibits this
  issue, enable this configuration parameter to include a lock around the
  calls as a workaround.
  An incomplete list of some systems known to have this issue is available
  at http://wiki.apache.org/hadoop/KnownBrokenPwuidImplementations
  </description>
</property>
<property>
  <name>hadoop.kerberos.kinit.command</name>
  <value>kinit</value>
  <description>Used to periodically renew Kerberos credentials when provided
  to Hadoop. The default setting assumes that kinit is in the PATH of users
  running the Hadoop client. Change this to the absolute path to kinit if this
  is not the case.
  </description>
</property>
<property>
  <name>hadoop.security.auth_to_local</name>
  <value></value>
  <description>Maps kerberos principals to local user names</description>
</property>
<!-- i/o properties -->
<property>
  <name>io.file.buffer.size</name>
  <value>4096</value>
  <description>The size of buffer for use in sequence files.
  The size of this buffer should probably be a multiple of hardware
  page size (4096 on Intel x86), and it determines how much data is
  buffered during read and write operations.</description>
</property>
<property>
  <name>io.bytes.per.checksum</name>
  <value>512</value>
  <description>The number of bytes per checksum.  Must not be larger than
  io.file.buffer.size.</description>
</property>
<property>
  <name>io.skip.checksum.errors</name>
  <value>false</value>
  <description>If true, when a checksum error is encountered while
  reading a sequence file, entries are skipped, instead of throwing an
  exception.</description>
</property>
<property>
  <name>io.compression.codecs</name>
  <value></value>
  <description>A comma-separated list of the compression codec classes that can
  be used for compression/decompression. In addition to any classes specified
  with this property (which take precedence), codec classes on the classpath
  are discovered using a Java ServiceLoader.</description>
</property>
<property>
  <name>io.compression.codec.bzip2.library</name>
  <value>system-native</value>
  <description>The native-code library to be used for compression and
  decompression by the bzip2 codec.  This library could be specified
  either by name or the full pathname.  In the former case, the
  library is located by the dynamic linker, usually searching the
  directories specified in the environment variable LD_LIBRARY_PATH.
  The value of "system-native" indicates that the default system
  library should be used.  To indicate that the algorithm should
  operate entirely in Java, specify "java-builtin".</description>
</property>
<property>
  <name>io.serializations</name>
  <value>org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization</value>
  <description>A list of serialization classes that can be used for
  obtaining serializers and deserializers.</description>
</property>
<property>
  <name>io.seqfile.local.dir</name>
  <value>${hadoop.tmp.dir}/io/local</value>
  <description>The local directory where sequence file stores intermediate
  data files during merge.  May be a comma-separated list of
  directories on different devices in order to spread disk i/o.
  Directories that do not exist are ignored.
  </description>
</property>
<property>
  <name>io.map.index.skip</name>
  <value>0</value>
  <description>Number of index entries to skip between each entry.
  Zero by default. Setting this to values larger than zero can
  facilitate opening large MapFiles using less memory.</description>
</property>
<property>
  <name>io.map.index.interval</name>
  <value>128</value>
  <description>
    A MapFile consists of two files, a data file (tuples) and an index file
    (keys). For every io.map.index.interval records written in the
    data file, an entry (record-key, data-file-position) is written
    in the index file. This is to allow for doing binary search later
    within the index file to look up records by their keys and get their
    closest positions in the data file.
  </description>
</property>
<!-- file system properties -->
<property>
  <name>fs.defaultFS</name>
  <value>file:///</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri’s scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri’s authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>
<property>
  <name>fs.default.name</name>
  <value>file:///</value>
  <description>Deprecated. Use (fs.defaultFS) property
  instead</description>
</property>
<property>
  <name>fs.trash.interval</name>
  <value>0</value>
  <description>Number of minutes after which the checkpoint
  gets deleted.  If zero, the trash feature is disabled.
  This option may be configured both on the server and the
  client. If trash is disabled server side then the client
  side configuration is checked. If trash is enabled on the
  server side then the value configured on the server is
  used and the client configuration value is ignored.
  </description>
</property>
<property>
  <name>fs.trash.checkpoint.interval</name>
  <value>0</value>
  <description>Number of minutes between trash checkpoints.
  Should be smaller or equal to fs.trash.interval. If zero,
  the value is set to the value of fs.trash.interval.
  Every time the checkpointer runs it creates a new checkpoint
  out of current and removes checkpoints created more than
  fs.trash.interval minutes ago.
  </description>
</property>
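<!--
  (my note) with fs.trash.interval at 0 every "hadoop fs -rm" is gone for good, which is a
  bit scary on a home nas cluster. if you enable trash in core-site.xml (for example 1440
  minutes = one day) deletes land in a .Trash folder under your home dir instead. sketch of
  the day-to-day commands:

    hadoop fs -rm /user/hduser/oops.txt           # goes to .Trash when trash is enabled
    hadoop fs -rm -skipTrash /user/hduser/huge    # bypass trash for big stuff
    hadoop fs -expunge                            # checkpoint and clear out old trash now
-->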
<property>
  <name>fs.AbstractFileSystem.file.impl</name>
  <value>org.apache.hadoop.fs.local.LocalFs</value>
  <description>The AbstractFileSystem for file: uris.</description>
</property>
<property>
  <name>fs.AbstractFileSystem.hdfs.impl</name>
  <value>org.apache.hadoop.fs.Hdfs</value>
  <description>The FileSystem for hdfs: uris.</description>
</property>
<property>
  <name>fs.AbstractFileSystem.viewfs.impl</name>
  <value>org.apache.hadoop.fs.viewfs.ViewFs</value>
  <description>The AbstractFileSystem for view file system for viewfs: uris
  (ie client side mount table:).</description>
</property>
<property>
  <name>fs.ftp.host</name>
  <value>0.0.0.0</value>
  <description>FTP filesystem connects to this server</description>
</property>
<property>
  <name>fs.ftp.host.port</name>
  <value>21</value>
  <description>
    FTP filesystem connects to fs.ftp.host on this port
  </description>
</property>
<property>
  <name>fs.df.interval</name>
  <value>60000</value>
  <description>Disk usage statistics refresh interval in msec.</description>
</property>
<property>
  <name>fs.s3.block.size</name>
  <value>67108864</value>
  <description>Block size to use when writing files to S3.</description>
</property>
<property>
  <name>fs.s3.buffer.dir</name>
  <value>${hadoop.tmp.dir}/s3</value>
  <description>Determines where on the local filesystem the S3 filesystem
  should store files before sending them to S3
  (or after retrieving them from S3).
  </description>
</property>
<property>
  <name>fs.s3.maxRetries</name>
  <value>4</value>
  <description>The maximum number of retries for reading or writing files to S3,
  before we signal failure to the application.
  </description>
</property>
<property>
  <name>fs.s3.sleepTimeSeconds</name>
  <value>10</value>
  <description>The number of seconds to sleep between each S3 retry.
  </description>
</property>
<property>
  <name>fs.automatic.close</name>
  <value>true</value>
  <description>By default, FileSystem instances are automatically closed at program
  exit using a JVM shutdown hook. Setting this property to false disables this
  behavior. This is an advanced option that should only be used by server applications
  requiring a more carefully orchestrated shutdown sequence.
  </description>
</property>
<property>
  <name>fs.s3n.block.size</name>
  <value>67108864</value>
  <description>Block size to use when reading files using the native S3
  filesystem (s3n: URIs).</description>
</property>
<property>
  <name>io.seqfile.compress.blocksize</name>
  <value>1000000</value>
  <description>The minimum block size for compression in block compressed
          SequenceFiles.
  </description>
</property>
<property>
  <name>io.seqfile.lazydecompress</name>
  <value>true</value>
  <description>Should values of block-compressed SequenceFiles be decompressed
          only when necessary.
  </description>
</property>
<property>
  <name>io.seqfile.sorter.recordlimit</name>
  <value>1000000</value>
  <description>The limit on number of records to be kept in memory in a spill
          in SequenceFiles.Sorter
  </description>
</property>
 <property>
  <name>io.mapfile.bloom.size</name>
  <value>1048576</value>
  <description>The size of BloomFilter-s used in BloomMapFile. Each time this many
  keys is appended the next BloomFilter will be created (inside a DynamicBloomFilter).
  Larger values minimize the number of filters, which slightly increases the performance,
  but may waste too much space if the total number of keys is usually much smaller
  than this number.
  </description>
</property>
<property>
  <name>io.mapfile.bloom.error.rate</name>
  <value>0.005</value>
  <description>The rate of false positives in BloomFilter-s used in BloomMapFile.
  As this value decreases, the size of BloomFilter-s increases exponentially. This
  value is the probability of encountering false positives (default is 0.5%).
  </description>
</property>
<property>
  <name>hadoop.util.hash.type</name>
  <value>murmur</value>
  <description>The default implementation of Hash. Currently this can take one of the
  two values: 'murmur' to select MurmurHash and 'jenkins' to select JenkinsHash.
  </description>
</property>
<!-- ipc properties -->
<property>
  <name>ipc.client.idlethreshold</name>
  <value>4000</value>
  <description>Defines the threshold number of connections after which
               connections will be inspected for idleness.
  </description>
</property>
<property>
  <name>ipc.client.kill.max</name>
  <value>10</value>
  <description>Defines the maximum number of clients to disconnect in one go.
  </description>
</property>
<property>
  <name>ipc.client.connection.maxidletime</name>
  <value>10000</value>
  <description>The maximum time in msec after which a client will bring down the
               connection to the server.
  </description>
</property>
<property>
  <name>ipc.client.connect.max.retries</name>
  <value>10</value>
  <description>Indicates the number of retries a client will make to establish
               a server connection.
  </description>
</property>
<property>
  <name>ipc.client.connect.timeout</name>
  <value>20000</value>
  <description>Indicates the number of milliseconds a client will wait for the
               socket to establish a server connection.
  </description>
</property>
<property>
  <name>ipc.client.connect.max.retries.on.timeouts</name>
  <value>45</value>
  <description>Indicates the number of retries a client will make on socket timeout
               to establish a server connection.
  </description>
</property>
<property>
  <name>ipc.server.listen.queue.size</name>
  <value>128</value>
  <description>Indicates the length of the listen queue for servers accepting
               client connections.
  </description>
</property>
<property>
  <name>ipc.server.tcpnodelay</name>
  <value>false</value>
  <description>Turn on/off Nagle’s algorithm for the TCP socket connection on
  the server. Setting to true disables the algorithm and may decrease latency
  with a cost of more/smaller packets.
  </description>
</property>
<property>
  <name>ipc.client.tcpnodelay</name>
  <value>false</value>
  <description>Turn on/off Nagle’s algorithm for the TCP socket connection on
  the client. Setting to true disables the algorithm and may decrease latency
  with a cost of more/smaller packets.
  </description>
</property>
<!-- Proxy Configuration -->
<property>
  <name>hadoop.rpc.socket.factory.class.default</name>
  <value>org.apache.hadoop.net.StandardSocketFactory</value>
  <description> Default SocketFactory to use. This parameter is expected to be
    formatted as “package.FactoryClassName”.
  </description>
</property>
<property>
  <name>hadoop.rpc.socket.factory.class.ClientProtocol</name>
  <value></value>
  <description> SocketFactory to use to connect to a DFS. If null or empty, use
    hadoop.rpc.socket.factory.class.default. This socket factory is also used by
    DFSClient to create sockets to DataNodes.
  </description>
</property>
<property>
  <name>hadoop.socks.server</name>
  <value></value>
  <description> Address (host:port) of the SOCKS server to be used by the
    SocksSocketFactory.
  </description>
</property>
<!-- Rack Configuration -->
<property>
<name>net.topology.node.switch.mapping.impl</name>
  <value>org.apache.hadoop.net.ScriptBasedMapping</value>
  <description> The default implementation of the DNSToSwitchMapping. It
    invokes a script specified in net.topology.script.file.name to resolve
    node names. If the value for net.topology.script.file.name is not set, the
    default value of DEFAULT_RACK is returned for all node names.
  </description>
</property>
<property>
  <name>net.topology.script.file.name</name>
  <value></value>
  <description> The script name that should be invoked to resolve DNS names to
    NetworkTopology names. Example: the script would take host.foo.bar as an
    argument, and return /rack1 as the output.
  </description>
</property>
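<!--
  (my note) rack awareness does not matter much with 3 readynas boxes in one closet, but
  for completeness here is a minimal sketch of the kind of script this property points at.
  it just answers /default-rack for every host or IP it is handed (the name rack-map.sh is
  my own invention):

    #!/bin/bash
    # rack-map.sh: hadoop passes a batch of hostnames/IPs as arguments,
    # print one rack per argument, in order
    for node in "$@"; do
      echo "/default-rack"
    done
-->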
<property>
  <name>net.topology.script.number.args</name>
  <value>100</value>
  <description> The max number of args that the script configured with
    net.topology.script.file.name should be run with. Each arg is an
    IP address.
  </description>
</property>
<property>
  <name>net.topology.table.file.name</name>
  <value></value>
  <description> The file name for a topology file, which is used when the
    net.topology.node.switch.mapping.impl property is set to
    org.apache.hadoop.net.TableMapping. The file format is a two column text
    file, with columns separated by whitespace. The first column is a DNS or
    IP address and the second column specifies the rack where the address maps.
    If no entry corresponding to a host in the cluster is found, then
    /default-rack is assumed.
  </description>
</property>
<!-- Local file system -->
<property>
  <name>file.stream-buffer-size</name>
  <value>4096</value>
  <description>The size of buffer to stream files.
  The size of this buffer should probably be a multiple of hardware
  page size (4096 on Intel x86), and it determines how much data is
  buffered during read and write operations.</description>
</property>
<property>
  <name>file.bytes-per-checksum</name>
  <value>512</value>
  <description>The number of bytes per checksum.  Must not be larger than
  file.stream-buffer-size</description>
</property>
<property>
  <name>file.client-write-packet-size</name>
  <value>65536</value>
  <description>Packet size for clients to write</description>
</property>
<property>
  <name>file.blocksize</name>
  <value>67108864</value>
  <description>Block size</description>
</property>
<property>
  <name>file.replication</name>
  <value>1</value>
  <description>Replication factor</description>
</property>
<!-- s3 File System -->
<property>
  <name>s3.stream-buffer-size</name>
  <value>4096</value>
  <description>The size of buffer to stream files.
  The size of this buffer should probably be a multiple of hardware
  page size (4096 on Intel x86), and it determines how much data is
  buffered during read and write operations.</description>
</property>
<property>
  <name>s3.bytes-per-checksum</name>
  <value>512</value>
  <description>The number of bytes per checksum.  Must not be larger than
  s3.stream-buffer-size</description>
</property>
<property>
  <name>s3.client-write-packet-size</name>
  <value>65536</value>
  <description>Packet size for clients to write</description>
</property>
<property>
  <name>s3.blocksize</name>
  <value>67108864</value>
  <description>Block size</description>
</property>
<property>
  <name>s3.replication</name>
  <value>3</value>
  <description>Replication factor</description>
</property>
<!-- s3native File System -->
<property>
  <name>s3native.stream-buffer-size</name>
  <value>4096</value>
  <description>The size of buffer to stream files.
  The size of this buffer should probably be a multiple of hardware
  page size (4096 on Intel x86), and it determines how much data is
  buffered during read and write operations.</description>
</property>
<property>
  <name>s3native.bytes-per-checksum</name>
  <value>512</value>
  <description>The number of bytes per checksum.  Must not be larger than
  s3native.stream-buffer-size</description>
</property>
<property>
  <name>s3native.client-write-packet-size</name>
  <value>65536</value>
  <description>Packet size for clients to write</description>
</property>
<property>
  <name>s3native.blocksize</name>
  <value>67108864</value>
  <description>Block size</description>
</property>
<property>
  <name>s3native.replication</name>
  <value>3</value>
  <description>Replication factor</description>
</property>
<!-- Kosmos File System -->
<property>
  <name>kfs.stream-buffer-size</name>
  <value>4096</value>
  <description>The size of buffer to stream files.
  The size of this buffer should probably be a multiple of hardware
  page size (4096 on Intel x86), and it determines how much data is
  buffered during read and write operations.</description>
</property>
<property>
  <name>kfs.bytes-per-checksum</name>
  <value>512</value>
  <description>The number of bytes per checksum.  Must not be larger than
  kfs.stream-buffer-size</description>
</property>
<property>
  <name>kfs.client-write-packet-size</name>
  <value>65536</value>
  <description>Packet size for clients to write</description>
</property>
<property>
  <name>kfs.blocksize</name>
  <value>67108864</value>
  <description>Block size</description>
</property>
<property>
  <name>kfs.replication</name>
  <value>3</value>
  <description>Replication factor</description>
</property>
<!-- FTP file system -->
<property>
  <name>ftp.stream-buffer-size</name>
  <value>4096</value>
  <description>The size of buffer to stream files.
  The size of this buffer should probably be a multiple of hardware
  page size (4096 on Intel x86), and it determines how much data is
  buffered during read and write operations.</description>
</property>
<property>
  <name>ftp.bytes-per-checksum</name>
  <value>512</value>
  <description>The number of bytes per checksum.  Must not be larger than
  ftp.stream-buffer-size</description>
</property>
<property>
  <name>ftp.client-write-packet-size</name>
  <value>65536</value>
  <description>Packet size for clients to write</description>
</property>
<property>
  <name>ftp.blocksize</name>
  <value>67108864</value>
  <description>Block size</description>
</property>
<property>
  <name>ftp.replication</name>
  <value>3</value>
  <description>Replication factor</description>
</property>
<!-- Tfile -->
<property>
  <name>tfile.io.chunk.size</name>
  <value>1048576</value>
  <description>
    Value chunk size in bytes. Defaults to
    1MB. Values shorter than the chunk size are
    guaranteed to have a known value length at read time (See also
    TFile.Reader.Scanner.Entry.isValueLengthKnown()).
  </description>
</property>
<property>
  <name>tfile.fs.output.buffer.size</name>
  <value>262144</value>
  <description>
    Buffer size used for FSDataOutputStream in bytes.
  </description>
</property>
<property>
  <name>tfile.fs.input.buffer.size</name>
  <value>262144</value>
  <description>
    Buffer size used for FSDataInputStream in bytes.
  </description>
</property>
<!-- HTTP web-consoles Authentication -->
<property>
  <name>hadoop.http.authentication.type</name>
  <value>simple</value>
  <description>
    Defines the authentication used for the Hadoop HTTP web-consoles.
    Supported values are: simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME#
  </description>
</property>
<property>
  <name>hadoop.http.authentication.token.validity</name>
  <value>36000</value>
  <description>
    Indicates how long (in seconds) an authentication token is valid before it has
    to be renewed.
  </description>
</property>
<property>
  <name>hadoop.http.authentication.signature.secret.file</name>
  <value>${user.home}/hadoop-http-auth-signature-secret</value>
  <description>
    The signature secret for signing the authentication tokens.
    The same secret should be used for JT/NN/DN/TT configurations.
  </description>
</property>
<property>
  <name>hadoop.http.authentication.cookie.domain</name>
  <value></value>
  <description>
    The domain to use for the HTTP cookie that stores the authentication token.
    In order for authentication to work correctly across the web-consoles of all
    Hadoop nodes, the domain must be correctly set.
    IMPORTANT: when using IP addresses, browsers ignore cookies with domain settings.
    For this setting to work properly all nodes in the cluster must be configured
    to generate URLs with hostname.domain names on it.
  </description>
</property>
<property>
  <name>hadoop.http.authentication.simple.anonymous.allowed</name>
  <value>true</value>
  <description>
    Indicates if anonymous requests are allowed when using ‘simple’ authentication.
  </description>
</property>
<property>
  <name>hadoop.http.authentication.kerberos.principal</name>
  <value>HTTP/_HOST@LOCALHOST</value>
  <description>
    Indicates the Kerberos principal to be used for HTTP endpoint.
    The principal MUST start with ‘HTTP/’ as per Kerberos HTTP SPNEGO specification.
  </description>
</property>
<property>
  <name>hadoop.http.authentication.kerberos.keytab</name>
  <value>${user.home}/hadoop.keytab</value>
  <description>
    Location of the keytab file with the credentials for the principal.
    Referring to the same keytab file Oozie uses for its Kerberos credentials for Hadoop.
  </description>
</property>
<property>
  <name>dfs.ha.fencing.methods</name>
  <value></value>
  <description>
    List of fencing methods to use for service fencing. May contain
    builtin methods (eg shell and sshfence) or user-defined method.
  </description>
</property>
<property>
  <name>dfs.ha.fencing.ssh.connect-timeout</name>
  <value>30000</value>
  <description>
    SSH connection timeout, in milliseconds, to use with the builtin
    sshfence fencer.
  </description>
</property>
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value></value>
  <description>
    The SSH private key files to use with the builtin sshfence fencer.
  </description>
</property>
<!-- Static Web User Filter properties. -->
<property>
  <description>
    The user name to filter as, on static web filters
    while rendering content. An example use is the HDFS
    web UI (user to be used for browsing files).
  </description>
  <name>hadoop.http.staticuser.user</name>
  <value>dr.who</value>
</property>
<property>
  <name>ha.zookeeper.quorum</name>
  <description>
    A list of ZooKeeper server addresses, separated by commas, that are
    to be used by the ZKFailoverController in automatic failover.
  </description>
</property>
<property>
  <name>ha.zookeeper.session-timeout.ms</name>
  <value>5000</value>
  <description>
    The session timeout to use when the ZKFC connects to ZooKeeper.
    Setting this value to a lower value implies that server crashes
    will be detected more quickly, but risks triggering failover too
    aggressively in the case of a transient error or network blip.
  </description>
</property>
<property>
  <name>ha.zookeeper.parent-znode</name>
  <value>/hadoop-ha</value>
  <description>
    The ZooKeeper znode under which the ZK failover controller stores
    its information. Note that the nameservice ID is automatically
    appended to this znode, so it is not normally necessary to
    configure this, even in a federated environment.
  </description>
</property>
<property>
  <name>ha.zookeeper.acl</name>
  <value>world:anyone:rwcda</value>
  <description>
    A comma-separated list of ZooKeeper ACLs to apply to the znodes
    used by automatic failover. These ACLs are specified in the same
    format as used by the ZooKeeper CLI.
    If the ACL itself contains secrets, you may instead specify a
    path to a file, prefixed with the ‘@’ symbol, and the value of
    this configuration will be loaded from within.
  </description>
</property>
<property>
  <name>ha.zookeeper.auth</name>
  <value></value>
  <description>
    A comma-separated list of ZooKeeper authentications to add when
    connecting to ZooKeeper. These are specified in the same format
    as used by the &quot;addauth&quot; command in the ZK CLI. It is
    important that the authentications specified here are sufficient
    to access znodes with the ACL specified in ha.zookeeper.acl.
    If the auths contain secrets, you may instead specify a
    path to a file, prefixed with the ‘@’ symbol, and the value of
    this configuration will be loaded from within.
  </description>
</property>
<!-- SSLFactory configuration -->
<property>
  <name>hadoop.ssl.keystores.factory.class</name>
  <value>org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory</value>
  <description>
    The keystores factory to use for retrieving certificates.
  </description>
</property>
<property>
  <name>hadoop.ssl.require.client.cert</name>
  <value>false</value>
  <description>Whether client certificates are required</description>
</property>
<property>
  <name>hadoop.ssl.hostname.verifier</name>
  <value>DEFAULT</value>
  <description>
    The hostname verifier to provide for HttpsURLConnections.
    Valid values are: DEFAULT, STRICT, STRICT_IE6, DEFAULT_AND_LOCALHOST and
    ALLOW_ALL
  </description>
</property>
<property>
  <name>hadoop.ssl.server.conf</name>
  <value>ssl-server.xml</value>
  <description>
    Resource file from which ssl server keystore information will be extracted.
    This file is looked up in the classpath, typically it should be in Hadoop
    conf/ directory.
  </description>
</property>
<property>
  <name>hadoop.ssl.client.conf</name>
  <value>ssl-client.xml</value>
  <description>
    Resource file from which ssl client keystore information will be extracted
    This file is looked up in the classpath, typically it should be in Hadoop
    conf/ directory.
  </description>
</property>
<property>
  <name>hadoop.ssl.enabled</name>
  <value>false</value>
  <description>
    Whether to use SSL for the HTTP endpoints. If set to true, the
    NameNode, DataNode, ResourceManager, NodeManager, HistoryServer and
    MapReduceAppMaster web UIs will be served over HTTPS instead of HTTP.
  </description>
</property>
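Another aside (again me, not core-default.xml): flipping the web UIs over to HTTPS with the SSL properties above would look roughly like the snippet below in core-site.xml. I dont run SSL on this cluster, and the keystore/truststore details would still have to go into conf/ssl-server.xml and conf/ssl-client.xml as described above:
<!-- example only, not a default -->
<property>
  <name>hadoop.ssl.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hadoop.ssl.require.client.cert</name>
  <value>false</value>
</property>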
<property>
  <name>hadoop.jetty.logs.serve.aliases</name>
  <value>true</value>
  <description>
    Enable/disable the serving of aliases from Jetty.
  </description>
</property>
<property>
  <name>fs.permissions.umask-mode</name>
  <value>022</value>
  <description>
    The umask used when creating files and directories.
    Can be in octal or in symbolic. Examples are:
    "022" (octal for u=rwx,g=r-x,o=r-x in symbolic),
    or "u=rwx,g=rwx,o=" (symbolic for 007 in octal).
  </description>
</property>
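Just to show the two accepted forms side by side (I keep the default 022 on my cluster), the octal and symbolic values below mean the same umask, per the description above:
<!-- example: the same setting written two ways -->
<property>
  <name>fs.permissions.umask-mode</name>
  <value>022</value>
</property>
<!-- symbolic equivalent of 022:
       <value>u=rwx,g=r-x,o=r-x</value>
-->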
<!-- ha properties -->
<property>
  <name>ha.health-monitor.connect-retry-interval.ms</name>
  <value>1000</value>
  <description>
    How often to retry connecting to the service.
  </description>
</property>
<property>
  <name>ha.health-monitor.check-interval.ms</name>
  <value>1000</value>
  <description>
    How often to check the service.
  </description>
</property>
<property>
  <name>ha.health-monitor.sleep-after-disconnect.ms</name>
  <value>1000</value>
  <description>
    How long to sleep after an unexpected RPC error.
  </description>
</property>
<property>
  <name>ha.health-monitor.rpc-timeout.ms</name>
  <value>45000</value>
  <description>
    Timeout for the actual monitorHealth() calls.
  </description>
</property>
<property>
  <name>ha.failover-controller.new-active.rpc-timeout.ms</name>
  <value>60000</value>
  <description>
    Timeout that the FC waits for the new active to become active
  </description>
</property>
<property>
  <name>ha.failover-controller.graceful-fence.rpc-timeout.ms</name>
  <value>5000</value>
  <description>
    Timeout that the FC waits for the old active to go to standby
  </description>
</property>
<property>
  <name>ha.failover-controller.graceful-fence.connection.retries</name>
  <value>1</value>
  <description>
    FC connection retries for graceful fencing
  </description>
</property>
<property>
  <name>ha.failover-controller.cli-check.rpc-timeout.ms</name>
  <value>20000</value>
  <description>
    Timeout that the CLI (manual) FC waits for monitorHealth, getServiceState
  </description>
</property>
</configuration>
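One more aside from me before the deprecated list: if you wanted a dead node detected faster (at the price of more false alarms, exactly as the session-timeout description above warns), the two knobs you would lower together are shown below. The numbers are only there to illustrate the idea, not a recommendation, and I leave the defaults alone on my cluster:
<property>
  <name>ha.health-monitor.check-interval.ms</name>
  <value>500</value>   <!-- default is 1000 -->
</property>
<property>
  <name>ha.zookeeper.session-timeout.ms</name>
  <value>3000</value>  <!-- default is 5000 -->
</property>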
DEPRECATED VALUES AS OF JULY 16th 2013
======================================
Deprecated Properties
The following table lists the configuration property names that are deprecated in this version of Hadoop, and their replacements.
Deprecated property name New property name
create.empty.dir.if.nonexist mapreduce.jobcontrol.createdir.ifnotexist
dfs.access.time.precision dfs.namenode.accesstime.precision
dfs.backup.address dfs.namenode.backup.address
dfs.backup.http.address dfs.namenode.backup.http-address
dfs.balance.bandwidthPerSec dfs.datanode.balance.bandwidthPerSec
dfs.block.size dfs.blocksize
dfs.data.dir dfs.datanode.data.dir
dfs.datanode.max.xcievers dfs.datanode.max.transfer.threads
dfs.df.interval fs.df.interval
dfs.federation.nameservice.id dfs.nameservice.id
dfs.federation.nameservices dfs.nameservices
dfs.http.address dfs.namenode.http-address
dfs.https.address dfs.namenode.https-address
dfs.https.client.keystore.resource dfs.client.https.keystore.resource
dfs.https.need.client.auth dfs.client.https.need-auth
dfs.max.objects dfs.namenode.max.objects
dfs.max-repl-streams dfs.namenode.replication.max-streams
dfs.name.dir dfs.namenode.name.dir
dfs.name.dir.restore dfs.namenode.name.dir.restore
dfs.name.edits.dir dfs.namenode.edits.dir
dfs.permissions dfs.permissions.enabled
dfs.permissions.supergroup dfs.permissions.superusergroup
dfs.read.prefetch.size dfs.client.read.prefetch.size
dfs.replication.considerLoad dfs.namenode.replication.considerLoad
dfs.replication.interval dfs.namenode.replication.interval
dfs.replication.min dfs.namenode.replication.min
dfs.replication.pending.timeout.sec dfs.namenode.replication.pending.timeout-sec
dfs.safemode.extension dfs.namenode.safemode.extension
dfs.safemode.threshold.pct dfs.namenode.safemode.threshold-pct
dfs.secondary.http.address dfs.namenode.secondary.http-address
dfs.socket.timeout dfs.client.socket-timeout
dfs.umaskmode fs.permissions.umask-mode
dfs.write.packet.size dfs.client-write-packet-size
fs.checkpoint.dir dfs.namenode.checkpoint.dir
fs.checkpoint.edits.dir dfs.namenode.checkpoint.edits.dir
fs.checkpoint.period dfs.namenode.checkpoint.period
fs.default.name fs.defaultFS
hadoop.configured.node.mapping net.topology.configured.node.mapping
hadoop.job.history.location mapreduce.jobtracker.jobhistory.location
hadoop.native.lib io.native.lib.available
hadoop.net.static.resolutions mapreduce.tasktracker.net.static.resolutions
hadoop.pipes.command-file.keep mapreduce.pipes.commandfile.preserve
hadoop.pipes.executable.interpretor mapreduce.pipes.executable.interpretor
hadoop.pipes.executable mapreduce.pipes.executable
hadoop.pipes.java.mapper mapreduce.pipes.isjavamapper
hadoop.pipes.java.recordreader mapreduce.pipes.isjavarecordreader
hadoop.pipes.java.recordwriter mapreduce.pipes.isjavarecordwriter
hadoop.pipes.java.reducer mapreduce.pipes.isjavareducer
hadoop.pipes.partitioner mapreduce.pipes.partitioner
heartbeat.recheck.interval dfs.namenode.heartbeat.recheck-interval
io.bytes.per.checksum dfs.bytes-per-checksum
io.sort.factor mapreduce.task.io.sort.factor
io.sort.mb mapreduce.task.io.sort.mb
io.sort.spill.percent mapreduce.map.sort.spill.percent
jobclient.completion.poll.interval mapreduce.client.completion.pollinterval
jobclient.output.filter mapreduce.client.output.filter
jobclient.progress.monitor.poll.interval mapreduce.client.progressmonitor.pollinterval
job.end.notification.url mapreduce.job.end-notification.url
job.end.retry.attempts mapreduce.job.end-notification.retry.attempts
job.end.retry.interval mapreduce.job.end-notification.retry.interval
job.local.dir mapreduce.job.local.dir
keep.failed.task.files mapreduce.task.files.preserve.failedtasks
keep.task.files.pattern mapreduce.task.files.preserve.filepattern
key.value.separator.in.input.line mapreduce.input.keyvaluelinerecordreader.key.value.separator
local.cache.size mapreduce.tasktracker.cache.local.size
map.input.file mapreduce.map.input.file
map.input.length mapreduce.map.input.length
map.input.start mapreduce.map.input.start
map.output.key.field.separator mapreduce.map.output.key.field.separator
map.output.key.value.fields.spec mapreduce.fieldsel.map.output.key.value.fields.spec
mapred.acls.enabled mapreduce.cluster.acls.enabled
mapred.binary.partitioner.left.offset mapreduce.partition.binarypartitioner.left.offset
mapred.binary.partitioner.right.offset mapreduce.partition.binarypartitioner.right.offset
mapred.cache.archives mapreduce.job.cache.archives
mapred.cache.archives.timestamps mapreduce.job.cache.archives.timestamps
mapred.cache.files mapreduce.job.cache.files
mapred.cache.files.timestamps mapreduce.job.cache.files.timestamps
mapred.cache.localArchives mapreduce.job.cache.local.archives
mapred.cache.localFiles mapreduce.job.cache.local.files
mapred.child.tmp mapreduce.task.tmp.dir
mapred.cluster.average.blacklist.threshold mapreduce.jobtracker.blacklist.average.threshold
mapred.cluster.map.memory.mb mapreduce.cluster.mapmemory.mb
mapred.cluster.max.map.memory.mb mapreduce.jobtracker.maxmapmemory.mb
mapred.cluster.max.reduce.memory.mb mapreduce.jobtracker.maxreducememory.mb
mapred.cluster.reduce.memory.mb mapreduce.cluster.reducememory.mb
mapred.committer.job.setup.cleanup.needed mapreduce.job.committer.setup.cleanup.needed
mapred.compress.map.output mapreduce.map.output.compress
mapred.data.field.separator mapreduce.fieldsel.data.field.separator
mapred.debug.out.lines mapreduce.task.debugout.lines
mapred.healthChecker.interval mapreduce.tasktracker.healthchecker.interval
mapred.healthChecker.script.args mapreduce.tasktracker.healthchecker.script.args
mapred.healthChecker.script.path mapreduce.tasktracker.healthchecker.script.path
mapred.healthChecker.script.timeout mapreduce.tasktracker.healthchecker.script.timeout
mapred.heartbeats.in.second mapreduce.jobtracker.heartbeats.in.second
mapred.hosts.exclude mapreduce.jobtracker.hosts.exclude.filename
mapred.hosts mapreduce.jobtracker.hosts.filename
mapred.inmem.merge.threshold mapreduce.reduce.merge.inmem.threshold
mapred.input.dir.formats mapreduce.input.multipleinputs.dir.formats
mapred.input.dir.mappers mapreduce.input.multipleinputs.dir.mappers
mapred.input.dir mapreduce.input.fileinputformat.inputdir
mapred.input.pathFilter.class mapreduce.input.pathFilter.class
mapred.jar mapreduce.job.jar
mapred.job.classpath.archives mapreduce.job.classpath.archives
mapred.job.classpath.files mapreduce.job.classpath.files
mapred.job.id mapreduce.job.id
mapred.jobinit.threads mapreduce.jobtracker.jobinit.threads
mapred.job.map.memory.mb mapreduce.map.memory.mb
mapred.job.name mapreduce.job.name
mapred.job.priority mapreduce.job.priority
mapred.job.queue.name mapreduce.job.queuename
mapred.job.reduce.input.buffer.percent mapreduce.reduce.input.buffer.percent
mapred.job.reduce.markreset.buffer.percent mapreduce.reduce.markreset.buffer.percent
mapred.job.reduce.memory.mb mapreduce.reduce.memory.mb
mapred.job.reduce.total.mem.bytes mapreduce.reduce.memory.totalbytes
mapred.job.reuse.jvm.num.tasks mapreduce.job.jvm.numtasks
mapred.job.shuffle.input.buffer.percent mapreduce.reduce.shuffle.input.buffer.percent
mapred.job.shuffle.merge.percent mapreduce.reduce.shuffle.merge.percent
mapred.job.tracker.handler.count mapreduce.jobtracker.handler.count
mapred.job.tracker.history.completed.location mapreduce.jobtracker.jobhistory.completed.location
mapred.job.tracker.http.address mapreduce.jobtracker.http.address
mapred.jobtracker.instrumentation mapreduce.jobtracker.instrumentation
mapred.jobtracker.job.history.block.size mapreduce.jobtracker.jobhistory.block.size
mapred.job.tracker.jobhistory.lru.cache.size mapreduce.jobtracker.jobhistory.lru.cache.size
mapred.job.tracker mapreduce.jobtracker.address
mapred.jobtracker.maxtasks.per.job mapreduce.jobtracker.maxtasks.perjob
mapred.job.tracker.persist.jobstatus.active mapreduce.jobtracker.persist.jobstatus.active
mapred.job.tracker.persist.jobstatus.dir mapreduce.jobtracker.persist.jobstatus.dir
mapred.job.tracker.persist.jobstatus.hours mapreduce.jobtracker.persist.jobstatus.hours
mapred.jobtracker.restart.recover mapreduce.jobtracker.restart.recover
mapred.job.tracker.retiredjobs.cache.size mapreduce.jobtracker.retiredjobs.cache.size
mapred.job.tracker.retire.jobs mapreduce.jobtracker.retirejobs
mapred.jobtracker.taskalloc.capacitypad mapreduce.jobtracker.taskscheduler.taskalloc.capacitypad
mapred.jobtracker.taskScheduler mapreduce.jobtracker.taskscheduler
mapred.jobtracker.taskScheduler.maxRunningTasksPerJob mapreduce.jobtracker.taskscheduler.maxrunningtasks.perjob
mapred.join.expr mapreduce.join.expr
mapred.join.keycomparator mapreduce.join.keycomparator
mapred.lazy.output.format mapreduce.output.lazyoutputformat.outputformat
mapred.line.input.format.linespermap mapreduce.input.lineinputformat.linespermap
mapred.linerecordreader.maxlength mapreduce.input.linerecordreader.line.maxlength
mapred.local.dir mapreduce.cluster.local.dir
mapred.local.dir.minspacekill mapreduce.tasktracker.local.dir.minspacekill
mapred.local.dir.minspacestart mapreduce.tasktracker.local.dir.minspacestart
mapred.map.child.env mapreduce.map.env
mapred.map.child.java.opts mapreduce.map.java.opts
mapred.map.child.log.level mapreduce.map.log.level
mapred.map.max.attempts mapreduce.map.maxattempts
mapred.map.output.compression.codec mapreduce.map.output.compress.codec
mapred.mapoutput.key.class mapreduce.map.output.key.class
mapred.mapoutput.value.class mapreduce.map.output.value.class
mapred.mapper.regex.group mapreduce.mapper.regexmapper..group
mapred.mapper.regex mapreduce.mapper.regex
mapred.map.task.debug.script mapreduce.map.debug.script
mapred.map.tasks mapreduce.job.maps
mapred.map.tasks.speculative.execution mapreduce.map.speculative
mapred.max.map.failures.percent mapreduce.map.failures.maxpercent
mapred.max.reduce.failures.percent mapreduce.reduce.failures.maxpercent
mapred.max.split.size mapreduce.input.fileinputformat.split.maxsize
mapred.max.tracker.blacklists mapreduce.jobtracker.tasktracker.maxblacklists
mapred.max.tracker.failures mapreduce.job.maxtaskfailures.per.tracker
mapred.merge.recordsBeforeProgress mapreduce.task.merge.progress.records
mapred.min.split.size mapreduce.input.fileinputformat.split.minsize
mapred.min.split.size.per.node mapreduce.input.fileinputformat.split.minsize.per.node
mapred.min.split.size.per.rack mapreduce.input.fileinputformat.split.minsize.per.rack
mapred.output.compression.codec mapreduce.output.fileoutputformat.compress.codec
mapred.output.compression.type mapreduce.output.fileoutputformat.compress.type
mapred.output.compress mapreduce.output.fileoutputformat.compress
mapred.output.dir mapreduce.output.fileoutputformat.outputdir
mapred.output.key.class mapreduce.job.output.key.class
mapred.output.key.comparator.class mapreduce.job.output.key.comparator.class
mapred.output.value.class mapreduce.job.output.value.class
mapred.output.value.groupfn.class mapreduce.job.output.group.comparator.class
mapred.permissions.supergroup mapreduce.cluster.permissions.supergroup
mapred.pipes.user.inputformat mapreduce.pipes.inputformat
mapred.reduce.child.env mapreduce.reduce.env
mapred.reduce.child.java.opts mapreduce.reduce.java.opts
mapred.reduce.child.log.level mapreduce.reduce.log.level
mapred.reduce.max.attempts mapreduce.reduce.maxattempts
mapred.reduce.parallel.copies mapreduce.reduce.shuffle.parallelcopies
mapred.reduce.slowstart.completed.maps mapreduce.job.reduce.slowstart.completedmaps
mapred.reduce.task.debug.script mapreduce.reduce.debug.script
mapred.reduce.tasks mapreduce.job.reduces
mapred.reduce.tasks.speculative.execution mapreduce.reduce.speculative
mapred.seqbinary.output.key.class mapreduce.output.seqbinaryoutputformat.key.class
mapred.seqbinary.output.value.class mapreduce.output.seqbinaryoutputformat.value.class
mapred.shuffle.connect.timeout mapreduce.reduce.shuffle.connect.timeout
mapred.shuffle.read.timeout mapreduce.reduce.shuffle.read.timeout
mapred.skip.attempts.to.start.skipping mapreduce.task.skip.start.attempts
mapred.skip.map.auto.incr.proc.count mapreduce.map.skip.proc-count.auto-incr
mapred.skip.map.max.skip.records mapreduce.map.skip.maxrecords
mapred.skip.on mapreduce.job.skiprecords
mapred.skip.out.dir mapreduce.job.skip.outdir
mapred.skip.reduce.auto.incr.proc.count mapreduce.reduce.skip.proc-count.auto-incr
mapred.skip.reduce.max.skip.groups mapreduce.reduce.skip.maxgroups
mapred.speculative.execution.slowNodeThreshold mapreduce.job.speculative.slownodethreshold
mapred.speculative.execution.slowTaskThreshold mapreduce.job.speculative.slowtaskthreshold
mapred.speculative.execution.speculativeCap mapreduce.job.speculative.speculativecap
mapred.submit.replication mapreduce.client.submit.file.replication
mapred.system.dir mapreduce.jobtracker.system.dir
mapred.task.cache.levels mapreduce.jobtracker.taskcache.levels
mapred.task.id mapreduce.task.attempt.id
mapred.task.is.map mapreduce.task.ismap
mapred.task.partition mapreduce.task.partition
mapred.task.profile mapreduce.task.profile
mapred.task.profile.maps mapreduce.task.profile.maps
mapred.task.profile.params mapreduce.task.profile.params
mapred.task.profile.reduces mapreduce.task.profile.reduces
mapred.task.timeout mapreduce.task.timeout
mapred.tasktracker.dns.interface mapreduce.tasktracker.dns.interface
mapred.tasktracker.dns.nameserver mapreduce.tasktracker.dns.nameserver
mapred.tasktracker.events.batchsize mapreduce.tasktracker.events.batchsize
mapred.tasktracker.expiry.interval mapreduce.jobtracker.expire.trackers.interval
mapred.task.tracker.http.address mapreduce.tasktracker.http.address
mapred.tasktracker.indexcache.mb mapreduce.tasktracker.indexcache.mb
mapred.tasktracker.instrumentation mapreduce.tasktracker.instrumentation
mapred.tasktracker.map.tasks.maximum mapreduce.tasktracker.map.tasks.maximum
mapred.tasktracker.memory_calculator_plugin mapreduce.tasktracker.resourcecalculatorplugin
mapred.tasktracker.memorycalculatorplugin mapreduce.tasktracker.resourcecalculatorplugin
mapred.tasktracker.reduce.tasks.maximum mapreduce.tasktracker.reduce.tasks.maximum
mapred.task.tracker.report.address mapreduce.tasktracker.report.address
mapred.task.tracker.task-controller mapreduce.tasktracker.taskcontroller
mapred.tasktracker.taskmemorymanager.monitoring-interval mapreduce.tasktracker.taskmemorymanager.monitoringinterval
mapred.tasktracker.tasks.sleeptime-before-sigkill mapreduce.tasktracker.tasks.sleeptimebeforesigkill
mapred.temp.dir mapreduce.cluster.temp.dir
mapred.text.key.comparator.options mapreduce.partition.keycomparator.options
mapred.text.key.partitioner.options mapreduce.partition.keypartitioner.options
mapred.textoutputformat.separator mapreduce.output.textoutputformat.separator
mapred.tip.id mapreduce.task.id
mapreduce.combine.class mapreduce.job.combine.class
mapreduce.inputformat.class mapreduce.job.inputformat.class
mapreduce.job.counters.limit mapreduce.job.counters.max
mapreduce.jobtracker.permissions.supergroup mapreduce.cluster.permissions.supergroup
mapreduce.map.class mapreduce.job.map.class
mapreduce.outputformat.class mapreduce.job.outputformat.class
mapreduce.partitioner.class mapreduce.job.partitioner.class
mapreduce.reduce.class mapreduce.job.reduce.class
mapred.used.genericoptionsparser mapreduce.client.genericoptionsparser.used
mapred.userlog.limit.kb mapreduce.task.userlog.limit.kb
mapred.userlog.retain.hours mapreduce.job.userlog.retain.hours
mapred.working.dir mapreduce.job.working.dir
mapred.work.output.dir mapreduce.task.output.dir
min.num.spills.for.combine mapreduce.map.combine.minspills
reduce.output.key.value.fields.spec mapreduce.fieldsel.reduce.output.key.value.fields.spec
security.job.submission.protocol.acl security.job.client.protocol.acl
security.task.umbilical.protocol.acl security.job.task.protocol.acl
sequencefile.filter.class mapreduce.input.sequencefileinputfilter.class
sequencefile.filter.frequency mapreduce.input.sequencefileinputfilter.frequency
sequencefile.filter.regex mapreduce.input.sequencefileinputfilter.regex
session.id dfs.metrics.session-id
slave.host.name dfs.datanode.hostname
slave.host.name mapreduce.tasktracker.host.name
tasktracker.contention.tracking mapreduce.tasktracker.contention.tracking
tasktracker.http.threads mapreduce.tasktracker.http.threads
topology.node.switch.mapping.impl net.topology.node.switch.mapping.impl
topology.script.file.name net.topology.script.file.name
topology.script.number.args net.topology.script.number.args
user.name mapreduce.job.user.name
webinterface.private.actions mapreduce.jobtracker.webinterface.trusted
The following table lists additional changes to some configuration properties:
Deprecated property name New property name
mapred.create.symlink NONE - symlinking is always on
mapreduce.job.cache.symlink.create NONE - symlinking is always on
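If you grep your own core-site.xml / hdfs-site.xml / mapred-site.xml and spot one of the old names above, the fix is just a rename of the property, the value stays the same. The classic one is fs.default.name (the hdfs:// URI below is only a placeholder, keep whatever you already configured):
<!-- old name: still works for now but logs a deprecation warning -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://knas312:9000</value>
</property>
<!-- new name, same value -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://knas312:9000</value>
</property>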
