Wednesday 16 April 2014

READ.txt

Apache-MultiNode-Insatallation-Shellscript
==========================================
A shell script for installing a multi-node Apache Hadoop (YARN) cluster.
Please download the two scripts, Apache_MultiNode.sh and Slave_Install.sh.
Before you run the script, run the commands below on the NameNode:
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub $USER@tony.com
ssh-copy-id -i ~/.ssh/id_rsa.pub $USER@tony1.com
ssh-copy-id -i ~/.ssh/id_rsa.pub $USER@tony2.com
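To check that passwordless SSH works before running the installer (the hostnames below are the example nodes used in this guide), you can try something like:

for host in tony.com tony1.com tony2.com; do ssh $USER@$host hostname; done

Each ssh call should print the remote hostname without asking for a password.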
Then run the command below on your Ubuntu machine:
bash Apache_MultiNode.sh
It will then ask for the NameNode IP address and hostname; enter them in the format shown below:
192.168.0.1 tony.com
It will then ask how many DataNodes you want in your cluster; for example:
3
It will then ask for the IP address and hostname of each DataNode; enter them in the format shown below.
Note: if you want the NameNode to also act as a DataNode, include the NameNode's IP address and hostname in the DataNode list:
192.168.0.1 tony.com
192.168.0.2 tony1.com
192.168.0.3 tony2.com
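For reference, with the inputs above the script builds a host file (NameNode entry first, duplicate lines removed), copies it to /etc/hosts on the NameNode, and ships a copy to each DataNode's home directory. It should end up looking roughly like:

192.168.0.1 tony.com
192.168.0.2 tony1.com
192.168.0.3 tony2.com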
Finally, set the PATH and HADOOP_HOME in /etc/environment.
Add to PATH: /usr/local/had/hadoop/bin:/usr/local/had/hadoop/sbin
Set: HADOOP_HOME="/usr/local/had/hadoop"
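As a rough sketch, /etc/environment might look like the lines below after the change (this assumes Hadoop was unpacked under /usr/local/had/hadoop as above; keep whatever PATH entries your system already has):

PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/had/hadoop/bin:/usr/local/had/hadoop/sbin"
HADOOP_HOME="/usr/local/had/hadoop"

Log out and back in (or run source /etc/environment) and check the setup with hadoop version.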
If you have any problems, send mail to tony.ntw@gmail.com.

Apache_MultiNode.sh

#!/bin/bash
echo "Enter MasterNode ipaddress space hostname::"$cond
read cond
echo $cond > master
echo "Enter Datanode cluster number::"$cond1
read cond1
echo "Enter Sudo User Password::"$cond3
read cond3
# Write the sudo password to pas.txt only if one was actually entered
if [ -z "$cond3" ]
then
echo "Password file configuration skipped .................."
else
echo "$cond3" > pas.txt
echo "Password file configuration finished .................."
fi
# Collect each DataNode's "ipaddress hostname" pair into the file 'datanode'
for ((i=1; i<=$cond1; i++));
do
echo "Enter Datanode ipaddress space hostname::"
read cond2
echo $cond2 >> datanode
done
# Build the cluster hosts file: master entry first, then the DataNodes, duplicate lines removed
cat master > host
cat datanode >> host
awk '!x[$0]++' host > hos
mv hos host
# Install the hosts file locally; sudo -S reads the password from pas.txt
cat pas.txt | sudo -S cp host /etc/hosts
# Download the Hadoop 2.2.0 tarball that will be shipped to every node
wget http://archive.apache.org/dist/hadoop/common/stable/hadoop-2.2.0.tar.gz
# For every DataNode: copy the support files plus the Hadoop tarball over and run the slave installer
for ((i=1; i<=$cond1; i++));
do
# Field 1 of 'datanode' holds the IP addresses; pick the i-th one as the current target
cut -d' ' -f1 datanode > slv
sl=$( sed -n "$i"p slv )
echo $sl
# Field 2 holds the hostnames; they are written to the file 'slave' and shipped to each node
cut -d' ' -f2 datanode > slave
echo "$cond3" > pas.txt
echo $cond > master
scp -r host $sl:~
scp -r pas.txt $sl:~
scp -r master $sl:~
scp -r slave $sl:~
scp -r Slave_isnt.sh $sl:~
scp -r hadoop-2.2.0.tar.gz $sl:~
# Run the slave installation script on the DataNode over SSH
ssh $sl sh Slave_isnt.sh
done
# Finally run the slave installation script on the master node itself
bash Slave_isnt.sh
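After both scripts finish, a quick sanity check (assuming the slave installer has unpacked Hadoop and started the daemons; otherwise start them first with start-dfs.sh and start-yarn.sh from the sbin directory added to PATH above) is:

jps
hdfs dfsadmin -report

jps should list NameNode and ResourceManager on the master and DataNode and NodeManager on the slaves, and the report should show every DataNode as live.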