I. Environment preparation
Prepare three servers and edit /etc/hosts on each (vi /etc/hosts), adding:

192.168.37.242 node1
192.168.37.242 node2
192.168.37.242 node3

(Note: the source lists the same IP for all three hostnames; on a real three-node cluster, each hostname should map to that node's own IP address.)
II. Configure the master (node1)
core-site.xml

<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/home/hadoop/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node1:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131702</value>
    </property>
</configuration>
hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/hadoop/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/hadoop/tmp/dfs/data</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>node1:9001</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
mapred-site.xml (create it from the template first: mv mapred-site.xml.template mapred-site.xml)

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>node1:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>node1:19888</value>
    </property>
</configuration>
yarn-site.xml (the shuffle-class property name is corrected here; the source had the common misspelling "yarn.nodemanager.auxservices.mapreduce.shuffle.class")

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>node1:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>node1:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>node1:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>node1:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>node1:8088</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>768</value>
    </property>
</configuration>
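All four files above share the same <configuration>/<property>/<name>/<value> schema, so they can also be generated rather than hand-edited. A minimal sketch in Python (the make_site_xml helper is an illustration, not part of Hadoop):

```python
import xml.etree.ElementTree as ET

def make_site_xml(props):
    """Render a dict of Hadoop properties as a *-site.xml <configuration> block."""
    conf = ET.Element("configuration")
    for name, value in props.items():
        prop = ET.SubElement(conf, "property")
        ET.SubElement(prop, "name").text = name
        ET.SubElement(prop, "value").text = str(value)
    return ET.tostring(conf, encoding="unicode")

# The core-site.xml properties from the section above.
core_site = make_site_xml({
    "fs.defaultFS": "hdfs://node1:9000",
    "hadoop.tmp.dir": "file:/home/hadoop/tmp",
    "io.file.buffer.size": 131702,
})
print(core_site)
```

The same helper works for hdfs-site.xml, mapred-site.xml, and yarn-site.xml with their respective property dicts.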
Edit etc/hadoop/slaves, commenting out localhost and listing the worker nodes:

#localhost
node2
node3
III. Copy the node1 installation files to node2 and node3
scp -r hadoop-2.7.0/ node2:/home/hadoop/
scp -r hadoop-2.7.0/ node3:/home/hadoop/
IV. Start the cluster
1. On the master (node1), run bin/hdfs namenode -format to initialize the NameNode.
2. In the sbin directory, run ./start-all.sh.
3. Use jps to verify the daemons are running.
4. Check the web UI at http://192.168.37.242:8088/.
5. To stop the cluster, run sbin/stop-all.sh.
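The web check in step 4 can also be done programmatically, which is handy in a provisioning script. A minimal sketch in Python (the cluster_web_ui_up helper and the 3-second timeout are assumptions; the URL is the master's ResourceManager address from step 4):

```python
from urllib.request import urlopen
from urllib.error import URLError

def cluster_web_ui_up(url="http://192.168.37.242:8088/", timeout=3):
    """Return True if the YARN ResourceManager web UI answers with HTTP 200."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        # Connection refused, unreachable host, or timeout: cluster not up yet.
        return False

print(cluster_web_ui_up())
```

This only confirms the ResourceManager web server is answering; jps (step 3) remains the way to check the individual daemons on each node.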