Adding a node to a Hadoop distributed cluster

- 1. Install JDK 1.6.0_26 (the Java environment)
- scp /etc/profile newnodeip:/etc    (newnodeip is the new node's address; /etc/profile carries the Java environment variables)
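The /etc/profile copied in step 1 is assumed to hold the Java environment variables. A minimal sketch of what those entries typically look like for this JDK; the install prefix /usr/java/jdk1.6.0_26 is an assumption, not taken from the original:

```bash
# Assumed Java environment entries in /etc/profile
# (the prefix /usr/java/jdk1.6.0_26 is a guess based on the JDK version above)
export JAVA_HOME=/usr/java/jdk1.6.0_26
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

# Reload and verify on the new node
source /etc/profile
java -version
```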
- 2. Install Hadoop, HBase, and ZooKeeper
- yum install hadoop-0.20
- yum install hadoop-0.20-tasktracker
- yum install hadoop-0.20-namenode
- yum install hadoop-0.20-datanode
- yum install hadoop-hbase
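The heading mentions ZooKeeper, but no ZooKeeper package appears in the list (in the CDH3 repository it would be the hadoop-zookeeper package). A quick verification sketch to confirm everything needed actually landed on the new node:

```bash
# List the Hadoop/HBase/ZooKeeper packages that yum installed
rpm -qa | grep -E 'hadoop|hbase|zookeeper'

# The packages register init scripts; the relevant services should show up here
chkconfig --list | grep -E 'hadoop|hbase'
```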
- 3. Copy the configuration files from the master node to the new node
- scp -r /etc/hadoop-0.20/conf newnodeip:/etc/hadoop-0.20
- scp -r /etc/hbase/conf newnodeip:/etc/hbase
- scp -r /etc/zookeeper/conf newnodeip:/etc/zookeeper
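After the copy it is worth confirming that the new node points at the same NameNode and ZooKeeper quorum as the rest of the cluster. A small check, assuming the standard property names for Hadoop 0.20 and HBase:

```bash
# The copied core-site.xml should reference the existing NameNode
grep -A1 fs.default.name /etc/hadoop-0.20/conf/core-site.xml

# HBase should reference the same ZooKeeper quorum as the master
grep -A1 hbase.zookeeper.quorum /etc/hbase/conf/hbase-site.xml
```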
- 4. Raise the maximum number of open files for the hdfs and hbase users:
- vim /etc/security/limits.conf
- hdfs - nofile 32768
- hbase - nofile 32768
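The limit only takes effect for new login sessions. A quick check that it applied, assuming the hdfs and hbase users were created by the packages installed above:

```bash
# Log in as each user and confirm the new nofile limit
su - hdfs -c 'ulimit -n'     # expect 32768
su - hbase -c 'ulimit -n'    # expect 32768
```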
- 5. Add DNS records for the new node on the master host
- vim /var/named/hdfs.zone
- $TTL 86400
- @ IN SOA hdfs. root (
-         200101111   ; serial
-         14400       ; refresh
-         3600        ; retry
-         604800      ; expire
-         86400 )     ; minimum TTL
- master-hadoop IN A 192.168.5.249
- slave1-hadoop IN A 192.168.5.201
- hostname IN A newip    ; add an A record with the new node's hostname and IP
- master-hbase IN A 192.168.5.249
- slave1-hbase IN A 192.168.5.201
- hostname IN A newip    ; add an A record with the new node's hostname and IP
- @ IN NS ns.hdfs.
- /etc/rc.d/init.d/named restart
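For the zone file to actually be served, named also needs a matching zone declaration, which the original steps do not show. A hedged sketch assuming the stock /etc/named.conf layout (directory "/var/named"), followed by a resolution check:

```bash
# Assumed zone declaration in /etc/named.conf (not shown in the original steps)
cat >> /etc/named.conf <<'EOF'
zone "hdfs" IN {
        type master;
        file "hdfs.zone";
        allow-update { none; };
};
EOF

# Reload after adding the declaration
/etc/rc.d/init.d/named restart

# Verify the records resolve against the master's DNS
dig @192.168.5.249 master-hadoop.hdfs +short
dig @192.168.5.249 slave1-hadoop.hdfs +short
```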
- 6. Configure DNS resolution on the node itself
- vi /etc/resolv.conf
- search hdfs
- domain hdfs
- nameserver 192.168.5.249
- nameserver 202.106.0.20
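With resolv.conf in place, resolution can be checked from the node itself (a verification sketch; the host utility comes from the bind-utils package):

```bash
# Short names should resolve through the "search hdfs" suffix
host master-hadoop
host slave1-hadoop.hdfs
```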
- 7. Add the new slave node to the configuration on the master host:
- vim /etc/hadoop-0.20/conf/slaves
- slave1-hadoop.hdfs
- slave2-hadoop.hdfs
- slave4-hadoop.hdfs
- slave3-hadoop.hdfs
- hostname    (add the new node's hostname here)
- vim /etc/hbase/conf/regionservers
- slave1-hadoop.hdfs
- slave2-hadoop.hdfs
- slave4-hadoop.hdfs
- slave3-hadoop.hdfs
- hostname    (add the new node's hostname here)
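If the cluster keeps identical configuration files on every host, the edited slaves and regionservers files can be pushed back out afterwards; a sketch under that assumption, reusing the hostnames listed above:

```bash
# Optional: distribute the edited files so every node carries the same configuration
for node in slave1-hadoop.hdfs slave2-hadoop.hdfs slave3-hadoop.hdfs slave4-hadoop.hdfs; do
    scp /etc/hadoop-0.20/conf/slaves "$node":/etc/hadoop-0.20/conf/
    scp /etc/hbase/conf/regionservers "$node":/etc/hbase/conf/
done
```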
- 8. Set up passwordless SSH authentication
- scp /home/hdfs/.ssh/* newip:/home/hdfs/.ssh
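Copying the whole .ssh directory assumes every node shares the same key pair. An alternative sketch that only appends the master's public key (ssh-copy-id ships with OpenSSH), plus a login check:

```bash
# Alternative: append the master's public key instead of overwriting ~/.ssh
ssh-copy-id -i /home/hdfs/.ssh/id_rsa.pub hdfs@newip

# The login should now succeed without a password prompt
ssh hdfs@newip hostname
```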
- 9. Start the services on the new node:
- /usr/lib/hadoop-0.20/bin/hadoop-daemon.sh start datanode
- /usr/lib/hadoop-0.20/bin/hadoop-daemon.sh start tasktracker
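Once the daemons are up, the master should report the additional DataNode. A verification sketch run from the master, assuming the same /usr/lib/hadoop-0.20 install prefix as above; rebalancing afterwards is optional:

```bash
# The new DataNode should appear in the live-node report
/usr/lib/hadoop-0.20/bin/hadoop dfsadmin -report | grep -A1 Name

# On the new node itself, both daemons should show up in jps
jps    # expect DataNode and TaskTracker

# Optionally spread existing blocks onto the new node
/usr/lib/hadoop-0.20/bin/hadoop balancer -threshold 10
```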