1. Starting and Testing Hadoop

First, on node, enter the Hadoop installation directory: cd /home/xu/hadoop-2.6.5/

 ·Format the file system: just as a new hard drive is formatted with NTFS or FAT32, Hadoop formats the NameNode's storage as an HDFS file system.

Run bin/hdfs namenode -format
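Re-running the format command is a common source of trouble: each format writes a fresh clusterID into the NameNode's VERSION file, and DataNodes still carrying the old ID will refuse to register. A small sketch for comparing the IDs; the storage paths in the usage comment are assumptions, so check dfs.namenode.name.dir and dfs.datanode.data.dir in hdfs-site.xml for the real ones:

```shell
#!/bin/bash
# Extract the clusterID from a Hadoop storage VERSION file. Re-running
# "hdfs namenode -format" generates a NEW clusterID; DataNodes whose
# storage still holds the old one will fail to register, so compare
# the NameNode's and each DataNode's IDs after a reformat.
get_cluster_id() {
    # $1: path to a VERSION file (namenode or datanode storage dir)
    grep '^clusterID=' "$1" | cut -d= -f2
}

# Usage sketch (paths below are assumptions for this article's layout):
#   get_cluster_id /home/xu/hadoop-2.6.5/hdfs/name/current/VERSION
#   get_cluster_id /home/xu/hadoop-2.6.5/hdfs/data/current/VERSION
```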

 


 ·Start HDFS: sbin/start-dfs.sh

Run jps on each host to verify: node is running the NameNode, node1 is running a DataNode and the SecondaryNameNode, and node2 is running a DataNode.
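The per-host jps check can be wrapped in a small helper that reports which expected daemons are missing from a host's jps output (a sketch; the daemon and node names are the ones listed above):

```shell
#!/bin/bash
# Check that the expected Hadoop daemons appear in jps output.
# missing_daemons "<jps output>" daemon...  prints each expected
# daemon that is absent; it prints nothing when all are running.
missing_daemons() {
    local jps_out="$1"; shift
    local d
    for d in "$@"; do
        # -w matches whole words, so "DataNode" does not falsely
        # match inside "SecondaryNameNode"
        grep -qw "$d" <<< "$jps_out" || echo "$d"
    done
}

# On the live cluster one might run (node names from this article):
#   missing_daemons "$(ssh root@node1 jps)" DataNode SecondaryNameNode
```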

Then run netstat -nao|grep 50070; curl http://node:50070, and try opening http://10.0.0.13:50070 in a Windows browser.
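As an alternative to netstat and curl, a quick port probe can be done with bash's built-in /dev/tcp pseudo-device (a sketch; the host names and ports follow this article's setup):

```shell
#!/bin/bash
# Probe a TCP port without netstat or curl, using bash's /dev/tcp
# pseudo-device. Returns 0 if something is listening, non-zero
# otherwise; "timeout" caps the wait for unreachable hosts.
port_open() {
    local host=$1 port=$2
    timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null
}

# Usage against this article's cluster:
#   port_open node 50070 && echo "NameNode web UI is up"
#   port_open node 8088  && echo "ResourceManager web UI is up"
```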

 ·Start YARN: sbin/start-yarn.sh, then visit http://node:8088

 ·Start the JobHistory Server: sbin/mr-jobhistory-daemon.sh start historyserver, then visit http://node:19888


#!/bin/bash

# Maintenance menu for the three-node cluster: wipe old data, format
# HDFS, and start or stop all services. "select" is a bash builtin,
# so the shebang must be bash, not sh.
select ch in "remove" "format" "start" "stop"
do
    case $ch in
    "remove")
        path=/home/xu/hadoop-2.6.5/&&
        for ip in "node" "node1" "node2";
        do
                # wipe HDFS data, temp files and logs on every host
                ssh root@$ip "cd $path&&rm -rf hdfs&&rm -rf tmp&&rm -rf logs"
                echo ">>In $ip, the dirs tmp, hdfs and logs are removed";
        done
        break;
        ;;
    "format")
        cd /home/xu/hadoop-2.6.5/bin&&
        hdfs namenode -format&&
        echo ">>hdfs namenode -format is OK!";
        break;
        ;;
    "start")
        echo ">>source /etc/profile, then start-dfs.sh; NameNode UI at node:50070"&&
        source /etc/profile&&source /root/.bashrc&&
        cd /home/xu/hadoop-2.6.5/sbin/&&
        start-dfs.sh&&
        echo ">>start-yarn.sh; ResourceManager UI at node:8088"&&
        start-yarn.sh&&
        echo ">>start jobhistory; JobHistory UI at node:19888"&&
        mr-jobhistory-daemon.sh start historyserver&&
        for ip in "node" "node1" "node2";
        do
                # list the running daemons and open up local permissions
                ssh root@$ip "jps&&chmod -R 777 /home/xu/hadoop-2.6.5"&&
                echo ">>In $ip: jps done"
        done
        hadoop fs -chmod -R 777 /&&
        break;
        ;;
    "stop")
        cd /home/xu/hadoop-2.6.5/sbin/&&
        stop-dfs.sh&&stop-yarn.sh&&mr-jobhistory-daemon.sh stop historyserver&&
        echo ">>stopping hdfs, yarn and the JobHistory Server is OK!";
        break;
        ;;
    *)
        echo ">>invalid choice"
        ;;
        esac
done;

The shell script above bundles these steps (remove/format/start/stop) into a single menu.


 

 ·Test the wordcount example that ships with Hadoop (explained on the Hadoop website). Note the permissions first: chmod -R 777 /home/xu; hadoop fs -chmod -R 777 /

[root@node ~]# hadoop fs -mkdir -p {/test/WordCount,/result}
[root@node ~]# hadoop fs -put /home/xu/hadoop-2.6.5/*.txt /test/WordCount
[root@node ~]# hadoop jar /home/xu/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar wordcount /test/WordCount /result/WordCount
[root@node ~]# hadoop fs -cat /result/WordCount/part-r-00000
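What the wordcount job computes can be reproduced locally for a small file with a plain shell pipeline, which is handy for sanity-checking the job's output (a rough sketch: Hadoop's example tokenizes with Java's StringTokenizer, so punctuation handling may differ):

```shell
#!/bin/bash
# Local sketch of what the wordcount job computes: split text into
# words, then count occurrences of each, emitting "word<TAB>count"
# lines like the part-r-00000 output.
wordcount() {
    tr -s '[:space:]' '\n'      \
        | grep -v '^$'          \
        | sort | uniq -c        \
        | awk '{print $2"\t"$1}'   # uniq -c puts the count first; swap
}

# Example: prints each word with its count, e.g. "hadoop<TAB>2":
#   printf 'hadoop hdfs hadoop\n' | wordcount
```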

2. Dynamically Adding and Removing DataNodes (omitted)

3. HDFS Shell Commands

 ·As a file system, HDFS naturally offers the same basic command-line file operations as Windows and Linux, and exposes them through APIs in various programming languages.
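That parallel is nearly one-to-one: most everyday Linux file commands have a same-named HDFS counterpart. A toy lookup table makes the correspondence explicit (a sketch covering only a few commands; see the official File System Shell documentation for the full list):

```shell
#!/bin/bash
# Map a familiar Linux file command to its HDFS shell equivalent.
# Only a handful of common commands are covered here.
hdfs_equiv() {
    case "$1" in
        ls)    echo "hadoop fs -ls" ;;
        mkdir) echo "hadoop fs -mkdir" ;;
        cp)    echo "hadoop fs -cp" ;;
        rm)    echo "hadoop fs -rm" ;;
        cat)   echo "hadoop fs -cat" ;;
        *)     echo "see: hadoop fs -help" ;;
    esac
}

# Example:
#   hdfs_equiv ls    # -> hadoop fs -ls
```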

 


See the official Hadoop Shell command reference and the HDFS Java API; these take plenty of practice to master!