hadoop fs

hadoop fs -cmd 

cmd: the specific operation; these largely mirror the equivalent UNIX shell commands

hadoop fs -mkdir /user/trunk (create a directory)

hadoop fs -ls /user (list the files in a directory)

hadoop fs -lsr /user (list recursively)

hadoop fs -put test.txt /user/trunk (copy a local file into the /user/trunk directory)

hadoop fs -put test.txt . (copy into the current HDFS directory, which must be created first)

hadoop fs -put test.txt /home/output (upload test.txt to the HDFS directory /home/output)

hadoop fs -get /user/trunk/test.txt . (copy the file to the current local directory)
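The -put and -get commands above can be chained into a quick upload/download round trip. A minimal sketch, assuming a running HDFS cluster and an existing /user/trunk directory (paths are illustrative); the HDFS side is guarded so the script is safe to run anywhere:

```shell
# Create a small local file to round-trip (runs anywhere)
echo "hello hdfs" > test.txt

# The HDFS side needs a running cluster; guard on the hadoop client being present
if command -v hadoop >/dev/null 2>&1; then
    hadoop fs -put test.txt /user/trunk            # upload
    hadoop fs -cat /user/trunk/test.txt            # verify the contents
    hadoop fs -get /user/trunk/test.txt copy.txt   # download under a new name
fi
```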

hadoop fs -cat /user/trunk/test.txt (print the file's contents)

hadoop fs -touchz /user/new.txt (create an empty file at the given HDFS path)

hadoop fs -tail /user/trunk/test.txt (show the last kilobyte of the file)

hadoop fs -rm /user/trunk/test.txt (delete a file)

hadoop fs -rmr /user/trunk (delete a directory and its contents recursively)

hadoop fs -cp /user/a.txt /user/b.txt (copy a file)

hadoop fs -mv /user/test.txt /user/ok.txt (rename a file on HDFS)

hadoop fs -getmerge /user /home/t (merge everything under the given HDFS directory into a single file and download it to the local path)
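-getmerge is commonly used to collect a MapReduce job's part-* output files into one local file. The merge semantics can be illustrated locally with cat over a directory of part files (directory and file names below are made up for the demo); the real command against a cluster is shown commented:

```shell
# Local illustration of what -getmerge does: concatenate every file
# under a directory, in order, into a single output file
mkdir -p demo_dir
printf 'part 0\n' > demo_dir/part-00000
printf 'part 1\n' > demo_dir/part-00001
cat demo_dir/part-* > merged.txt

# The equivalent against a real cluster (illustrative paths):
# hadoop fs -getmerge /user/output /home/t/merged.txt
```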

hadoop fs -help ls (show the help text for the ls command)

hadoop job -kill [job-id] (kill a running Hadoop job)

Hadoop Admin

hadoop dfsadmin -safemode leave (take the NameNode out of safe mode)

hadoop dfsadmin -report (show basic cluster statistics, including the DataNode list)
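A quick admin health check might combine the two dfsadmin commands above. A sketch, guarded so it runs harmlessly on a machine without a hadoop client on the PATH (cluster output obviously requires a running cluster):

```shell
# Guarded admin check: only talks to the cluster when a hadoop client exists
if command -v hadoop >/dev/null 2>&1; then
    hadoop dfsadmin -safemode get   # query the current safe-mode state
    hadoop dfsadmin -report         # capacity, usage, and live DataNodes
    status="cluster"
else
    status="no-hadoop"
fi
echo "admin check: $status"
```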