During environment setup we have already used commands that begin with hadoop fs, where fs stands for file system.

List all the HDFS shell commands:

[root@localhost current]# hadoop fs
Usage: hadoop fs [generic options]
        [-appendToFile <localsrc> ... <dst>]
        [-cat [-ignoreCrc] <src> ...]
        [-checksum <src> ...]
        [-chgrp [-R] GROUP PATH...]
        [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
        [-chown [-R] [OWNER][:[GROUP]] PATH...]
        [-copyFromLocal [-f] [-p] <localsrc> ... <dst>]
        [-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-count [-q] <path> ...]
        [-cp [-f] [-p] <src> ... <dst>]
        [-createSnapshot <snapshotDir> [<snapshotName>]]
        [-deleteSnapshot <snapshotDir> <snapshotName>]
        [-df [-h] [<path> ...]]
        [-du [-s] [-h] <path> ...]
        [-expunge]
        [-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-getfacl [-R] <path>]
        [-getmerge [-nl] <src> <localdst>]
        [-help [cmd ...]]
        [-ls [-d] [-h] [-R] [<path> ...]]
        [-mkdir [-p] <path> ...]
        [-moveFromLocal <localsrc> ... <dst>]
        [-moveToLocal <src> <localdst>]
        [-mv <src> ... <dst>]
        [-put [-f] [-p] <localsrc> ... <dst>]
        [-renameSnapshot <snapshotDir> <oldName> <newName>]
        [-rm [-f] [-r|-R] [-skipTrash] <src> ...]
        [-rmdir [--ignore-fail-on-non-empty] <dir> ...]
        [-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
        [-setrep [-R] [-w] <rep> <path> ...]
        [-stat [format] <path> ...]
        [-tail [-f] <file>]
        [-test -[defsz] <path>]
        [-text [-ignoreCrc] <src> ...]
        [-touchz <path> ...]
        [-usage [cmd ...]]

Generic options supported are
-conf <configuration file>     specify an application configuration file
-D <property=value>            use value for given property
-fs <local|namenode:port>      specify a namenode
-jt <local|jobtracker:port>    specify a job tracker
-files <comma separated list of files>    specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>    specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]

  Note: HDFS does not allow modifying the contents of an existing file, but appending is supported. An in-place modification would require re-splitting and rearranging the blocks the file has already been divided into, so it is not supported; an append simply adds new blocks at the end of the file, which is why it works.
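
For example, appending a local file to a file that already exists in HDFS (./more.log is a hypothetical local file):

[root@localhost current]# hadoop fs -appendToFile ./more.log /install.log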

Note: for hadoop commands, / means the root directory of HDFS, not the root of the Linux file system. You can of course also write out the full URI, hdfs://localhost:9000/.
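
For instance, assuming the namenode is at hdfs://localhost:9000 as elsewhere in this post, the following two commands are equivalent:

[root@localhost current]# hadoop fs -ls /
[root@localhost current]# hadoop fs -ls hdfs://localhost:9000/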

Example: list the directory structure: -ls

[root@localhost current]# hadoop fs -ls /
Found 4 items
-rw-r--r--   1 root supergroup      37667 2018-04-11 03:29 /install.log
drwx------   - root supergroup          0 2018-04-11 03:54 /tmp
drwxr-xr-x   - root supergroup          0 2018-04-11 03:54 /user
drwxr-xr-x   - root supergroup          0 2018-04-11 04:10 /wordcount

  Like ls in Linux, the output shows the entry type and permissions. The 1 in the second column is the number of replicas kept in the cluster (the replication factor); directories show - there instead.
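
The replication factor can be changed per file with -setrep (listed above as [-setrep [-R] [-w] <rep> <path> ...]). A sketch; on a single-node setup like this one a factor of 2 can never actually be satisfied, so this is illustration only:

[root@localhost current]# hadoop fs -setrep 2 /install.log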

Change the owner of a file: -chown

[root@localhost current]# hadoop fs -chown hadoop /install.log
[root@localhost current]# hadoop fs -ls /
Found 4 items
-rw-r--r--   1 hadoop supergroup      37667 2018-04-11 03:29 /install.log
drwx------   - root   supergroup          0 2018-04-11 03:54 /tmp
drwxr-xr-x   - root   supergroup          0 2018-04-11 03:54 /user
drwxr-xr-x   - root   supergroup          0 2018-04-11 04:10 /wordcount
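
As in Linux, the owner and group can be set together using the OWNER:GROUP form (the group name hadoop here is hypothetical; HDFS stores it as a plain string, so it does not have to exist as a Linux group):

[root@localhost current]# hadoop fs -chown hadoop:hadoop /install.log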

Change file permissions: -chmod

[root@localhost current]# hadoop fs -chmod 777 /install.log
[root@localhost current]# hadoop fs -ls /
Found 4 items
-rwxrwxrwx   1 hadoop supergroup      37667 2018-04-11 03:29 /install.log
drwx------   - root   supergroup          0 2018-04-11 03:54 /tmp
drwxr-xr-x   - root   supergroup          0 2018-04-11 03:54 /user
drwxr-xr-x   - root   supergroup          0 2018-04-11 04:10 /wordcount

Upload a file: -copyFromLocal (equivalent to -put)

[root@localhost ~]# hadoop fs -copyFromLocal ./anaconda-ks.cfg /
[root@localhost ~]# hadoop fs -ls /
Found 5 items
-rw-r--r--   1 root   supergroup       2388 2018-04-11 05:30 /anaconda-ks.cfg
-rwxrwxrwx   1 hadoop supergroup      37667 2018-04-11 03:29 /install.log
drwx------   - root   supergroup          0 2018-04-11 03:54 /tmp
drwxr-xr-x   - root   supergroup          0 2018-04-11 03:54 /user
drwxr-xr-x   - root   supergroup          0 2018-04-11 04:10 /wordcount

Download a file: -copyToLocal (equivalent to -get)

[root@localhost test]# hadoop fs -get /install.log
[root@localhost test]# ls
install.log
[root@localhost test]# hadoop fs -copyToLocal /anaconda-ks.cfg
[root@localhost test]# ls
anaconda-ks.cfg  install.log

Copy a file: -cp (copies directly within HDFS)

[root@localhost test]# hadoop fs -cp /install.log /wordcount/
[root@localhost test]# hadoop fs -ls /
Found 5 items
-rw-r--r--   1 root   supergroup       2388 2018-04-11 05:30 /anaconda-ks.cfg
-rwxrwxrwx   1 hadoop supergroup      37667 2018-04-11 03:29 /install.log
drwx------   - root   supergroup          0 2018-04-11 03:54 /tmp
drwxr-xr-x   - root   supergroup          0 2018-04-11 03:54 /user
drwxr-xr-x   - root   supergroup          0 2018-04-11 05:35 /wordcount
[root@localhost test]# hadoop fs -ls /wordcount/
Found 3 items
drwxr-xr-x   - root supergroup          0 2018-04-11 04:08 /wordcount/input
-rw-r--r--   1 root supergroup      37667 2018-04-11 05:35 /wordcount/install.log
drwxr-xr-x   - root supergroup          0 2018-04-11 04:10 /wordcount/output

Check the size of directories and files: -du

[root@localhost test]# hadoop fs -du -s -h hdfs://localhost:9000/*  # show the size of each directory and file under the root (the full path has to be written out here)
2.3 K  hdfs://localhost:9000/anaconda-ks.cfg
36.8 K  hdfs://localhost:9000/install.log
269.6 K  hdfs://localhost:9000/tmp
0  hdfs://localhost:9000/user
36.8 K  hdfs://localhost:9000/wordcount

Create a directory: -mkdir (by default it does not create parent directories recursively; see the -p example after the transcript below)

[root@localhost test]# hadoop fs -mkdir /aa
[root@localhost test]# hadoop fs -ls /
Found 6 items
drwxr-xr-x   - root   supergroup          0 2018-04-11 05:58 /aa
-rw-r--r--   1 root   supergroup       2388 2018-04-11 05:30 /anaconda-ks.cfg
-rwxrwxrwx   1 hadoop supergroup      37667 2018-04-11 03:29 /install.log
drwx------   - root   supergroup          0 2018-04-11 03:54 /tmp
drwxr-xr-x   - root   supergroup          0 2018-04-11 03:54 /user
drwxr-xr-x   - root   supergroup          0 2018-04-11 05:35 /wordcount
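
To create nested directories in one step, add the -p flag (the path /aa/bb/cc is just an example):

[root@localhost test]# hadoop fs -mkdir -p /aa/bb/cc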

Delete files and directories: -rm (to delete a directory you must add the -r flag, meaning delete recursively)

[root@localhost test]# hadoop fs -rm -f /anaconda-ks.cfg

[root@localhost test]# hadoop fs -rm -f -r /aa

Move files: from local to HDFS, from HDFS to local, or within HDFS (a short sketch follows the option list below):

        [-moveFromLocal <localsrc> ... <dst>]
        [-moveToLocal <src> <localdst>]
        [-mv <src> ... <dst>]
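
A minimal sketch of the first and third forms (./data.txt and the target paths are hypothetical). Note that -moveFromLocal behaves like -put but deletes the local source once the copy succeeds:

[root@localhost test]# hadoop fs -moveFromLocal ./data.txt /wordcount/
[root@localhost test]# hadoop fs -mv /wordcount/data.txt /user/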

Watch the tail of a file, with optional live refreshing as new data arrives (similar to tail -f in Linux):

        [-tail [-f] <file>]
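
For example, to print the last kilobyte of /install.log and keep following it as data is appended:

[root@localhost test]# hadoop fs -tail -f /install.log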

A few common commands:

    1.0 view help
        hadoop fs -help <cmd>
    1.1 upload
        hadoop fs -put <file on Linux> <path on HDFS>
    1.2 view file contents
        hadoop fs -cat <path on HDFS>
    1.3 list files
        hadoop fs -ls /
    1.4 download a file
        hadoop fs -get <path on HDFS> <file on Linux>
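
Putting these together, a round trip might look like this (./notes.txt is a hypothetical local file):

[root@localhost test]# hadoop fs -put ./notes.txt /              # upload into the HDFS root
[root@localhost test]# hadoop fs -cat /notes.txt                 # print its contents
[root@localhost test]# hadoop fs -get /notes.txt ./notes2.txt    # download it under a new name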

[Once you have finished writing each blog post with real care, you will find it gives you an even greater sense of achievement than implementing a feature in code!]