File Operations
Creating directories manually
Syntax:
hadoop fs -mkdir <HDFS directory path>
Examples:
# Create a directory (a directory named input under the root directory)
[root@master ~]# hadoop fs -mkdir /input
# Create a directory (a directory named user under the root directory)
[root@master ~]# hdfs dfs -mkdir /user
# Create multi-level directories (-p creates missing parent directories)
[root@master ~]# hdfs dfs -mkdir -p /user/resource/example
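Without -p, -mkdir fails when the parent directory does not exist; with -p it also succeeds silently when the directory is already there. To verify that a path exists and is a directory, the -test option can be used; it exits with status 0 when the check passes. A small sketch reusing the /input directory created above:
# Check that /input exists and is a directory; echo runs only if the test passes
[root@master ~]# hdfs dfs -test -d /input && echo "directory exists"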
Uploading files manually
Syntax:
hadoop fs -put <local file path> <HDFS path>
Examples:
# Upload the local file /usr/text.txt to the /input directory
[root@master ~]# hadoop fs -put /usr/text.txt /input
# Upload the local file /usr/text.txt to the /user directory. -copyFromLocal: copy from the local filesystem (the local file is kept)
[root@master ~]# hdfs dfs -copyFromLocal /usr/text.txt /user
# Upload the local file /usr/text.txt to the /input directory
[root@master ~]# hdfs dfs -put /usr/text.txt /input
# Upload the local file /usr/text.txt to the /user directory. -moveFromLocal: move from the local filesystem (the local file is removed after upload)
[root@master ~]# hdfs dfs -moveFromLocal /usr/text.txt /user
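By default -put refuses to overwrite a file that already exists in HDFS; the -f flag forces the overwrite. A sketch reusing the paths above:
# Upload /usr/text.txt again, overwriting the copy already in /input
[root@master ~]# hadoop fs -put -f /usr/text.txt /input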
Listing files
[root@master ~]# hadoop fs -ls /
[root@master ~]# hdfs dfs -ls /
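-ls also accepts -R for a recursive listing and -h for human-readable file sizes, for example:
# Recursively list everything under the root directory with human-readable sizes
[root@master ~]# hadoop fs -ls -R -h /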
Downloading files
Syntax:
hadoop fs -get <HDFS file path> <local path>
hadoop fs -copyToLocal <HDFS file path> <local path>
Examples:
# Download /user/text.txt to the local /usr/local/ directory. -copyToLocal: copy to the local filesystem
[root@master ~]# hadoop fs -copyToLocal /user/text.txt /usr/local/
[root@master ~]# cd /usr/local/
[root@master local]# ll
-rw-r--r--. 1 root root 0 May 22 09:51 text.txt
[root@master ~]# hdfs dfs -copyToLocal /user/text.txt /usr/local/
[root@master ~]# cd /usr/local/
[root@master local]# ll
-rw-r--r--. 1 root root 0 May 22 09:51 text.txt
# Download /user/resource/text.txt to the local /usr/local directory
[root@master local]# hadoop fs -get /user/resource/text.txt /usr/local
[root@master local]# ll
-rw-r--r--. 1 root root 0 May 22 09:54 text.txt
# Download /user/resource/text.txt to the local /usr/local directory
[root@master local]# hdfs dfs -get /user/resource/text.txt /usr/local
[root@master local]# ll
-rw-r--r--. 1 root root 0 May 22 09:54 text.txt
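When a directory holds many small files, -getmerge concatenates them into a single local file instead of downloading each one separately. A sketch using the /user/resource directory from the examples above (merged.txt is just an illustrative output name):
# Merge every file under /user/resource into one local file
[root@master local]# hadoop fs -getmerge /user/resource /usr/local/merged.txt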
Viewing file contents
Syntax:
hdfs dfs -cat <HDFS file path>
hdfs dfs -tail <HDFS file path>
Examples:
[root@master local]# hdfs dfs -cat /input/text.txt
hello ,hadoop
[root@master local]# hdfs dfs -tail /input/text.txt
hello ,hadoop
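-cat streams the entire file to standard output, so for larger files it is usually piped into local tools; -tail on its own prints only the last kilobyte of the file. For example:
# View only the first 10 lines of the file locally
[root@master local]# hdfs dfs -cat /input/text.txt | head -n 10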
Deleting files
Syntax:
hadoop fs -rm <HDFS file path>
Examples:
# Delete a directory
[root@master ~]# hdfs dfs -mkdir /user/resource
[root@master ~]# hdfs dfs -rmdir /user/resource
# -rm needs -r to remove a directory together with its contents
[root@master ~]# hadoop fs -rm -r -f /user/resource
# Delete a file
[root@master ~]# hdfs dfs -rm /user/resource/text.txt
[root@master ~]# hadoop fs -rm -r /user/resource/text.txt
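When the HDFS trash feature (fs.trash.interval) is enabled, -rm moves deleted paths into the user's trash directory; -skipTrash deletes them immediately instead. For example:
# Permanently delete the directory and its contents, bypassing the trash
[root@master ~]# hadoop fs -rm -r -skipTrash /user/resource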
Renaming files
Syntax:
hadoop fs -mv <source HDFS path> <destination HDFS path>
Examples:
[root@master ~]# hadoop fs -mv /input/text.txt /input/demo.txt
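-mv also moves files between HDFS directories (the destination directory must already exist), so renaming is simply a move within the same directory. For example, using the demo.txt produced above:
# Move demo.txt from /input into /user
[root@master ~]# hadoop fs -mv /input/demo.txt /user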
Viewing basic cluster information
[root@master ~]# hdfs fsck /
Connecting to namenode via http://192.168.184.130:50070/fsck?ugi=root&path=%2F
FSCK started by root (auth:SIMPLE) from /192.168.184.130 for path / at Tue May 23 10:42:27 CST 2023
/input/text.txt: Under replicated BP-399935676-192.168.184.130-1684307575827:blk_1073741825_1001. Target Replicas is 3 but found 1 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s).
Status: HEALTHY
Number of data-nodes: 1
Number of racks: 1
Total dirs: 6
Total symlinks: 0
Replicated Blocks:
Total size: 13 B
Total files: 3
Total blocks (validated): 1 (avg. block size 13 B)
Minimally replicated blocks: 1 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 1 (100.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 1.0
Missing blocks: 0
Corrupt blocks: 0
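fsck can also report per-file detail: -files lists each file, -blocks its blocks, and -locations the DataNodes holding each replica. For example:
# Show file, block and replica-location details for everything under /input
[root@master ~]# hdfs fsck /input -files -blocks -locations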