1. The error

[root@hadoop test]# hadoop jar hadoop.jar com.hadoop.hdfs.CopyToHDFS
14/01/26 10:20:00 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/test/01/hello.txt could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
        at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
14/01/26 10:20:00 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
14/01/26 10:20:00 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/hadoop/test/01/hello.txt" - Aborting...
Exception in thread "main" org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/test/01/hello.txt could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
        at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
14/01/26 10:20:00 ERROR hdfs.DFSClient: Failed to close file /user/hadoop/test/01/hello.txt
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/test/01/hello.txt could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
        at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        ient.java:2989)



2. Root cause and solution:

Cause:

Formatting Hadoop multiple times left the version information inconsistent between the NameNode and the DataNodes; bringing it back into a consistent state resolves the problem.

Solution:

1. First stop all services: stop-all.sh

2. Re-format the NameNode: hadoop namenode -format

3. Restart all services: start-all.sh

4. Normal operations can now resume.
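The steps above can be collected into one small script. This is a minimal sketch, assuming a Hadoop 1.x-style installation where stop-all.sh, start-all.sh, and the hadoop command are on the PATH; note that "hadoop namenode -format" wipes the NameNode's metadata (all existing HDFS file information is lost), so it is only appropriate on a test cluster:

```shell
#!/usr/bin/env bash
# Sketch of the recovery steps above (Hadoop 1.x-style commands assumed).
# WARNING: "hadoop namenode -format" erases the NameNode's metadata,
# i.e. all existing HDFS file system information is lost.
reset_hdfs() {
    stop-all.sh                # 1. stop all Hadoop daemons
    hadoop namenode -format    # 2. re-format the NameNode (destructive!)
    start-all.sh               # 3. restart all daemons
    hadoop fs -ls /            # 4. sanity check: HDFS answers again
}
```

The function is only a wrapper so the sequence can be invoked in one step; run it with `reset_hdfs` on the NameNode host.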

Note:

The format step itself may very well fail here; the causes and fixes for a failed format are not covered in detail in this post. Readers who are unsure can refer to this article: http://blog.csdn.net/yangkai_hudong/article/details/18731395


3. Other related solutions found online

1. Symptom: when Flume writes files to Hadoop HDFS, flume.log reports the error "be replicated to 0 nodes, instead of 1"

2012-12-18 13:47:24,673 WARN hdfs.BucketWriter: Caught IOException writing to HDFSWriter (java.io.IOException: File /logdata/20121218/bj4aweb04/8001_4A_ACA/8001_4A_ACA.1355799411582.tmp could only be replicated to 0 nodes, instead of 1

        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)

        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)

2. Check the status of the relevant processes: the DataNode did not start properly

 [hadoop@dtydb6 hadoop]$ jps

7427 Jps

7253 TaskTracker

6576 NameNode

6925 SecondaryNameNode

7079 JobTracker


3. Check the DataNode log

Incompatible namespaceIDs

java.io.IOException: Incompatible namespaceIDs in /hadoop/logdata: namenode namespaceID = 13513664; datanode namespaceID = 525507667
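A quick way to spot this mismatch is to grep the DataNode log for the namespaceID values. The sketch below uses a mock log file so it runs anywhere; on a real node you would instead point the grep at the DataNode log under your Hadoop logs directory (the exact path depends on the installation):

```shell
#!/usr/bin/env bash
# Pull the two namespaceIDs out of the DataNode log to compare them.
# A mock log file stands in for the real DataNode log.
log=$(mktemp)
cat > "$log" <<'EOF'
2012-12-18 13:45:01 INFO  datanode.DataNode: STARTUP_MSG
2012-12-18 13:45:02 ERROR datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /hadoop/logdata: namenode namespaceID = 13513664; datanode namespaceID = 525507667
EOF
# -o prints only the matching fragments, so the mismatch is easy to see
ids=$(grep -o 'namespaceID = [0-9]*' "$log")
echo "$ids"
rm -f "$log"
```

Two different numbers in the output confirm the NameNode and DataNode disagree; identical numbers would point to a different root cause.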

4. The error message pinpoints the problem: the namespaceIDs are inconsistent

Per the referenced document's solution, the cause is that formatting Hadoop multiple times left the version information inconsistent; making it consistent again resolves the problem.

The fix is simple; there are two options:

1) Delete the data on all DataNodes and rebuild them (tedious, but it is your call)

2) Log on to the DataNode and change the namespaceID in {dfs.data.dir}/current/VERSION to the latest value (the NameNode's)

[hadoop@dtydb6 current]$ cat VERSION

#Fri Dec 14 09:37:22 CST 2012

namespaceID=525507667

storageID=DS-120876865-10.4.124.236-50010-1354772633249

cTime=0

storageType=DATA_NODE

layoutVersion=-32
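Option 2) amounts to a one-line edit of the VERSION file shown above. A minimal sketch with sed follows; it operates on a mock copy of that file so it is safe to run anywhere, and NEW_ID stands in for the NameNode's namespaceID. On a real node the file lives at {dfs.data.dir}/current/VERSION and the DataNode should be stopped before editing:

```shell
#!/usr/bin/env bash
# Rewrite the DataNode's namespaceID to match the NameNode's.
# This edits a mock copy of the VERSION file shown above; on a real
# DataNode the target is {dfs.data.dir}/current/VERSION, edited while
# the DataNode daemon is stopped.
NEW_ID=13513664                     # the NameNode's namespaceID (from the log above)
version=$(mktemp)
cat > "$version" <<'EOF'
#Fri Dec 14 09:37:22 CST 2012
namespaceID=525507667
storageID=DS-120876865-10.4.124.236-50010-1354772633249
cTime=0
storageType=DATA_NODE
layoutVersion=-32
EOF
# Replace only the namespaceID line, leaving every other property intact
sed -i "s/^namespaceID=.*/namespaceID=${NEW_ID}/" "$version"
grep '^namespaceID=' "$version"    # prints: namespaceID=13513664
```

Note that GNU sed's in-place flag is assumed here; on BSD/macOS sed the equivalent is `sed -i ''`.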

5. Restart Hadoop; the DataNode now starts successfully

[hadoop@dtydb6 current]$ jps

8770 JobTracker

8436 DataNode

8266 NameNode

8614 SecondaryNameNode