Hadoop: uploading a file to HDFS fails with org.apache.hadoop.ipc.RemoteException(java.io.IOException)

After setting up the Hadoop cluster, uploading a file to HDFS with the hdfs command failed:
hdfs dfs -put jn_gaj_lgxx.csv /input
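(A quick aside, assuming /input is meant to be a directory: it helps to create it before the first upload, otherwise -put writes the data to a plain file named /input.)

hdfs dfs -mkdir -p /input                 # create the target directory if it is missing
hdfs dfs -put jn_gaj_lgxx.csv /input      # then upload the CSV into it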

The full error output is as follows:

[root@master local]# hdfs dfs -put jn_gaj_lgxx.csv /input
19/08/21 15:55:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
19/08/21 15:55:42 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /input/jn_gaj_lgxx.csv._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1814)
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:265)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2563)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:846)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:510)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1889)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606)

        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1493)
        at org.apache.hadoop.ipc.Client.call(Client.java:1439)
        at org.apache.hadoop.ipc.Client.call(Client.java:1349)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
        at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:444)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
        at com.sun.proxy.$Proxy11.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1845)
        at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1645)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:710)
put: File /input/jn_gaj_lgxx.csv._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
The error suggests that the DataNodes never started properly, yet jps on every node showed the expected processes. Checking the cluster's capacity report revealed the real problem:
hadoop dfsadmin -report

[Screenshot: hadoop dfsadmin -report output showing no live DataNodes]
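These checks can be reproduced from the shell. A minimal sketch, assuming the hostnames master/slave1/slave2 used in this cluster and a NameNode RPC port of 9000 (the port is an assumption; the real value is whatever fs.defaultFS specifies in core-site.xml):

jps                                                 # on every node: are the NameNode/DataNode processes up?
hdfs dfsadmin -report | grep -i "live datanodes"    # on the master: how many DataNodes does the NameNode actually see?
telnet master 9000                                  # from a slave: is the NameNode RPC port reachable at all?

If jps looks healthy but the report shows 0 live DataNodes, the DataNodes are running but cannot register with the NameNode, which is exactly what a firewall between the nodes causes.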

The fix is to turn off the firewall on all nodes (a script that applies this across the cluster is sketched after the list):
  1. Check the firewall status: systemctl status firewalld
  2. Stop the firewall: systemctl stop firewalld (takes effect immediately, but only until the next reboot)
  3. Disable it permanently: systemctl disable firewalld
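A small sketch to run these steps on every node in one pass, assuming passwordless SSH from the master and the hostnames slave1/slave2 shown in the report below:

#!/usr/bin/env bash
# Stop firewalld now and keep it off after reboot, on the master and both slaves.
for host in master slave1 slave2; do
    echo "== $host =="
    ssh "$host" "systemctl stop firewalld; systemctl disable firewalld; systemctl is-active firewalld"
done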
Running the capacity report again now shows healthy output:
Configured Capacity: 36477861888 (33.97 GB)
Present Capacity: 26147221672 (24.35 GB)
DFS Remaining: 26146455552 (24.35 GB)
DFS Used: 766120 (748.16 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Pending deletion blocks: 0

-------------------------------------------------
Live datanodes (2):

Name: 172.23.217.104:50010 (slave1)
Hostname: slave1
Decommission Status : Normal
Configured Capacity: 18238930944 (16.99 GB)
DFS Used: 383060 (374.08 KB)
Non DFS Used: 5165164460 (4.81 GB)
DFS Remaining: 13073383424 (12.18 GB)
DFS Used%: 0.00%
DFS Remaining%: 71.68%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Wed Aug 21 16:03:54 CST 2019
Last Block Report: Wed Aug 21 15:56:36 CST 2019


Name: 172.23.217.105:50010 (slave2)
Hostname: slave2
Decommission Status : Normal
Configured Capacity: 18238930944 (16.99 GB)
DFS Used: 383060 (374.08 KB)
Non DFS Used: 5165475756 (4.81 GB)
DFS Remaining: 13073072128 (12.18 GB)
DFS Used%: 0.00%
DFS Remaining%: 71.68%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Wed Aug 21 16:03:54 CST 2019
Last Block Report: Wed Aug 21 15:56:36 CST 2019
Uploading the file to HDFS again with the same hdfs command now succeeds.
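For completeness, a quick way to confirm that the upload really landed, using the same file and path as above:

hdfs dfs -put jn_gaj_lgxx.csv /input      # re-run the upload
hdfs dfs -ls /input                       # the CSV should now be listed
hdfs fsck /input/jn_gaj_lgxx.csv          # optional: check that the file's blocks and replication are healthy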