Writing to HDFS from Java, getting “could only be replicated to 0 nodes instead of minReplication”

时光取名叫无心 2021-02-01 18:59

I’ve downloaded and started up Cloudera's Hadoop Demo VM for CDH4 (running Hadoop 2.0.0). I’m trying to write a Java program that will run from my Windows 7 machine (The same

11 answers
  •  借酒劲吻你
    2021-02-01 19:19

    I had the same problem.
    In my case, the key to the problem was the following error message:
    There are 1 datanode(s) running and 1 node(s) are excluded in this operation.

    It means that your HDFS client couldn't connect to your datanode on port 50010. Because you connected to the HDFS namenode, you could get the datanode's status, but your HDFS client then failed to connect to the datanode itself.

    (In HDFS, a namenode manages the file directories and the datanodes. When an HDFS client connects to the namenode, it looks up the target file path and the addresses of the datanodes that hold the data. Then the client communicates directly with those datanodes. You can check those datanode addresses with netstat, because the client will try to reach the datanodes using the addresses reported by the namenode.)
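    To see whether this is your situation, you can check from the client side whether the datanode's data-transfer port is reachable at all. A minimal, self-contained sketch using plain `java.net.Socket` (the host below is a placeholder; substitute the datanode address the namenode reports):

    ```java
    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class DatanodeProbe {
        // Returns true if a TCP connection to host:port succeeds within timeoutMs.
        public static boolean isReachable(String host, int port, int timeoutMs) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, port), timeoutMs);
                return true;
            } catch (IOException e) {
                // Connection refused, timed out, or host unreachable.
                return false;
            }
        }

        public static void main(String[] args) {
            // Placeholder address: replace with the datanode host/IP from the namenode.
            String datanodeHost = "192.168.56.101";
            int dataTransferPort = 50010; // dfs.datanode.address default port in Hadoop 2.x
            boolean ok = isReachable(datanodeHost, dataTransferPort, 3000);
            System.out.println(datanodeHost + ":" + dataTransferPort
                    + (ok ? " is reachable" : " is NOT reachable"));
        }
    }
    ```

    If the probe fails while the namenode itself is reachable, the problem is the datanode connection (firewall, or an address the client cannot route to), not the Java code doing the write.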

    I solved that problem by:

    1. opening port 50010 (dfs.datanode.address) in the firewall,
    2. setting the client property "dfs.client.use.datanode.hostname" to "true",
    3. adding the datanode's hostname to the hosts file on my client PC.
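    For step 2, one way to set the property (assuming the client picks up an hdfs-site.xml from its classpath) is a config fragment like:

    ```
    <!-- hdfs-site.xml on the client: connect to datanodes by hostname
         instead of the (possibly VM-internal) IP the namenode reports -->
    <property>
      <name>dfs.client.use.datanode.hostname</name>
      <value>true</value>
    </property>
    ```

    Equivalently, a Java client can call conf.set("dfs.client.use.datanode.hostname", "true") on its org.apache.hadoop.conf.Configuration before opening the FileSystem. For step 3, on a Windows client the hosts file is C:\Windows\System32\drivers\etc\hosts; the entry maps the VM's IP to the datanode's hostname (both values depend on your setup).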

    I'm sorry for my poor English.
