There are 0 datanode(s) running and no node(s) are excluded in this operation

清歌不尽 2020-11-27 14:54

I have set up a multi-node Hadoop cluster. The NameNode and Secondary NameNode run on the same machine, and the cluster has only one DataNode. All the nodes are configured o

14 answers
  • 2020-11-27 15:18

    The value of the fs.default.name property in core-site.xml, on both the master and the slave machines, must point to the master machine. So it will be something like this:

    <property>
        <name>fs.default.name</name>
        <value>hdfs://master:9000</value>
    </property>
    

    where master is the hostname in the /etc/hosts file that points to the master node.
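
    For illustration, the matching /etc/hosts entries on both machines might look like this (the IP addresses below are made-up assumptions; use your own):

    # /etc/hosts on both master and slave
    192.168.1.10    master
    192.168.1.11    slave1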

  • 2020-11-27 15:19

    It is probably because the cluster IDs of the datanodes and the namenode do not match. The cluster ID can be seen in the VERSION file found in both the namenode and datanode directories.
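
    As a quick check (a sketch, assuming the data directories are /hadoop2/namenode and /hadoop2/datanode; adjust to your dfs.namenode.name.dir and dfs.datanode.data.dir settings), compare the two IDs directly:

    # the clusterID values must be identical
    grep clusterID /hadoop2/namenode/current/VERSION
    grep clusterID /hadoop2/datanode/current/VERSION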

    This happens when you format your namenode and then restart the cluster, but the datanodes still try to connect using the previous clusterID. To connect successfully, you need the correct IP address and a matching cluster ID on the nodes.

    So try reformatting the namenode and datanodes or just configure the datanodes and namenode on newly created folders.

    That should solve your problem.

    Deleting the files from the datanode's current folder will also remove the old VERSION file; a new VERSION file is requested when the datanode reconnects to the namenode.

    For example, if your datanode directory in the configuration is /hadoop2/datanode:

    $ rm -rvf /hadoop2/datanode/*
    

    And then restart the services. If you do reformat your namenode, do it before this step. Each time you reformat your namenode, it gets a new, randomly generated ID that will not match the old ID in your datanodes.

    So every time, follow this sequence:

    If you format the namenode, then delete the contents of the datanode directory (or configure the datanode on a newly created directory), then start your namenode and the datanodes. A shell sketch of this sequence is given below.
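
    A minimal shell sketch of that sequence, assuming /hadoop2/datanode is the configured datanode directory (substitute your own dfs.datanode.data.dir value):

    # stop HDFS before touching any data directories
    stop-dfs.sh

    # reformat the namenode (this generates a new, random clusterID)
    hdfs namenode -format

    # clear the datanode directory so it accepts the new clusterID
    rm -rvf /hadoop2/datanode/*

    # start HDFS again; the datanode registers with the new clusterID
    start-dfs.sh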

  • 2020-11-27 15:21

    I had the same error. I did not have permission to the HDFS file system, so I gave permission to my user:

    chmod 777 /usr/local/hadoop_store/hdfs/namenode
    chmod 777 /usr/local/hadoop_store/hdfs/datanode
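
    Note that chmod 777 opens the directories to every user. A more restrictive sketch, assuming the Hadoop daemons run as hduser in group hadoop (substitute your own user and group):

    sudo chown -R hduser:hadoop /usr/local/hadoop_store/hdfs/namenode
    sudo chown -R hduser:hadoop /usr/local/hadoop_store/hdfs/datanode
    sudo chmod -R 750 /usr/local/hadoop_store/hdfs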
    
  • 2020-11-27 15:23

    Two things worked for me:

    STEP 1: stop Hadoop and clean temp files from hduser

    sudo rm -R /tmp/*
    

    Also, you may need to delete and recreate /app/hadoop/tmp (mostly when I changed the Hadoop version from 2.2.0 to 2.7.0):

    sudo rm -r /app/hadoop/tmp
    sudo mkdir -p /app/hadoop/tmp
    sudo chown hduser:hadoop /app/hadoop/tmp
    sudo chmod 750 /app/hadoop/tmp
    

    STEP 2: format the namenode

    hdfs namenode -format
    

    Now, I can see the DataNode:

    hduser@prayagupd:~$ jps
    19135 NameNode
    20497 Jps
    19477 DataNode
    20447 NodeManager
    19902 SecondaryNameNode
    20106 ResourceManager
    
  • 2020-11-27 15:24

    Have you tried clearing the /tmp folder?

    Before cleanup, the datanode did not come up:

    86528 SecondaryNameNode
    87719 Jps
    86198 NameNode
    78968 RunJar
    79515 RunJar
    63964 RunNiFi
    63981 NiFi
    

    After cleanup:

    sudo rm -rf /tmp/*
    

    It worked for me:

    89200 Jps
    88859 DataNode
    
  • 2020-11-27 15:25

    In my situation, the firewalld service was running with its default configuration, which does not allow communication between the nodes. My Hadoop cluster was a test cluster, so I stopped the service. If your servers are in production, you should allow the Hadoop ports on firewalld (see the sketch after these commands) instead of:

    service firewalld stop
    chkconfig firewalld off
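
    A rough sketch of opening the HDFS ports on firewalld instead (9000 is the NameNode IPC port from fs.default.name; 50010 and 50075 are the classic DataNode data-transfer and HTTP ports in Hadoop 2.x; the exact list depends on your version and configuration):

    firewall-cmd --permanent --add-port=9000/tcp
    firewall-cmd --permanent --add-port=50010/tcp
    firewall-cmd --permanent --add-port=50075/tcp
    firewall-cmd --reload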
    