There are 0 datanode(s) running and no node(s) are excluded in this operation

清歌不尽 2020-11-27 14:54

I have set up a multi-node Hadoop cluster. The NameNode and Secondary NameNode run on the same machine, and the cluster has only one DataNode. All the nodes are configured o…

14 Answers
  • 2020-11-27 15:03

    In my situation, the necessary properties were missing from hdfs-site.xml (Hadoop 3.0.0, installed via Homebrew on macOS). (The file:/// is not a typo.)

    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///usr/local/Cellar/hadoop/hdfs/namenode</value>
    </property>
    
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///usr/local/Cellar/hadoop/hdfs/datanode</value>
    </property>
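
    If those directories don't exist yet, they have to be created, and the NameNode usually needs to be formatted before the DataNode can register. A minimal sketch, assuming the paths from the config above and that wiping HDFS metadata is acceptable on a fresh cluster:

    # create the local storage directories referenced in hdfs-site.xml
    mkdir -p /usr/local/Cellar/hadoop/hdfs/namenode
    mkdir -p /usr/local/Cellar/hadoop/hdfs/datanode

    # format the NameNode (only on an empty cluster -- this erases HDFS metadata)
    hdfs namenode -format

    # restart HDFS so the DataNode picks up the new storage directory
    stop-dfs.sh && start-dfs.sh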
    
  • 2020-11-27 15:04

    I got the same error. In my case it was due to a bad configuration of the hosts files: first I modified the hosts file of the master node, adding the IPs of the slaves, and then on each DataNode I modified the hosts file to include the IPs of the NameNode and of the other slaves.

    Something like this:

    adilazh1@master:~$ sudo cat /etc/hosts
    [sudo] contraseña para adilazh1:
    127.0.0.1       localhost
    192.168.56.100  master
    
    # The following lines are desirable for IPv6 capable hosts
    ::1     localhost ip6-localhost ip6-loopback
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
    192.168.56.101  slave1
    192.168.56.102  slave2
    

    Example of slave1's hosts file:

    127.0.0.1       localhost
    192.168.56.101  slave1
    
    # The following lines are desirable for IPv6 capable hosts
    ::1     localhost ip6-localhost ip6-loopback
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
    192.168.56.100  master
    192.168.56.102  slave2
    
  • I had the same problem after an improper shutdown of the node. I also checked in the web UI that the DataNode was not listed.

    It worked again after deleting the files from the datanode folder and restarting the services.

    stop-all.sh

    rm -rf /usr/local/hadoop_store/hdfs/datanode/*

    start-all.sh
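
    After the restart it is worth confirming that the DataNode actually re-registered with the NameNode. A quick check, assuming the standard Hadoop tools are on the PATH:

    # the DataNode process should appear in the list of Java daemons
    jps

    # ask the NameNode how many live datanodes it currently sees
    hdfs dfsadmin -report | grep -i "live datanodes"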

  • 2020-11-27 15:09

    @mustafacanturk's solution of disabling the firewall worked for me. I thought the DataNodes had started because they showed up when running jps, but when trying to upload files I kept getting the message "0 nodes running". In fact, not even the web interface (http://nn1:50070) was reachable, because of the firewall. I had disabled the firewall when installing Hadoop, but for some reason it was up again. Nevertheless, sometimes cleaning or recreating the temp folders (hadoop.tmp.dir), or even the dfs.data.dir and dfs.namenode.name.dir folders, and reformatting the NameNode is the solution.
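
    For reference, this is roughly what the check looks like; a sketch assuming a systemd-based distro with firewalld and the Hadoop 2.x NameNode web UI port 50070 (on Hadoop 3.x it is 9870):

    # is the firewall actually running?
    systemctl status firewalld

    # can the NameNode web UI be reached?
    curl -sI http://nn1:50070 | head -n 1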

  • 2020-11-27 15:12

    Maybe the firewall service hasn't been stopped.
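
    A quick way to check and stop it on every node; a sketch assuming either firewalld (RHEL/CentOS) or ufw (Ubuntu):

    # firewalld
    sudo systemctl stop firewalld
    sudo systemctl disable firewalld

    # ufw
    sudo ufw disable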

  • 2020-11-27 15:13

    1) First stop all services with the command stop-all.sh

    2) Delete all files inside the datanode directory: rm -rf /usr/local/hadoop_store/hdfs/datanode/*

    3) Then start all services with the command start-all.sh

    You can check whether all of your services are running with the jps command.
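
    As a rough idea of what to look for (a sketch assuming both HDFS and YARN were started by start-all.sh; process IDs will differ):

    # list the running Hadoop Java daemons
    jps

    # on a combined master/worker node one would expect to see:
    #   NameNode, SecondaryNameNode, DataNode, ResourceManager, NodeManager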

    Hope this works!
