Hadoop: …be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation

走了就别回头了 2020-12-03 02:41

I'm getting the following error when attempting to write to HDFS as part of my multi-threaded application:

    could only be replicated to 0 nodes instead of minReplication (=1)
10 answers
  • 2020-12-03 03:28

    In my case, the storage policy of the output path was set to COLD.

    How to check the storage policy of your folder:

    hdfs storagepolicies -getStoragePolicy -path my_path
    

    In my case it returned

    The storage policy of my_path
    BlockStoragePolicy{COLD:2, storageTypes=[ARCHIVE], creationFallbacks=[], replicationFallbacks=[]}   
    

    I dumped the data elsewhere (to HOT storage) and the issue went away.
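
    If you would rather keep writing to the same path, a rough alternative is to switch the policy to HOT and let HDFS migrate the existing blocks, using the standard storagepolicies and mover tools (my_path as above):

    hdfs storagepolicies -setStoragePolicy -path my_path -policy HOT
    hdfs mover -p my_path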

  • 2020-12-03 03:29

    Check whether the jps command on the machines that run the datanodes shows a DataNode process. If the DataNode processes are running and you still get this error, it means they could not connect to the namenode, and hence the namenode thinks there are no datanodes in the Hadoop system.
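
    For reference, on a machine that hosts a datanode, jps should list a DataNode process; the PIDs below are purely illustrative:

    $ jps
    2351 DataNode
    2764 Jps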

    In such a case, after running start-dfs.sh, run netstat -ntlp on the master node. 9000 is the port number most tutorials tell you to specify in core-site.xml. So if you see a line like this in the output of netstat

    tcp        0      0 127.0.1.1:9000        0.0.0.0:*               LISTEN       4209/java
    

    then you have a problem with the host alias: the namenode is listening on the loopback address, so the worker nodes cannot reach it. I had the same problem, so I'll describe how it was resolved.

    These are the contents of my core-site.xml:

    <configuration>
       <property>
           <name>fs.default.name</name>
           <value>hdfs://vm-sm:9000</value>
       </property>
    </configuration>
    

    So the vm-sm alias on the master machine maps to 127.0.1.1, because of how my /etc/hosts file is set up:

    127.0.0.1       localhost
    127.0.1.1       vm-sm
    192.168.1.1     vm-sm
    192.168.1.2     vm-sw1
    192.168.1.3     vm-sw2
    

    So the namenode on the master ended up bound to 127.0.1.1:9000, while the worker nodes were trying to connect through 192.168.1.1:9000.
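
    To confirm which address an alias resolves to on a given machine, a quick check (assuming getent is available, as it is on most Linux systems) is:

    getent hosts vm-sm
    # on the master this would likely print the 127.0.1.1 entry,
    # since it appears first in /etc/hosts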

    So I had to change the alias used for the master node by the Hadoop system (I just removed the hyphen) in the /etc/hosts file:

    127.0.0.1       localhost
    127.0.1.1       vm-sm
    192.168.1.1     vmsm
    192.168.1.2     vm-sw1
    192.168.1.3     vm-sw2
    

    and reflected the change in core-site.xml, mapred-site.xml, and the slaves file (wherever the old alias of the master occurred).

    After deleting the old HDFS files from the Hadoop data location as well as the tmp folder and restarting all nodes, the issue was solved (a sketch of that cleanup is below).
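
    Roughly, the cleanup looks like the following; the data directories are placeholders, so check dfs.namenode.name.dir and dfs.datanode.data.dir in your hdfs-site.xml, and note that formatting the namenode erases all HDFS metadata:

    stop-dfs.sh
    # placeholder paths -- use the directories from your hdfs-site.xml
    rm -rf /usr/local/hadoop_tmp/hdfs/namenode/* /usr/local/hadoop_tmp/hdfs/datanode/*
    rm -rf /tmp/hadoop-$USER
    hdfs namenode -format
    start-dfs.sh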

    Now, netstat -ntlp after starting DFS returns

    tcp        0      0 192.168.1.1:9000        0.0.0.0:*               LISTEN ...
    ...
    
  • 2020-12-03 03:38

    The namenode may be stuck in safe mode, which blocks writes. You can force it to leave safe mode:

    hdfs dfsadmin -safemode forceExit
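
    To check first whether safe mode is actually the problem, and to try the normal exit before forcing one:

    hdfs dfsadmin -safemode get     # prints "Safe mode is ON" or "Safe mode is OFF"
    hdfs dfsadmin -safemode leave   # the usual way to exit safe mode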
    
  • 2020-12-03 03:38

    I got this error because the DataNode was not running. To resolve this on my VM:

    1. Removed the NameNode/DataNode directories
    2. Re-created the directories
    3. Formatted the namenode with hadoop namenode -format (the datanode does not need to be formatted)
    4. Restarted the service: start-dfs.sh
    5. Now jps shows both the NameNode and DataNode, and the Sqoop job worked successfully (see the check below)
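
    After the restart, besides jps, a quick way to confirm that the datanode has actually registered with the namenode is hdfs dfsadmin -report; the comment below describes what to look for:

    hdfs dfsadmin -report
    # look for a non-zero "Live datanodes" count; if it stays at 0,
    # writes will keep failing with the replication error above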