Data Replication error in Hadoop

清酒与你 2021-01-31 10:22

I am setting up a Hadoop single-node cluster on my machine by following Michael Noll's tutorial and have come across a data replication error:

Here's the full error:

8 Answers
  • 2021-01-31 10:32

    Although solved, I'm adding this for future readers. Cody's advice to inspect how the namenode and datanode start up was useful, and further investigation led me to delete the hadoop-store/dfs directory. Doing this resolved the error for me.
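
    A minimal sketch of that cleanup, assuming a Hadoop 1.x layout and that hadoop-store/dfs holds your dfs name/data directories (an assumption; check dfs.name.dir and dfs.data.dir in your config before deleting anything):

        bin/stop-all.sh                     # stop all Hadoop daemons first
        rm -rf /path/to/hadoop-store/dfs    # ASSUMED path; wipes local HDFS block and metadata storage
        bin/hadoop namenode -format         # reformat HDFS (destroys any existing HDFS data)
        bin/start-all.sh                    # bring the daemons back up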

  • 2021-01-31 10:41

    I had the same problem. I took a look at the datanode logs, and there was a warning saying that dfs.data.dir had incorrect permissions... so I just changed them and everything worked, which is kind of weird.

    Specifically, my "dfs.data.dir" was set to "/home/hadoop/hd_tmp", and the error I got was:

    ...
    ...
    WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid directory in dfs.data.dir: Incorrect permission for /home/hadoop/hd_tmp/dfs/data, expected: rwxr-xr-x, while actual: rwxrwxr-x
    ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: All directories in dfs.data.dir are invalid.
    ...
    ...
    

    So I simply executed these commands:

    • I stopped all the daemons with "bin/stop-all.sh".
    • I changed the permissions of the directory with "chmod -R 755 /home/hadoop/hd_tmp".
    • I formatted the namenode again with "bin/hadoop namenode -format".
    • I restarted the daemons with "bin/start-all.sh".
    • And voilà, the datanode was up and running! (I checked with the "jps" command, which showed a process named DataNode.)

    And then everything worked fine.
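
    For reference, the same sequence as one shell session (a sketch assuming the Hadoop 1.x bin/ scripts and the dfs.data.dir above):

        bin/stop-all.sh                     # stop all Hadoop daemons
        chmod -R 755 /home/hadoop/hd_tmp    # give the datanode the rwxr-xr-x permissions it expects
        bin/hadoop namenode -format         # reformat HDFS (destroys existing HDFS data)
        bin/start-all.sh                    # restart the daemons
        jps                                 # DataNode should now appear in the process list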

  • 2021-01-31 10:42

    In my case, I had wrongly set the same destination for both dfs.name.dir and dfs.data.dir. The correct format is:

     <property>
       <name>dfs.name.dir</name>
       <value>/path/to/name</value>
     </property>

     <property>
       <name>dfs.data.dir</name>
       <value>/path/to/data</value>
     </property>
    
  • 2021-01-31 10:42

    I tried each of the above solutions and none worked. Then I removed the extra properties from hdfs-site.xml and the issue was gone. Hadoop needs to improve its error messages.

  • 2021-01-31 10:45

    Look at your namenode (probably http://localhost:50070) and see how many datanodes it says you have.

    If it is 0, then either your datanode isn't running or it isn't configured to connect to the namenode.

    If it is 1, check how much free space it reports in the DFS. It may be that the datanode has nowhere it can write data (the data dir doesn't exist, or doesn't have write permissions).
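
    The same information is available from the command line. A quick check, assuming the Hadoop 1.x CLI (on Hadoop 2+ the equivalent is "hdfs dfsadmin -report"):

        bin/hadoop dfsadmin -report    # lists live datanodes and configured/remaining DFS capacity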

  • 2021-01-31 10:46

    The solution that worked for me was to run the namenode and datanode one by one, rather than together with bin/start-all.sh. With this approach, any error is clearly visible if there is a problem bringing the datanodes onto the network. Also, many posts on Stack Overflow suggest that the namenode requires some time to start up, so it should be given that time before the datanodes are started. In my case I also had mismatched IDs between the namenode and datanodes, and I had to change the datanode's ID to match the namenode's.

    The step-by-step procedure is as follows, with a runnable sketch after the list:

    1. Start the namenode: bin/hadoop namenode. Check for errors, if any.
    2. Start the datanode: bin/hadoop datanode. Check for errors, if any.
    3. Now start the tasktracker and jobtracker with "bin/start-mapred.sh".
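
    A sketch of those steps in shell form, assuming a Hadoop 1.x layout (the VERSION paths below are assumptions based on typical dfs.name.dir/dfs.data.dir settings; substitute your own):

        # Terminal 1: run the namenode in the foreground and watch for errors
        bin/hadoop namenode

        # Terminal 2: once the namenode is up, start the datanode
        bin/hadoop datanode

        # If the datanode log reports incompatible namespaceIDs, make the
        # datanode's ID match the namenode's (ASSUMED paths; use your dfs dirs):
        grep namespaceID /path/to/name/current/VERSION
        vi /path/to/data/current/VERSION    # set namespaceID to the namenode's value

        # Finally, start the jobtracker and tasktrackers
        bin/start-mapred.sh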