I am setting up a Hadoop single-node cluster on my machine by following Michael Noll's tutorial and have run into a data replication error:
Here's the full error:
Although this is solved, I'm adding this for future readers. Cody's advice of inspecting the startup of the namenode and datanode was useful, and further investigation led me to delete the hadoop-store/dfs directory. Doing this solved the error for me.
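For reference, a sketch of that cleanup; the hadoop-store path is an assumption (substitute wherever your dfs.name.dir/dfs.data.dir actually live), and note that deleting the dfs directory wipes any existing HDFS data, so the namenode must be reformatted afterwards:

bin/stop-all.sh
rm -rf ~/hadoop-store/dfs    # path is an assumption; use your actual hadoop-store location
bin/hadoop namenode -format  # required after deleting the dfs directory; erases HDFS data
bin/start-all.sh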
I had the same problem. I took a look at the datanode logs, and there was a warning saying that dfs.data.dir had incorrect permissions... so I just changed them and everything worked, which is kind of weird.
Specifically, my "dfs.data.dir" was set to "/home/hadoop/hd_tmp", and the error I got was:
...
...
WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid directory in dfs.data.dir: Incorrect permission for /home/hadoop/hd_tmp/dfs/data, expected: rwxr-xr-x, while actual: rwxrwxr-x
ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: All directories in dfs.data.dir are invalid.
...
...
So I simply executed these commands:
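Presumably, given the expected rwxr-xr-x (755) versus the actual rwxrwxr-x (775) in the warning, the fix amounts to dropping group write on the data directory and restarting:

chmod 755 /home/hadoop/hd_tmp/dfs/data   # 775 -> 755, matching the expected rwxr-xr-x
bin/start-all.sh                         # restart so the datanode picks up the change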
And then everything worked fine.
In my case, I mistakenly set a single destination for both dfs.name.dir and dfs.data.dir. The correct format is:
<property>
  <name>dfs.name.dir</name>
  <value>/path/to/name</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/path/to/data</value>
</property>
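Be aware that after repointing these directories you generally have to reformat the namenode (bin/hadoop namenode -format), which erases any existing HDFS data. On newer Hadoop releases the equivalent properties are dfs.namenode.name.dir and dfs.datanode.data.dir.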
I tried each of the above solutions and none worked. In my case, I removed the extra properties from hdfs-site.xml and then the issue was gone. Hadoop needs to improve its error messages.
Look at your namenode (probably http://localhost:50070) and see how many datanodes it says you have.
If it is 0, then either your datanode isn't running or it isn't configured to connect to the namenode.
If it is 1, check to see how much free space it says there is in the DFS. It may be that the data node doesn't have anywhere it can write data to (data dir doesn't exist, or doesn't have write permissions).
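If you prefer the command line, the same information is available from the dfsadmin report, which lists live datanodes and remaining DFS capacity:

bin/hadoop dfsadmin -report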
The solution that worked for me was to run the namenode and datanode one by one rather than together via bin/start-all.sh. With this approach, any error is clearly visible if you are having trouble getting the datanodes onto the network, and many posts on Stack Overflow suggest that the namenode needs some time to start up, so give it that time before starting the datanodes. In my case I also had a problem with mismatched IDs between the namenode and datanodes, and I had to change the datanode's ID to match the namenode's (see the note after the steps below).
The step-by-step procedure is:

1. Run bin/hadoop namenode. Check for errors, if any.
2. Run bin/hadoop datanode. Check for errors, if any.
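On the mismatched IDs mentioned above: the usual fix for "Incompatible namespaceIDs" is to edit the datanode's VERSION file so its namespaceID matches the namenode's. The paths below are the dfs.name.dir and dfs.data.dir placeholders from the earlier answer, so substitute your own:

cat /path/to/name/current/VERSION   # note the namespaceID value
vi /path/to/data/current/VERSION    # set namespaceID to that same value

Then restart the datanode.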