Environment: Ubuntu 14.04, Hadoop 2.6
After I run start-all.sh
and then jps
, DataNode
is not listed in the terminal output. The DataNode log shows:
FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain java.io.IOException: All directories in dfs.datanode.data.dir are invalid: "/usr/local/hadoop_store/hdfs/datanode/"
This error may be due to wrong permissions on the /usr/local/hadoop_store/hdfs/datanode/
folder.
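Before changing anything, you can verify whether permissions are actually the problem. A minimal sketch (the `check_dir` helper is hypothetical, not part of Hadoop; the path comes from the error message above) — run it as the user that starts the daemons:

```shell
# Hypothetical helper: is the directory present and readable,
# writable, and traversable (rwx) by the current user?
check_dir() {
  if [ -d "$1" ] && [ -r "$1" ] && [ -w "$1" ] && [ -x "$1" ]; then
    echo "ok: $1"
  else
    echo "bad: $1"
  fi
}

# Path taken from the DataNode error message above
check_dir /usr/local/hadoop_store/hdfs/datanode
```

If this prints `bad:`, fix the folder as described in the options below.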
FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode. org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /usr/local/hadoop_store/hdfs/namenode is in an inconsistent state: storage directory does not exist or is not accessible.
This error may be due to wrong permissions on the /usr/local/hadoop_store/hdfs/namenode
folder, or the folder may not exist. To fix the problem, try the following options:
OPTION I:
If the folder /usr/local/hadoop_store/hdfs
does not exist, create it and set its ownership and permissions as follows:
sudo mkdir -p /usr/local/hadoop_store/hdfs
sudo chown -R hadoopuser:hadoopgroup /usr/local/hadoop_store/hdfs
sudo chmod -R 755 /usr/local/hadoop_store/hdfs
Replace hadoopuser
and hadoopgroup
with your Hadoop username and group name, respectively. Now try to start the hadoop processes. If the problem still persists, try Option II.
OPTION II:
Remove the contents of the /usr/local/hadoop_store/hdfs
folder:
sudo rm -r /usr/local/hadoop_store/hdfs/*
Then fix the folder permissions:
sudo chmod -R 755 /usr/local/hadoop_store/hdfs
Since this wipes the NameNode's metadata, format the NameNode before starting:
$HADOOP_HOME/bin/hdfs namenode -format
Now, start the hadoop processes. It should work.
NOTE: If the error still persists, post the new logs.
UPDATE:
In case you haven't created the hadoop user and group, do it as follows:
sudo addgroup hadoop
sudo adduser --ingroup hadoop hadoop
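To double-check that the user and group were created, you can query the account database (the `user_in_group` helper below is hypothetical, used only for illustration):

```shell
# Hypothetical helper: does user $1 belong to group $2?
# id -nG lists all group names for a user.
user_in_group() {
  id -nG "$1" 2>/dev/null | tr ' ' '\n' | grep -qx "$2"
}

if user_in_group hadoop hadoop; then
  echo "hadoop user is in the hadoop group"
else
  echo "hadoop user/group not set up yet"
fi
```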
Now, change ownership of /usr/local/hadoop
and /usr/local/hadoop_store
:
sudo chown -R hadoop:hadoop /usr/local/hadoop
sudo chown -R hadoop:hadoop /usr/local/hadoop_store
Change your user to hadoop:
su - hadoop
Enter your hadoop user password. Your terminal prompt should now look like:
hadoop@ubuntu:$
Now, type (in Hadoop 2.x the start scripts live under sbin, not bin):
$HADOOP_HOME/sbin/start-all.sh
or
sh /usr/local/hadoop/sbin/start-all.sh