I was using Hadoop in pseudo-distributed mode and everything was working fine. But then I had to restart my computer for some reason. And now when I am trying to start the Hadoop processes again, the namenode is not starting.
If you are facing this issue after rebooting the system, the steps below will work as a workaround.
1) Format the namenode: bin/hadoop namenode -format
2) Start all processes again: bin/start-all.sh
For a permanent fix:
1) Go to conf/core-site.xml and change fs.default.name to your own custom value (see the sketch after these steps).
2) Format the namenode: bin/hadoop namenode -format
3) Start all processes again: bin/start-all.sh
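For reference, here is a minimal sketch of what that core-site.xml property can look like; the hdfs://localhost:9000 URI is only an example value, not one from the original answer:

<configuration>
  <property>
    <!-- URI that clients use to reach the namenode; example value only -->
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>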
Why do most answers here assume that all data needs to be deleted and the namenode reformatted before restarting Hadoop? How do we know the namenode is not making progress but simply taking a long time? It will do this when there is a large amount of data in HDFS. Check the progress in the logs before assuming anything is hung or stuck.
[kadmin@hadoop-node-0 logs]$ tail hadoop-kadmin-namenode-hadoop-node-0.log
...
2016-05-13 18:16:44,405 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: replaying edit log: 117/141 transactions completed. (83%)
2016-05-13 18:16:56,968 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: replaying edit log: 121/141 transactions completed. (86%)
2016-05-13 18:17:06,122 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: replaying edit log: 122/141 transactions completed. (87%)
2016-05-13 18:17:38,321 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: replaying edit log: 123/141 transactions completed. (87%)
2016-05-13 18:17:56,562 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: replaying edit log: 124/141 transactions completed. (88%)
2016-05-13 18:17:57,690 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: replaying edit log: 127/141 transactions completed. (90%)
This was after nearly an hour of waiting on one particular system, and it was still progressing each time I looked. Have patience with Hadoop when bringing the system up, and check the logs before assuming something is hung or not progressing.
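If you would rather watch the replay live than re-run tail, following the log works too; the file name here simply mirrors the example above:

$ tail -f hadoop-kadmin-namenode-hadoop-node-0.log | grep FSEditLogLoader   # prints each new "replaying edit log" progress line as it is written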
Try this:
1) Stop all Hadoop processes: stop-all.sh
2) Remove the tmp folder manually
3) Format the namenode: hadoop namenode -format
4) Start all processes again: start-all.sh
hadoop.tmp.dir in core-site.xml defaults to /tmp/hadoop-${user.name}, which is cleaned after every reboot. Change this to some other directory which doesn't get cleaned on reboot.
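A minimal core-site.xml sketch of that change; the /var/lib/hadoop/tmp path is just an example of a directory that survives reboots:

<configuration>
  <property>
    <!-- base directory for Hadoop's working files; must persist across reboots -->
    <name>hadoop.tmp.dir</name>
    <value>/var/lib/hadoop/tmp</value>
  </property>
</configuration>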
Did you change dfs.name.dir in conf/hdfs-site.xml? Format the namenode after you change it.
$ bin/hadoop namenode -format
$ bin/start-all.sh
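A sketch of that hdfs-site.xml property, assuming an example path; dfs.name.dir is where the namenode keeps its fsimage and edit logs, so it should point somewhere persistent:

<configuration>
  <property>
    <!-- namenode metadata (fsimage + edits); example path, pick one that persists -->
    <name>dfs.name.dir</name>
    <value>/var/lib/hadoop/dfs/name</value>
  </property>
</configuration>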
If anyone is using Hadoop 1.2.1 and is not able to run the namenode, go to core-site.xml and change the property name dfs.default.name to fs.default.name. Then format the namenode using $ hadoop namenode -format. Finally start HDFS using start-dfs.sh and check the running services using jps.
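Putting those steps together as a shell session (a sketch; the process names in the jps comment are what you would expect to see on Hadoop 1.x, not captured output):

$ hadoop namenode -format   # re-initialize the namenode storage directory
$ start-dfs.sh              # start the namenode, datanode(s) and secondary namenode
$ jps                       # expect NameNode, DataNode and SecondaryNameNode in the list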