I have a Hadoop cluster with 18 data nodes. I restarted the name node over two hours ago and the name node is still in safe mode.
I have been searching for why this might be happening, but haven't found anything that explains it.
I had this once, where some blocks were never reported in. I had to force the NameNode to leave safe mode (hadoop dfsadmin -safemode leave) and then run an fsck to delete the missing files.
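The steps above can be sketched as a small script. This is a hedged sketch, not a drop-in fix: it assumes the hdfs CLI is on PATH and that you run it as the HDFS superuser; hadoop dfsadmin is the older spelling of the same command, hdfs dfsadmin the current one. The -list-corruptfileblocks flag is a real fsck option that lists files with blocks fsck considers lost.

```shell
# Sketch: force the NameNode out of safe mode, then audit for lost blocks.
# Commands are held in variables so the intent is visible even on a machine
# without a cluster, where we only print what would be run.
SAFE_LEAVE="hdfs dfsadmin -safemode leave"
FSCK_AUDIT="hdfs fsck / -list-corruptfileblocks"

if command -v hdfs >/dev/null 2>&1; then
  $SAFE_LEAVE    # stop the NameNode waiting for block reports
  $FSCK_AUDIT    # list files whose blocks were never reported
else
  # No hdfs CLI here: show the commands to run on the cluster instead.
  echo "would run: $SAFE_LEAVE"
  echo "would run: $FSCK_AUDIT"
fi
```

Forcing safe mode off does not repair anything by itself; it only lets you run the fsck/cleanup steps that follow.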
Also check the property dfs.namenode.handler.count in hdfs-site.xml. It sets the number of RPC handler threads the NameNode uses for its processing; the default is 10. Too low a value can slow the handling of block reports from the DataNodes and keep the NameNode in safe mode longer.
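For reference, the property goes into hdfs-site.xml like this. The value 60 below is only an illustration: a commonly cited rule of thumb is roughly 20 × ln(cluster size), which for an 18-node cluster lands near 60, but tune it for your own workload rather than copying this number.

```xml
<!-- hdfs-site.xml: number of NameNode RPC handler threads (default 10).
     60 is an illustrative value for ~18 DataNodes, not a recommendation. -->
<property>
  <name>dfs.namenode.handler.count</name>
  <value>60</value>
</property>
```

The NameNode must be restarted for this change to take effect.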
Then check for missing or corrupt blocks: hdfs fsck / | egrep -v '^\.+$' | grep -v replica
To inspect a specific file: hdfs fsck /path/to/corrupt/file -locations -blocks -files
If corrupt blocks are found, remove the affected files: hdfs dfs -rm /file-with-missing-corrupt-blocks
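The egrep/grep pipeline in the fsck command is just a noise filter: fsck prints a dot per healthy file, so '^\.+$' (note the escaped dot) drops the progress lines, and grep -v replica drops per-replica detail lines. A runnable demo on fabricated fsck-style output (the path and block ID below are made up):

```shell
# Fake fsck output: progress dots, two problem lines, one replica-detail
# line, and the final status. Only the interesting lines should survive.
printf '%s\n' \
  '............' \
  '/user/foo/part-0001: CORRUPT blockpool BP-1 block blk_1073741825' \
  '/user/foo/part-0001: MISSING 1 blocks of total size 134217728 B' \
  '0. BP-1:blk_1073741825 len=134217728 Live_repl=0 [replica details]' \
  'Status: CORRUPT' \
  | egrep -v '^\.+$' | grep -v replica
```

On a real cluster the surviving lines name the files to feed into hdfs fsck -files -blocks -locations or to delete.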