Setting fs.default.name in core-site.xml Sets HDFS to Safemode

Backend · Unresolved · 2 answers · 1870 views
死守一世寂寞 2020-12-30 12:11

I installed the Cloudera CDH4 distribution on a single machine in pseudo-distributed mode and successfully tested that it was working correctly (e.g. it can run MapReduce programs).

2 answers
  • 2020-12-30 12:36

    Safemode is an HDFS state in which the file system is mounted read-only; no replication is performed, and files cannot be created or deleted. Filesystem operations that only read the filesystem metadata, like 'ls' in your case, will still work.

    The NameNode can be manually forced to leave safemode with the command $ hadoop dfsadmin -safemode leave. Verify the safemode status with $ hadoop dfsadmin -safemode get, then run $ hadoop dfsadmin -report to see whether it shows any data. If the report still shows no data after leaving safemode, I suspect that communication between the NameNode and the DataNode is not happening. Check the NameNode and DataNode logs after this step.

    The next step would be to try restarting the DataNode process; the last resort is to format the NameNode, which will result in loss of data.
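    The recovery sequence above can be sketched as a shell script. The log paths are typical CDH4 defaults and the small in_safemode helper (which parses the command's "Safe mode is ON"/"Safe mode is OFF" output) are assumptions, not from the original answer; the hadoop invocations are left commented so the sketch can be read without a running cluster.

    ```shell
    #!/bin/sh
    # Sketch of the recovery steps above, assuming a CDH4 pseudo-distributed
    # install. Log paths are typical CDH4 defaults (an assumption).

    # Hypothetical helper: interpret the output of
    #   hadoop dfsadmin -safemode get
    # which prints "Safe mode is ON" or "Safe mode is OFF".
    in_safemode() {
      case "$1" in
        *"Safe mode is ON"*) return 0 ;;
        *) return 1 ;;
      esac
    }

    # 1. Check the current safemode state.
    # status=$(hadoop dfsadmin -safemode get)
    # if in_safemode "$status"; then
    #   # 2. Force the NameNode out of safemode.
    #   hadoop dfsadmin -safemode leave
    # fi
    # 3. See whether any DataNodes are reporting capacity.
    # hadoop dfsadmin -report
    # 4. If the report still shows no data, inspect the daemon logs.
    # tail -n 50 /var/log/hadoop-hdfs/hadoop-hdfs-namenode-*.log
    # tail -n 50 /var/log/hadoop-hdfs/hadoop-hdfs-datanode-*.log
    ```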

  • 2020-12-30 12:38

    The issue stemmed from domain name resolution. The /etc/hosts file needed to be modified so that the Hadoop machine's IP address maps to both localhost and the fully qualified domain name.

    192.168.0.201 hadoop.fully.qualified.domain.com localhost
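    The mapping can be sanity-checked without touching DNS. The check_hosts_entry helper below is a hypothetical sketch (the function name and the FQDN are illustrative, not from the post): it scans a hosts-format file and prints the address a given name maps to, so both names can be confirmed to resolve to the same LAN address.

    ```shell
    #!/bin/sh
    # Hypothetical sanity check for the /etc/hosts fix above: scan a
    # hosts-format file for a hostname and print the IP it maps to.
    check_hosts_entry() {
      # $1 = hosts file, $2 = hostname; prints each matching IP address.
      awk -v host="$2" '
        $0 !~ /^#/ {                    # skip comment lines
          for (i = 2; i <= NF; i++)     # fields 2..N are names/aliases
            if ($i == host) print $1    # field 1 is the IP address
        }' "$1"
    }

    # Usage (both should print the machine's LAN address, e.g. 192.168.0.201):
    #   check_hosts_entry /etc/hosts localhost
    #   check_hosts_entry /etc/hosts hadoop.fully.qualified.domain.com
    ```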
    