Namenode not getting started

暗喜 2020-11-28 19:32

I was using Hadoop in pseudo-distributed mode and everything was working fine. But then I had to restart my computer for some reason. And now when I am trying to start Hadoop, the namenode does not start.

21 Answers
  • 2020-11-28 19:58

    Faced the same problem.

    (1) Always check for typing mistakes in the configuration .xml files, especially in the XML tags.

    (2) Go to the bin directory and run ./start-all.sh

    (3) Then run jps to check whether the processes are running.
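
    A minimal sketch of that check sequence, assuming a Hadoop 1.x layout under $HADOOP_HOME (xmllint is a separate tool, not part of Hadoop, and is just one way to catch XML typos):

    # validate each config file for XML syntax errors
    xmllint --noout $HADOOP_HOME/conf/core-site.xml
    xmllint --noout $HADOOP_HOME/conf/hdfs-site.xml
    xmllint --noout $HADOOP_HOME/conf/mapred-site.xml

    # start all daemons, then list the running Java processes
    cd $HADOOP_HOME/bin
    ./start-all.sh
    jps    # NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker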

  • 2020-11-28 19:59

    After deleting the resource manager's data folder, the problem was gone.
    Formatting the namenode alone did not solve it.
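
    A sketch of that cleanup, assuming Hadoop 2.x start/stop scripts; the data folder location is an assumption, so substitute whatever your resource manager actually uses (check yarn-site.xml and hadoop.tmp.dir):

    # stop the daemons before touching any state directories
    stop-yarn.sh
    stop-dfs.sh

    # RM_DATA_DIR is hypothetical: set it to your resource manager's data folder
    rm -rf "$RM_DATA_DIR"

    start-dfs.sh
    start-yarn.sh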

  • 2020-11-28 20:02

    Instead of formatting the namenode, you may be able to simply restart it. This worked for me:

    sudo service hadoop-master restart

    If HDFS is stuck in safe mode, also try:

    hadoop dfsadmin -safemode leave
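
    For reference, a short sketch of inspecting safe mode before forcing it off (hdfs dfsadmin is the Hadoop 2.x spelling of the same command):

    # report whether the namenode is currently in safe mode
    hdfs dfsadmin -safemode get

    # force it out if it never leaves safe mode on its own
    hdfs dfsadmin -safemode leave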
  • 2020-11-28 20:03

    I was facing the same issue of the namenode not starting. I found a solution using the following steps:

    1. First delete all contents from the temporary folder: rm -Rf <tmp dir> (mine was /usr/local/hadoop/tmp)
    2. Format the namenode: bin/hadoop namenode -format
    3. Start all processes again: bin/start-all.sh

    You may also consider rolling back using a checkpoint (if you had one enabled). A sketch of the full sequence follows below.
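
    Put together, assuming a Hadoop 1.x install at /usr/local/hadoop with the tmp directory from the answer (note that the format step erases all existing HDFS metadata):

    # 1. wipe the temporary directory (destroys existing HDFS data!)
    rm -Rf /usr/local/hadoop/tmp/*

    # 2. re-initialize the namenode's storage directory
    /usr/local/hadoop/bin/hadoop namenode -format

    # 3. bring all daemons back up and verify them
    /usr/local/hadoop/bin/start-all.sh
    jps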

  • 2020-11-28 20:04

    Add the hadoop.tmp.dir property to core-site.xml:

    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
      </property>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/yourname/hadoop/tmp/hadoop-${user.name}</value>
      </property>
    </configuration>
    

    Then format HDFS (Hadoop 2.7.1):

    $ hdfs namenode -format
    

    The default value in core-default.xml is /tmp/hadoop-${user.name}, which will be deleted after reboot.
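
    To confirm which directory is actually in effect after editing core-site.xml, hdfs getconf can print the resolved value:

    # should print your new path, not /tmp/hadoop-<user>
    hdfs getconf -confKey hadoop.tmp.dir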

  • 2020-11-28 20:04
    Here is the solution that worked for me; sharing it for anyone who hits these errors:

    1. First go to /home/hadoop/etc/hadoop, open hdfs-site.xml, and check the paths configured for the namenode and datanode:
    
    <property>
      <name>dfs.name.dir</name>
      <value>file:///home/hadoop/hadoopdata/hdfs/namenode</value>
    </property>

    <property>
      <name>dfs.data.dir</name>
      <value>file:///home/hadoop/hadoopdata/hdfs/datanode</value>
    </property>
    
    2. Check the permissions, group, and owner of the namenode and datanode directories (e.g. /home/hadoop/hadoopdata/hdfs/datanode) and correct any mismatch you find, for example a stale in_use.lock owned by the wrong user:

    chown -R hadoop:hadoop <dir>  to change the owner and group

    chmod -R 755 <dir>  to change the permissions
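
    A sketch of that check-and-fix, assuming the hadoop user and the paths from the answer:

    # inspect the current owner, group, and mode of the storage directories
    ls -ld /home/hadoop/hadoopdata/hdfs/namenode /home/hadoop/hadoopdata/hdfs/datanode

    # hand both trees to the hadoop user and set 755 permissions
    sudo chown -R hadoop:hadoop /home/hadoop/hadoopdata/hdfs
    sudo chmod -R 755 /home/hadoop/hadoopdata/hdfs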
    