NameNode, DataNode not listed by jps

走了就别回头了 2021-01-03 02:50

Environment: ubuntu 14.04, hadoop 2.6

After I run start-all.sh and then jps, DataNode is not listed in the terminal output.



        
6 Answers
  • 2021-01-03 02:57

    Faced the same problem: the NameNode service was not showing in the jps output. Solution: it is due to a permission problem with the directory /usr/local/hadoop_store/hdfs. Change the permissions, format the namenode, and restart Hadoop:

    $sudo chmod -R 755 /usr/local/hadoop_store/hdfs

    $hadoop namenode -format

    $start-all.sh

    $jps

  • 2021-01-03 03:00

    I faced a similar problem: jps was not showing the DataNode.

    Removing the contents of the hdfs folder and changing the folder permissions worked for me.

    sudo rm -r /usr/local/hadoop_store/hdfs/*
    sudo chmod -R 755 /usr/local/hadoop_store/hdfs    
    hadoop namenode -format
    start-all.sh
    jps
    
  • 2021-01-03 03:03

    One thing to remember when setting up passwordless SSH: the command ssh-keygen -t rsa -P "" should be run on the namenode only. The generated public key should then be added to every datanode with ssh-copy-id -i ~/.ssh/id_rsa.pub. Once the keys are in place, no password will be required when starting DFS.
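The steps above can be sketched as follows; note that `datanode1` and `datanode2` are hypothetical hostnames, so substitute the names of your own datanodes:

```shell
# On the namenode only: generate a passphrase-less RSA key pair
# (accept the default key location when prompted).
ssh-keygen -t rsa -P ""

# Copy the public key to each datanode (hostnames below are placeholders).
ssh-copy-id -i ~/.ssh/id_rsa.pub datanode1
ssh-copy-id -i ~/.ssh/id_rsa.pub datanode2

# Verify: this should log in without asking for a password.
ssh datanode1 exit
```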

  • 2021-01-03 03:03

    For this you need to give the correct permissions to your hdfs folder. Then run the commands below:

    1. Create a group: sudo addgroup hadoop
    2. Add your user to it: sudo usermod -a -G hadoop your_user (you can see the current user with the whoami command)
    3. Change the ownership of the hadoop_store directory: sudo chown -R your_user:your_group /usr/local/hadoop_store
    4. Format the namenode again: hdfs namenode -format

    Then start all the services; now when you type jps, the daemons will be listed.
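Put together, and with `hadoop` assumed as the group name from step 1, the sequence looks like this (a sketch, not a tested script; it must run as a user with sudo rights):

```shell
# Create a dedicated group for Hadoop (step 1).
sudo addgroup hadoop

# Add the current user to that group (step 2); whoami prints the user name.
sudo usermod -a -G hadoop "$(whoami)"

# Hand ownership of the HDFS storage directory to that user and group (step 3).
sudo chown -R "$(whoami)":hadoop /usr/local/hadoop_store

# Reformat the namenode (step 4), then restart the daemons.
hdfs namenode -format
```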

  • 2021-01-03 03:07

    FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain java.io.IOException: All directories in dfs.datanode.data.dir are invalid: "/usr/local/hadoop_store/hdfs/datanode/"

    This error may be due to wrong permissions on the /usr/local/hadoop_store/hdfs/datanode/ folder.

    FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode. org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /usr/local/hadoop_store/hdfs/namenode is in an inconsistent state: storage directory does not exist or is not accessible.

    This error may be due to wrong permissions on the /usr/local/hadoop_store/hdfs/namenode folder, or the folder may not exist. To rectify the problem, try the following options:

    OPTION I:

    If you don't have the folder /usr/local/hadoop_store/hdfs, then create and give permission to the folder as follows:

    sudo mkdir /usr/local/hadoop_store/hdfs
    sudo chown -R hadoopuser:hadoopgroup /usr/local/hadoop_store/hdfs
    sudo chmod -R 755 /usr/local/hadoop_store/hdfs
    

    Change hadoopuser and hadoopgroup to your Hadoop username and group name respectively. Now try to start the Hadoop processes. If the problem persists, try Option II.

    OPTION II:

    Remove the contents of /usr/local/hadoop_store/hdfs folder:

    sudo rm -r /usr/local/hadoop_store/hdfs/*
    

    Change folder permission:

    sudo chmod -R 755 /usr/local/hadoop_store/hdfs
    

    Now, start the hadoop processes. It should work.

    NOTE: Post the new logs if error persists.

    UPDATE:

    In case you haven't created the hadoop user and group, do it as follows:

    sudo addgroup hadoop
    sudo adduser --ingroup hadoop hadoop
    

    Now, change ownership of /usr/local/hadoop and /usr/local/hadoop_store:

    sudo chown -R hadoop:hadoop /usr/local/hadoop
    sudo chown -R hadoop:hadoop /usr/local/hadoop_store
    

    Change your user to hadoop:

    su - hadoop
    

    Enter your hadoop user password. Your terminal prompt should now look like:

    hadoop@ubuntu:$

    Now, type:

    $HADOOP_HOME/sbin/start-all.sh

    or

    sh /usr/local/hadoop/sbin/start-all.sh

  • 2021-01-03 03:15

    The solution: first stop Hadoop, then go to /usr/local/hadoop and reformat the namenode:

    bin/hdfs namenode -format

    Then delete the hdfs and tmp directories from your home directory and recreate them:

    rm -rf ~/tmp ~/hdfs
    mkdir ~/tmp
    mkdir ~/hdfs
    chmod 750 ~/hdfs
    

    Go to the Hadoop directory and start HDFS:

    sbin/start-dfs.sh
    

    jps will then show the DataNode.
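After any of the fixes in this thread, you can check the jps output for the expected daemons. The helper below is a small sketch (`check_daemons` is a made-up name, not a Hadoop tool): it takes jps output as an argument and reports each core HDFS daemon that is missing.

```shell
# check_daemons: scan jps output for the core HDFS daemons and
# print a line for each one that is not running.
check_daemons() {
  for d in NameNode DataNode SecondaryNameNode; do
    if ! printf '%s\n' "$1" | grep -q "$d"; then
      echo "missing: $d"
    fi
  done
}

# In practice you would call: check_daemons "$(jps)"
# Demonstration with sample jps output (PIDs are made up):
check_daemons "2817 NameNode
3021 Jps"
# prints:
#   missing: DataNode
#   missing: SecondaryNameNode
```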
