Couldn't start Hadoop datanode normally

Submitted by 血红的双手。 on 2019-12-06 07:50:18

Question


I am trying to install Hadoop 2.2.0 and I am getting the following error while starting the datanode service. Please help me resolve this issue. Thanks in advance.

    2014-03-11 08:48:16,406 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/prassanna/usr/local/hadoop/yarn_data/hdfs/datanode/in_use.lock acquired by nodename 3627@prassanna-Studio-1558
    2014-03-11 08:48:16,426 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool BP-611836968-127.0.1.1-1394507838610 (storage id DS-1960076343-127.0.1.1-50010-1394127604582) service to localhost/127.0.0.1:9000
    java.io.IOException: Incompatible clusterIDs in /home/prassanna/usr/local/hadoop/yarn_data/hdfs/datanode: namenode clusterID = CID-fb61aa70-4b15-470e-a1d0-12653e357a10; datanode clusterID = CID-8bf63244-0510-4db6-a949-8f74b50f2be9
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:837)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:808)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:280)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:222)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:664)
        at java.lang.Thread.run(Thread.java:662)
    2014-03-11 08:48:16,427 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool BP-611836968-127.0.1.1-1394507838610 (storage id DS-1960076343-127.0.1.1-50010-1394127604582) service to localhost/127.0.0.1:9000
    2014-03-11 08:48:16,532 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool BP-611836968-127.0.1.1-1394507838610 (storage id DS-1960076343-127.0.1.1-50010-1394127604582)
    2014-03-11 08:48:18,532 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
    2014-03-11 08:48:18,534 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
    2014-03-11 08:48:18,536 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
    /**********************************
    SHUTDOWN_MSG: Shutting down DataNode at prassanna-Studio-1558/127.0.1.1


Answer 1:


That simply shows that the datanode tried to start up, hit an exception, and died.

Please check the datanode log under the logs folder in the Hadoop installation folder (unless you changed that config) for exceptions. It usually points to a configuration issue of some kind, especially network settings (/etc/hosts) related, but there are quite a few possibilities.
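Here the relevant exception is already in the question's log: the FATAL line reports Incompatible clusterIDs, which typically means the namenode was reformatted while the datanode still kept data from the old cluster. A minimal sketch of the usual fix, assuming the datanode holds no data you need to keep (paths taken from the question's log):

    # stop HDFS before touching datanode storage
    stop-dfs.sh
    # remove the stale datanode storage left over from the old cluster
    rm -rf /home/prassanna/usr/local/hadoop/yarn_data/hdfs/datanode/*
    # on restart the datanode re-registers with the namenode's current clusterID
    start-dfs.sh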




Answer 2:


Make sure you have the correct configuration and the right paths. See the linked guide for Running Hadoop on Ubuntu.

I used that guide to set up Hadoop on my machine and it works fine.




Answer 3:


Refer to this:

1. Check JAVA_HOME:

    readlink -f $(which java) 
    /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java 

2. If Java is not available, install it with:

    sudo apt-get install default-jdk 

then rerun step 1 and check in the terminal:

    java -version 
    javac -version 

3. Configure SSH

Hadoop requires SSH access to manage its nodes, i.e. remote machines plus your local machine if you want to use Hadoop on it (which is what we want to do in this short tutorial). For our single-node setup of Hadoop, we therefore need to configure SSH access to localhost for the user.

    sudo apt-get install ssh
    sudo su hadoop
    ssh-keygen -t rsa -P ""
    cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
    ssh localhost

Download and extract hadoop-2.7.3 (choose a directory with read/write permission).

Set Environment Variables

    sudo gedit .bashrc
    source .bashrc
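A minimal set of exports to append to ~/.bashrc before sourcing it, assuming Hadoop was extracted to /usr/local/hadoop and OpenJDK 7 from step 1 (adjust both paths to your machine):

    # Hadoop installation root (assumed: extracted to /usr/local/hadoop)
    export HADOOP_HOME=/usr/local/hadoop
    # Java location found via readlink in step 1
    export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
    # put the hadoop/hdfs binaries and the start/stop scripts on the PATH
    export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin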

Setup Configuration Files

The following files will have to be modified to complete the Hadoop setup:

~/.bashrc   (Already done)
(PATH)/etc/hadoop/hadoop-env.sh 
(PATH)/etc/hadoop/core-site.xml 
(PATH)/etc/hadoop/mapred-site.xml.template 
(PATH)/etc/hadoop/hdfs-site.xml 

    gedit (PATH)/etc/hadoop/hadoop-env.sh

Set JAVA_HOME inside that file:

    export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64

    gedit (PATH)/etc/hadoop/core-site.xml

The (PATH)/etc/hadoop/core-site.xml file contains configuration properties that Hadoop uses when starting up. This file can be used to override the default settings that Hadoop starts with.

First, create the directory that will hold Hadoop's temporary files:

    sudo mkdir -p /app/hadoop/tmp
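The directory must also be writable by the user that runs Hadoop, otherwise the daemons fail with permission errors. A sketch, assuming you run Hadoop as the hadoop user (and group) from the SSH step:

    # give the hadoop user (assumed from the SSH step) ownership of the tmp dir
    sudo chown -R hadoop:hadoop /app/hadoop/tmp
    sudo chmod 750 /app/hadoop/tmp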

Open the file and enter the following in between the <configuration></configuration> tag:

    gedit /usr/local/hadoop/etc/hadoop/core-site.xml

    <configuration>
     <property>
      <name>hadoop.tmp.dir</name>
      <value>/app/hadoop/tmp</value>
      <description>A base for other temporary directories.</description>
     </property>

     <property>
      <name>fs.default.name</name>
      <value>hdfs://localhost:54310</value>
      <description>The name of the default file system.  A URI whose
      scheme and authority determine the FileSystem implementation.  The
      uri's scheme determines the config property (fs.SCHEME.impl) naming
      the FileSystem implementation class.  The uri's authority is used to
      determine the host, port, etc. for a filesystem.</description>
     </property>
    </configuration>


(PATH)/etc/hadoop/mapred-site.xml 

By default, the (PATH)/etc/hadoop/ folder contains a (PATH)/etc/hadoop/mapred-site.xml.template file, which has to be copied or renamed to mapred-site.xml:

    cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml

The mapred-site.xml file is used to specify which framework is being used for MapReduce.

We need to enter the following content in between the <configuration></configuration> tag:

    <configuration> 
     <property> 
      <name>mapred.job.tracker</name> 
      <value>localhost:54311</value> 
      <description>The host and port that the MapReduce job tracker runs 
      at.  If "local", then jobs are run in-process as a single map 
      and reduce task. 
      </description> 
     </property> 
    </configuration>
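Note that mapred.job.tracker is a Hadoop 1.x (JobTracker) property. Since this guide installs Hadoop 2.7.3 and starts YARN below, a YARN-style mapred-site.xml would instead set the following (a sketch, placed between the same <configuration></configuration> tags):

    <configuration>
     <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
     </property>
    </configuration>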

(PATH)/etc/hadoop/hdfs-site.xml 

The (PATH)/etc/hadoop/hdfs-site.xml file needs to be configured for each host in the cluster that is being used.

It is used to specify the directories that will hold the namenode and datanode data on that host.

Before editing this file, we need to create the two directories that will contain the namenode and datanode data for this Hadoop installation. This can be done using the following commands:

    sudo mkdir -p /usr/local/hadoop_store/hdfs/namenode
    sudo mkdir -p /usr/local/hadoop_store/hdfs/datanode
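As with the temp directory, these directories must be owned by the user that runs Hadoop, or the namenode format and datanode startup will fail with permission errors. Assuming the hadoop user again:

    # hand the whole store to the user that runs the daemons (assumed: hadoop)
    sudo chown -R hadoop:hadoop /usr/local/hadoop_store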

Open the file and enter the following content in between the <configuration></configuration> tag:

    gedit (PATH)/etc/hadoop/hdfs-site.xml 

    <configuration> 
     <property> 
      <name>dfs.replication</name> 
      <value>1</value> 
      <description>Default block replication. 
      The actual number of replications can be specified when the file is created. 
      The default is used if replication is not specified in create time. 
      </description> 
     </property> 
     <property> 
       <name>dfs.namenode.name.dir</name> 
       <value>file:/usr/local/hadoop_store/hdfs/namenode</value> 
     </property> 
     <property> 
       <name>dfs.datanode.data.dir</name> 
       <value>file:/usr/local/hadoop_store/hdfs/datanode</value> 
     </property> 
    </configuration> 

Format the New Hadoop Filesystem

Now, the Hadoop file system needs to be formatted so that we can start to use it. The format command must be issued with write permission, since it creates a current directory under the /usr/local/hadoop_store/ folder:

    bin/hadoop namenode -format 

or

    bin/hdfs namenode -format
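Be aware that every run of namenode -format generates a new clusterID for the namenode. Reformatting while an old datanode directory still exists is exactly what produces the Incompatible clusterIDs error from the question, so if you ever reformat, clear the datanode storage first (a sketch, assuming the datanode directory configured above holds no data you need):

    # only if the datanode holds no data you need to keep:
    rm -rf /usr/local/hadoop_store/hdfs/datanode/*
    bin/hdfs namenode -format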

HADOOP SETUP IS DONE

Now start HDFS and YARN:

    start-dfs.sh
    start-yarn.sh

CHECK URL: http://localhost:50070/
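You can also confirm that all daemons came up by running jps. Typical output for this single-node setup (PIDs will differ):

    jps

    4211 NameNode
    4374 DataNode
    4563 SecondaryNameNode
    4711 ResourceManager
    4820 NodeManager
    4901 Jps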

FOR STOPPING HDFS AND YARN

    stop-dfs.sh
    stop-yarn.sh


Source: https://stackoverflow.com/questions/22240488/couldnt-start-hadoop-datanode-normally
