Hadoop - java.net.ConnectException: Connection refused

Submitted by 故事扮演 on 2020-01-12 18:46:14

Question


I want to connect to HDFS (on localhost) and I get this error:

Call From despubuntu-ThinkPad-E420/127.0.1.1 to localhost:54310 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused

I followed all the steps in other posts, but I could not solve my problem. I am using Hadoop 2.7 and these are my configurations:

core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/despubuntu/hadoop/name/data</value>
  </property>

  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
</configuration>

hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

I run /usr/local/hadoop/bin/hdfs namenode -format and /usr/local/hadoop/sbin/start-all.sh

But when I type "jps", the result is:

10650 Jps
4162 Main
5255 NailgunRunner
20831 Launcher

I need help...


Answer 1:


Make sure that DFS, which is set to port 54310 in your core-site.xml, is actually started. You can check with the jps command. You can start it with sbin/start-dfs.sh
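For reference, a minimal check might look like this (a sketch, assuming the /usr/local/hadoop install path from the question):

$ jps                                    # a healthy single-node HDFS should list NameNode, DataNode, SecondaryNameNode
$ /usr/local/hadoop/sbin/start-dfs.sh    # start the HDFS daemons if they are missing
$ jps                                    # verify NameNode and DataNode now appear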




Answer 2:


I guess that you didn't set up your Hadoop cluster correctly. Please follow these steps:

Step 1: begin by setting up .bashrc:

vi $HOME/.bashrc

Put the following lines at the end of the file (change the Hadoop home to match your installation):

# Set Hadoop-related environment variables
export HADOOP_HOME=/usr/local/hadoop

# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on)
export JAVA_HOME=/usr/lib/jvm/java-6-sun

# Some convenient aliases and functions for running Hadoop-related commands
unalias fs &> /dev/null
alias fs="hadoop fs"
unalias hls &> /dev/null
alias hls="fs -ls"

# If you have LZO compression enabled in your Hadoop cluster and
# compress job outputs with LZOP (not covered in this tutorial):
# Conveniently inspect an LZOP compressed file from the command
# line; run via:
#
# $ lzohead /hdfs/path/to/lzop/compressed/file.lzo
#
# Requires installed 'lzop' command.
#
lzohead () {
    hadoop fs -cat $1 | lzop -dc | head -1000 | less
}

# Add Hadoop bin/ directory to PATH
export PATH=$PATH:$HADOOP_HOME/bin
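After editing, a small follow-up (assuming the paths above) is to reload .bashrc so the new variables take effect in the current shell and verify them:

$ source $HOME/.bashrc
$ echo $HADOOP_HOME        # should print /usr/local/hadoop
$ which hadoop             # should resolve to $HADOOP_HOME/bin/hadoop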

Step 2: edit hadoop-env.sh as follows:

# The java implementation to use.  Required.
export JAVA_HOME=/usr/lib/jvm/java-6-sun
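The java-6-sun path is just an example; as a quick check (exact JVM paths vary by distribution, so treat these as assumptions), confirm which JVM is actually installed before setting JAVA_HOME:

$ ls /usr/lib/jvm/                 # list installed JVMs; point JAVA_HOME at one that exists
$ readlink -f $(which java)        # shows the real path of the default java binary
$ $JAVA_HOME/bin/java -version     # after also exporting JAVA_HOME in your shell, this should print a version, not an error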

Step 3: now create a directory and set the required ownership and permissions:

$ sudo mkdir -p /app/hadoop/tmp
$ sudo chown hduser:hadoop /app/hadoop/tmp
# ...and if you want to tighten up security, chmod from 755 to 750...
$ sudo chmod 750 /app/hadoop/tmp

Step 4: edit core-site.xml:

<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
</property>

Step 5: edit mapred-site.xml:

<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
</property>

Step 6: edit hdfs-site.xml:

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
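As a quick sanity check (a sketch, assuming the /usr/local/hadoop install path from the question), you can ask Hadoop which values it actually picked up from these files:

$ /usr/local/hadoop/bin/hdfs getconf -confKey fs.default.name   # expect hdfs://localhost:54310
$ /usr/local/hadoop/bin/hdfs getconf -confKey dfs.replication   # expect 1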

Finally, format your HDFS (you need to do this the first time you set up a Hadoop cluster):

 $ /usr/local/hadoop/bin/hadoop namenode -format
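After formatting, one way to verify the setup end to end (same assumed install path) is to start the HDFS daemons and list the root of the filesystem:

$ /usr/local/hadoop/sbin/start-dfs.sh
$ jps                                      # NameNode, DataNode and SecondaryNameNode should be listed
$ /usr/local/hadoop/bin/hdfs dfs -ls /     # should return without "Connection refused"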

Hope this helps.




Answer 3:


I got the same issue. When you type jps, you should see the NameNode, DataNode, ResourceManager and NodeManager daemons running. So just run start-all.sh; then all the daemons start, and you can access HDFS.




Answer 4:


First check whether the Java processes are running by typing the jps command on the command line. When you run jps, the following processes must be running:

  • DataNode
  • jps
  • NameNode
  • SecondaryNameNode

If these processes are not running, first start the NameNode using the following command: start-dfs.sh

This worked out for me and removed the error you stated.
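If one of the daemons listed above still does not appear after start-dfs.sh, its log usually says why. A rough sketch of where to look (log file names depend on your user and hostname, so these patterns are assumptions):

$ ls $HADOOP_HOME/logs/                                   # per-daemon .log and .out files
$ tail -n 50 $HADOOP_HOME/logs/hadoop-*-namenode-*.log    # look for bind errors or a missing/corrupt name directory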




Answer 5:


I was getting a similar error. Upon checking, I found that my NameNode service was in a stopped state.

Check the status of the NameNode: sudo status hadoop-hdfs-namenode

If it is not in the started/running state,

start the NameNode service: sudo start hadoop-hdfs-namenode

Do keep in mind that it takes time before the NameNode service becomes fully functional after a restart. It reads all the HDFS edits into memory. You can check the progress of this in /var/log/hadoop-hdfs/ using the command tail -f /var/log/hadoop-hdfs/{latest log file}
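A small sketch of that check sequence (the service names assume a packaged, e.g. CDH-style, install as in this answer; the port matches the fs.default.name from the question):

$ sudo status hadoop-hdfs-namenode                # or: sudo service hadoop-hdfs-namenode status
$ sudo start hadoop-hdfs-namenode                 # start it if stopped
$ tail -f /var/log/hadoop-hdfs/*namenode*.log     # watch it replay the edit log until startup completes
$ netstat -tlnp | grep 54310                      # once up, the NameNode RPC port should be listening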



Source: https://stackoverflow.com/questions/29905388/hadoop-java-net-connectexception-connection-refused
