Hadoop - java.net.ConnectException: Connection refused

Submitted by 旧巷老猫 on 2019-12-04 04:45:30
nikk

Make sure that DFS, which is set to port 9000 in core-site.xml, is actually started. You can check which daemons are running with the jps command, and you can start DFS with sbin/start-dfs.sh.
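A quick way to see whether anything is actually listening on that port is a sketch like the following, using bash's /dev/tcp pseudo-device (the host and port are the ones assumed above; adjust them to match your core-site.xml):

```shell
#!/usr/bin/env bash
# Sketch: probe a TCP port to see whether a daemon is accepting connections.
# localhost:9000 matches the port mentioned above; change it for your setup.
port_open() {
  local host=$1 port=$2
  # bash's /dev/tcp pseudo-device attempts a real TCP connect
  (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null
}

if port_open localhost 9000; then
  echo "something is listening on port 9000"
else
  echo "connection refused - start DFS with sbin/start-dfs.sh"
fi
```

If this prints "connection refused", the ConnectException in the question is expected: no NameNode is bound to that address yet.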

I guess that you didn't set up your Hadoop cluster correctly. Please follow these steps:

Step 1: begin by setting up .bashrc:

vi $HOME/.bashrc

Put the following lines at the end of the file (change the Hadoop home to match your installation):

# Set Hadoop-related environment variables
export HADOOP_HOME=/usr/local/hadoop

# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on)
export JAVA_HOME=/usr/lib/jvm/java-6-sun

# Some convenient aliases and functions for running Hadoop-related commands
unalias fs &> /dev/null
alias fs="hadoop fs"
unalias hls &> /dev/null
alias hls="fs -ls"

# If you have LZO compression enabled in your Hadoop cluster and
# compress job outputs with LZOP (not covered in this tutorial):
# Conveniently inspect an LZOP compressed file from the command
# line; run via:
#
# $ lzohead /hdfs/path/to/lzop/compressed/file.lzo
#
# Requires installed 'lzop' command.
#
lzohead () {
    hadoop fs -cat "$1" | lzop -dc | head -1000 | less
}

# Add Hadoop bin/ directory to PATH
export PATH=$PATH:$HADOOP_HOME/bin
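The changes above only take effect in new shells unless you re-source the file. A small sketch of verifying that the variables took (the two exports reproduce the relevant .bashrc lines so the snippet is self-contained; in a real session you would run `source ~/.bashrc` instead):

```shell
# Sketch: confirm the .bashrc additions are active in the current shell.
# These exports stand in for `source ~/.bashrc`; the path is the one
# assumed in this answer.
export HADOOP_HOME=/usr/local/hadoop
export PATH="$PATH:$HADOOP_HOME/bin"

[ -n "$HADOOP_HOME" ] && echo "HADOOP_HOME=$HADOOP_HOME"
case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) echo "hadoop bin is on PATH" ;;
  *)                      echo "PATH is missing $HADOOP_HOME/bin" ;;
esac
```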

Step 2: edit hadoop-env.sh as follows:

# The java implementation to use.  Required.
export JAVA_HOME=/usr/lib/jvm/java-6-sun
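If you are unsure what to put for JAVA_HOME, it is simply the directory two levels above the java binary. A sketch (the helper name is mine, not a Hadoop command):

```shell
# Hypothetical helper: derive JAVA_HOME from the full path of the java
# binary, e.g. /usr/lib/jvm/java-6-sun/bin/java -> /usr/lib/jvm/java-6-sun
java_home_from_bin() {
  dirname "$(dirname "$1")"
}

# On a live system, resolve symlinks first:
#   java_home_from_bin "$(readlink -f "$(command -v java)")"
java_home_from_bin /usr/lib/jvm/java-6-sun/bin/java
# -> prints /usr/lib/jvm/java-6-sun
```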

Step 3: now create a directory and set the required ownership and permissions:

$ sudo mkdir -p /app/hadoop/tmp
$ sudo chown hduser:hadoop /app/hadoop/tmp
# ...and if you want to tighten up security, chmod from 755 to 750...
$ sudo chmod 750 /app/hadoop/tmp
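To double-check the ownership and mode afterwards, something like this works (a sketch using a throwaway directory so it is self-contained; on your machine, point stat at /app/hadoop/tmp instead — note that `stat -c` is the GNU/Linux form):

```shell
# Sketch: verify a directory carries the expected permission bits.
# A temp dir stands in for /app/hadoop/tmp here.
dir=$(mktemp -d)
chmod 750 "$dir"

mode=$(stat -c %a "$dir")      # numeric mode, e.g. 750
echo "mode of $dir is $mode"
[ "$mode" = 750 ] && echo "permissions match"

rm -rf "$dir"
```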

Step 4: edit core-site.xml:

<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
</property>
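For completeness: in an actual core-site.xml these property elements must sit inside a single <configuration> element, so the whole file looks roughly like this (the <description> text is optional and mine):

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
    <description>Base for Hadoop's other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
    <description>URI of the default file system.</description>
  </property>
</configuration>
```

Whatever port you put in fs.default.name (54310 here, 9000 in other setups) is the one clients will connect to, and the one to check when you get Connection refused.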

Step 5: edit mapred-site.xml:

<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
</property>

Step 6: edit hdfs-site.xml:

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>

Finally, format your HDFS (you only need to do this the first time you set up a Hadoop cluster):

 $ /usr/local/hadoop/bin/hadoop namenode -format

Hope this helps.

I got the same issue. Typing jps shows whether the NameNode, DataNode, ResourceManager, and NodeManager daemons are running. If they are not, just run start-all.sh; once all the daemons are up, you can access HDFS.

First, check whether the Java processes are running by typing the jps command. The following processes are mandatory in the jps output:

  • DataNode
  • NameNode
  • SecondaryNameNode
  • Jps (the jps tool itself)

If these processes are not running, first start the HDFS daemons with the following command: start-dfs.sh

This worked for me and removed the error you stated.
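The daemon check above can be sketched as a small script. The `required` list and function name are mine; here it parses a canned jps listing so the snippet is self-contained, but on a real node you would feed it live `jps` output:

```shell
# Sketch: flag required HDFS daemons that are missing from a jps listing.
required="NameNode DataNode SecondaryNameNode"

check_daemons() {
  local jps_output=$1 missing=""
  for d in $required; do
    case " $jps_output " in
      *" $d "*) ;;                    # daemon present
      *) missing="$missing $d" ;;     # daemon absent
    esac
  done
  echo "missing:${missing}"
}

# Sample listing where the NameNode has not been started:
check_daemons "2201 DataNode 2302 SecondaryNameNode 2400 Jps"
# -> prints "missing: NameNode"
```

An empty "missing:" line means all required daemons are present and the Connection refused error is likely caused by something else (wrong port, firewall, or hostname resolution).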

I was getting a similar error. Upon checking, I found that my namenode service was in the stopped state.

Check the status of the namenode:

sudo status hadoop-hdfs-namenode

If it is not in the started/running state, start the namenode service:

sudo start hadoop-hdfs-namenode

Keep in mind that it takes time for the namenode service to become fully functional after a restart, because it reads all the HDFS edits into memory. You can check the progress in /var/log/hadoop-hdfs/ with: tail -f /var/log/hadoop-hdfs/{Latest log file}
