Hadoop cluster setup - java.net.ConnectException: Connection refused

心在旅途 2020-11-29 02:31

I want to set up a Hadoop cluster in pseudo-distributed mode. I managed to perform all the setup steps, including starting up a Namenode, Datanode, Jobtracker and a Tasktracker.

15 Answers
  • 2020-11-29 03:13

    In my experience:

    15/02/22 18:23:04 WARN util.NativeCodeLoader: Unable to load native-hadoop
    library for your platform... using builtin-java classes where applicable
    

    You may have a 64-bit OS but a 32-bit Hadoop installation; refer to this.
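
    A quick way to check this is to compare the machine architecture with the bitness of the bundled native library (a sketch, assuming HADOOP_HOME is set and the library sits in the default lib/native location):

    # machine architecture: x86_64 means a 64-bit OS
    uname -m
    # bitness of the Hadoop native library: the ELF header reports 32-bit or 64-bit
    file $HADOOP_HOME/lib/native/libhadoop.so*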

    java.net.ConnectException: Call From marta-komputer/127.0.1.1 to
    localhost:9000 failed on connection exception: java.net.ConnectException: 
    connection refused; For more details see:   
    http://wiki.apache.org/hadoop/ConnectionRefused
    

    This problem relates to your SSH public-key authorization. Please provide details about your SSH setup.

    Please refer to this link for the complete steps.

    Also, let us know whether

    cat $HOME/.ssh/authorized_keys
    

    returns any result or not.
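
    If passwordless SSH to localhost is not set up yet, a typical sequence looks like this (a sketch, not from the original answer; adjust the key type and paths to your environment):

    # generate a key pair with an empty passphrase (skip if you already have one)
    ssh-keygen -t rsa -P "" -f $HOME/.ssh/id_rsa
    # authorize the key for logins to this machine
    cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
    chmod 600 $HOME/.ssh/authorized_keys
    # verify that ssh to localhost works without a password prompt
    ssh localhost exit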

  • 2020-11-29 03:15

    From the netstat output you can see that the process is listening on address 127.0.0.1:

    tcp        0      0 127.0.0.1:9000          0.0.0.0:*  ...
    

    From the exception message you can see that it tries to connect to address 127.0.1.1:

    java.net.ConnectException: Call From marta-komputer/127.0.1.1 to localhost:9000 failed ...
    

    Further down, the exception mentions:

    For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
    

    On that page you find:

    Check that there isn't an entry for your hostname mapped to 127.0.0.1 or 127.0.1.1 in /etc/hosts (Ubuntu is notorious for this)

    So the conclusion is to remove this line from your /etc/hosts:

    127.0.1.1       marta-komputer
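
    If you prefer to do it from the command line, something like this works (a sketch; it keeps a backup and comments the entry out rather than deleting it):

    sudo sed -i.bak 's/^127\.0\.1\.1/# 127.0.1.1/' /etc/hosts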
    
  • 2020-11-29 03:20

    I had a similar problem to the OP's. As the terminal output suggested, I went to http://wiki.apache.org/hadoop/ConnectionRefused

    I tried changing my /etc/hosts file as suggested there, i.e. removing the 127.0.1.1 line, but as the OP noted, that just creates another error.

    So in the end I left it as is. The following is my /etc/hosts:

    127.0.0.1       localhost.localdomain   localhost
    127.0.1.1       linux
    # The following lines are desirable for IPv6 capable hosts
    ::1     ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
    

    In the end, I found that my namenode had not started correctly: when you type sudo netstat -lpten | grep java in the terminal, no JVM process shows up listening on port 9000.

    So I made two directories, one for the namenode and one for the datanode (if you have not done so already). You don't have to put them where I put mine; adjust the paths to your Hadoop directory, i.e.

    mkdir -p /home/hadoopuser/hadoop-2.6.2/hdfs/namenode
    mkdir -p /home/hadoopuser/hadoop-2.6.2/hdfs/datanode
    

    Then I reconfigured my hdfs-site.xml:

    <configuration>
        <property>
            <name>dfs.replication</name>
            <value>1</value>
        </property>
       <property>
            <name>dfs.namenode.name.dir</name>
            <value>file:/home/hadoopuser/hadoop-2.6.2/hdfs/namenode</value>
        </property>
        <property>
            <name>dfs.datanode.data.dir</name>
            <value>file:/home/hadoopuser/hadoop-2.6.2/hdfs/datanode</value>
        </property>
    </configuration>
    

    In the terminal, stop HDFS and YARN with the stop-dfs.sh and stop-yarn.sh scripts and start them again with start-dfs.sh and start-yarn.sh; both sets of scripts live in your Hadoop directory's sbin folder, in my case /home/hadoopuser/hadoop-2.6.2/sbin/, as sketched below.
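
    A minimal restart sequence, assuming the sbin path above:

    cd /home/hadoopuser/hadoop-2.6.2/sbin
    # stop everything first
    ./stop-yarn.sh
    ./stop-dfs.sh
    # then bring HDFS and YARN back up
    ./start-dfs.sh
    ./start-yarn.sh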

    After everything has started, type jps in your terminal to check that the JVM processes are running correctly. It should show something like the following:

    15678 NodeManager
    14982 NameNode
    15347 SecondaryNameNode
    23814 Jps
    15119 DataNode
    15548 ResourceManager
    

    Then use netstat again to see whether your namenode is listening on port 9000:

    sudo netstat -lpten | grep java
    

    If you successfully set up the namenode, you should see the following in your terminal output.

    tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 1001 175157 14982/java

    Then try the command hdfs dfs -mkdir /user/hadoopuser. If it executes successfully, you can list your HDFS user directory with hdfs dfs -ls /user.
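
    For example (assuming the same hadoopuser home directory as above):

    # -p also creates /user if it does not exist yet
    hdfs dfs -mkdir -p /user/hadoopuser
    # the new directory should now show up here
    hdfs dfs -ls /user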

  • 2020-11-29 03:20

    Make sure HDFS is online. Start it with $HADOOP_HOME/sbin/start-dfs.sh. Once you do that, your test with telnet localhost 9001 should work.
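
    Put together, a minimal check (assuming 9001 is the port your fs.default.name points to):

    $HADOOP_HOME/sbin/start-dfs.sh
    # a "Connected to localhost" message means the NameNode port is reachable
    telnet localhost 9001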

  • 2020-11-29 03:21

    In /etc/hosts:

    1. Add this line:

    your-ip-address your-host-name

    example: 192.168.1.8 master

    2. Delete the line with 127.0.1.1 (that entry causes the loopback problem).

    Then, in your core-site.xml, change localhost to your IP or hostname.

    Now, restart the cluster.
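
    To double-check the edit afterwards (a sketch, using the example values above):

    # the 127.0.1.1 line should be gone and your real address mapped to your hostname
    grep -n -e '127.0.1.1' -e 'master' /etc/hosts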

  • 2020-11-29 03:22

    Check your firewall settings and set

      <property>
          <name>fs.default.name</name>
          <value>hdfs://MachineName:9000</value>
      </property>
    

    replacing localhost with your machine name.
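
    For example, on Ubuntu you can check whether a firewall rule is blocking the NameNode port like this (an illustration; your distribution may use a different tool):

    # show active ufw rules, if ufw is in use
    sudo ufw status verbose
    # or look for iptables rules involving port 9000
    sudo iptables -L -n | grep 9000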
