Question:
I've set up a distributed Hadoop environment within VirtualBox: 4 virtual Ubuntu 11.10 installations, one acting as the master node, the other three as slaves. I followed this tutorial to get the single-node version up and running and then converted to the fully-distributed version. It was working just fine when I was running 11.04; however, when I upgraded to 11.10, it broke. Now all my slaves' logs show the following error message, repeated ad nauseam:
INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.1.10:54310. Already tried 0 time(s).
INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.1.10:54310. Already tried 1 time(s).
INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.1.10:54310. Already tried 2 time(s).
And so on. I've found other instances of this error message on the Internet (and StackOverflow) but none of the solutions have worked (tried changing the core-site.xml and mapred-site.xml entries to be the IP address rather than hostname; quadruple-checked /etc/hosts on all slaves and master; master can SSH password-less into all slaves). I even tried reverting each slave back to a single-node setup, and they would all work fine in this case (on that note, the master always works fine as both a Datanode and the Namenode).
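For anyone retracing those checks, a quick way to re-verify name resolution and passwordless SSH from the master is a small loop. This is only a sketch, and it assumes the slave hostnames are slave1 through slave3 as in the /etc/hosts files shown further down:

# Run on the master: check what each slave name resolves to and that SSH works without a password
for host in slave1 slave2 slave3; do
    getent hosts "$host"                   # should print the 192.168.1.x address, not a 127.x one
    ssh -o BatchMode=yes "$host" hostname  # BatchMode fails instead of prompting for a password
done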
The only symptom I've found that would seem to give a lead is that from any of the slaves, when I attempt a telnet 192.168.1.10 54310, I get Connection refused, suggesting there is some rule blocking access (which must have gone into effect when I upgraded to 11.10).
My /etc/hosts.allow has not changed, however. I tried the rule ALL: 192.168.1., but it did not change the behavior.
Oh yes, and netstat on the master clearly shows tcp ports 54310 and 54311 are listening.
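Listening is only half the story, though; the address the ports are bound to matters as much as the ports themselves. A minimal check, assuming netstat from net-tools is installed (ss works similarly on newer systems):

# Run on the master: show which address ports 54310/54311 are bound to
sudo netstat -tlnp | grep -E ':5431[01]'
# 127.0.0.1:54310 or 127.0.1.1:54310 here means remote slaves cannot reach it;
# 192.168.1.10:54310 or 0.0.0.0:54310 means remote connections can get in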
Anyone have any suggestions to get the slave Datanodes to recognize the Namenode?
EDIT #1: In doing some poking around with nmap (see comments on this post), I'm thinking the issue is in my /etc/hosts files. This is what is listed for the master VM:
127.0.0.1    localhost
127.0.1.1    master
192.168.1.10 master
192.168.1.11 slave1
192.168.1.12 slave2
192.168.1.13 slave3
For each slave VM:
127.0.0.1    localhost
127.0.1.1    slaveX
192.168.1.10 master
192.168.1.1X slaveX
Unfortunately, I'm not sure what I changed, but the NameNode is now always dying with an exception about trying to bind a port that's "already in use" (127.0.1.1:54310). I'm clearly doing something wrong with the hostnames and IP addresses, but I'm really not sure what it is. Thoughts?
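The 127.0.1.1 address in that exception hints at where things go wrong: with both a 127.0.1.1 and a 192.168.1.10 entry for master, the resolver can hand Hadoop the loopback-style address. A quick, hedged way to see which address a hostname resolves to (assuming the standard glibc getent tool):

# Run on the master: see which addresses "master" resolves to
getent hosts master     # typically returns only the first matching /etc/hosts entry
getent ahosts master    # lists all addresses the resolver will consider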
Answer 1:
I found it! By commenting out the second line of the /etc/hosts file (the one with the 127.0.1.1 entry), netstat shows the NameNode ports binding to the 192.168.1.10 address instead of the local one, and the slave VMs found it. Ahhhhhhhh. Mystery solved! Thanks for everyone's help.
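For concreteness, a sketch of what the fixed master /etc/hosts looks like under this answer, with the 127.0.1.1 line commented out (the exact spacing is immaterial):

127.0.0.1    localhost
# 127.0.1.1  master     <- commented out so "master" no longer resolves to a loopback-style address
192.168.1.10 master
192.168.1.11 slave1
192.168.1.12 slave2
192.168.1.13 slave3

After restarting the NameNode, netstat should show 192.168.1.10:54310 in the LISTEN line rather than 127.0.1.1:54310.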
Answer 2:
This solution worked for me, i.e. make sure that the name used in the property in core-site.xml and mapred-site.xml:
<property>
  <name>fs.default.name</name>
  <value>hdfs://master:54310</value>
  <final>true</final>
</property>
i.e. master is defined in /etc/hosts as xyz.xyz.xyz.xyz master on BOTH master and slave nodes. Then restart the namenode and check using netstat -tuplen to see that it is bound to the "external" IP address:
tcp 0 xyz.xyz.xyz.xyz:54310 0.0.0.0:* LISTEN 102 107203 -
and NOT the local IP 192.168.x.y or 127.0.x.y.
Answer 3:
I had the same trouble. @Magsol's solution worked, but it should be noted that the entry that needs to be commented out is
127.0.1.1 masterxyz
on the master machine, not the 127.0.1.1 on the slave, though I did that too. Also, you need to run stop-all.sh and start-all.sh for Hadoop, which is probably obvious.
Once you have restarted Hadoop, check the nodemaster here: http://masterxyz:50030/jobtracker.jsp and look at the number of nodes available for jobs.
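A sketch of that restart cycle on a Hadoop 1.x-era layout; the $HADOOP_HOME/bin path is an assumption, so adjust it to wherever the scripts live on your install:

# Run on the master after editing /etc/hosts
$HADOOP_HOME/bin/stop-all.sh    # stops HDFS and MapReduce daemons on master and slaves
$HADOOP_HOME/bin/start-all.sh   # starts them again, now binding to the non-loopback address
# Then browse to http://masterxyz:50030/jobtracker.jsp and check the "Nodes" count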
Answer 4:
Though this response is not the solution the author is looking for, other users might land on this page thinking otherwise, so if you are using AWS to set up your cluster, it is likely that ICMP security rules haven't been enabled in the AWS Security Groups page. Look at the following: Pinging EC2 instances
The above solved the connectivity issue from data nodes to master nodes. Ensure that you can ping between each instance.
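As a rough check under the same assumptions (security groups allow both ICMP and the Hadoop TCP ports; the addresses below are just the ones used earlier in this thread, and a netcat with -z support is assumed to be installed):

# From a data node: ICMP reachability first, then the NameNode RPC port itself
ping -c 3 192.168.1.10     # requires an ICMP rule in the security group
nc -zv 192.168.1.10 54310  # requires a TCP rule for the port; -z just probes, -v reports the result

ICMP and TCP are covered by separate security group rules, so a working ping does not by itself prove the Hadoop ports are open.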
Answer 5:
I also faced a similar issue. (I am using Ubuntu 17.0.) I kept only the entries for the master and slaves in the /etc/hosts file (on both master and slave machines):
127.0.0.1       localhost
192.168.201.101 master
192.168.201.102 slave1
192.168.201.103 slave2
Secondly, I edited /etc/hosts.allow (sudo gedit /etc/hosts.allow) and added the entry: ALL:192.168.201.
Thirdly, I disabled the firewall using sudo ufw disable.
Finally, I deleted both the namenode and datanode folders from all the nodes in the cluster, and reran:
$HADOOP_HOME/bin> hdfs namenode -format -force
$HADOOP_HOME/sbin> ./start-dfs.sh
$HADOOP_HOME/sbin> ./start-yarn.sh
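Before checking the cluster report (next step), it can be worth confirming the daemons actually came up on each node. A minimal check, assuming jps (shipped with the JDK) is on the PATH:

# Run on each node after start-dfs.sh / start-yarn.sh
jps
# Expect roughly NameNode, SecondaryNameNode and ResourceManager on the master,
# and DataNode and NodeManager on the slaves (the exact set depends on the configuration)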
To check the health report from the command line (which I would recommend):
$HADOOP_HOME/bin> hdfs dfsadmin -report
and I got all the nodes working correctly.
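One hedged note on the "delete the namenode and datanode folders" step: the exact directories depend on dfs.namenode.name.dir and dfs.datanode.data.dir in hdfs-site.xml, so it may be safer to look them up first. A sketch:

# Run on each node before deleting anything: find where HDFS actually keeps its metadata and blocks
$HADOOP_HOME/bin/hdfs getconf -confKey dfs.namenode.name.dir
$HADOOP_HOME/bin/hdfs getconf -confKey dfs.datanode.data.dir
# Only after confirming the paths, remove their contents and re-format as above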
Answer 6:
I am running a 2-node cluster.
192.168.0.24 master
192.168.0.26 worker2
I was facing the same problem of Retrying connect to server: master/192.168.0.24:54310 in my worker2 machine logs. But the people mentioned above encountered errors running the command telnet 192.168.0.24 54310, whereas in my case the telnet command worked fine. Then I checked my /etc/hosts files:
master /etc/hosts
127.0.0.1 localhost
192.168.0.24 ubuntu
192.168.0.24 master
192.168.0.26 worker2
worker2 /etc/hosts
127.0.0.1 localhost
192.168.0.26 ubuntu
192.168.0.24 master
192.168.0.26 worker2
When I hit http://localhost:50070 on the master, I saw Live nodes : 2. But when I clicked on it, I saw only one datanode, which was the master's. I checked jps both on master and worker2; the Datanode process was running on both machines.
Then, after several trials and errors, I realized that my master and worker2 machines had the same hostname "ubuntu". I changed worker2's hostname from "ubuntu" to "worker2" and removed the "ubuntu" entry from the worker2 machine.
Note: To change the hostname, edit /etc/hostname with sudo.
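A rough sketch of that rename on a systemd-based Ubuntu (hostnamectl is just an alternative to editing /etc/hostname by hand; the verification at the end mirrors the dfsadmin report used in the previous answer):

# Run on worker2
sudo hostnamectl set-hostname worker2   # or edit /etc/hostname with sudo and reboot
hostname                                # should now print worker2
# Remove/adjust the old "ubuntu" line in /etc/hosts, restart the datanode,
# then on the master: hdfs dfsadmin -report should list two distinct datanodes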
Bingo! It worked :) I was able to see two datanodes on the dfshealth UI page (localhost:50070).