Question
I'm trying to set up Hadoop on my Amazon instances, in a 2-node cluster. Each instance has a public DNS name, which I use to reference them. So in the /etc/hosts files on both machines I append lines like this:
{public dns of 1st instance} node1
{public dns of 2nd instance} node2
I'm also able to ssh into each instance from the other by simply doing:
ssh {public dns of the other instance}
In the hadoop/conf/slaves file on the first instance I have:
localhost
node2
When I run the bin/start-dfs.sh script, it is able to start the namenode, datanode, and secondary namenode on the master, but it says:
node2: ssh: Could not resolve hostname node2: Name or service not known
The same is printed if I try:
ssh node2
I guess the question is: how do I tell it to associate node2 with the public DNS of the second instance? Is it not enough to append the
{public dns of 2nd instance} node2
line to the /etc/hosts file? Do I have to reboot the instances?
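One way to see whether an /etc/hosts line is actually being picked up is to query the system resolver directly. A minimal sketch, using a temporary hosts-style file and placeholder private IPs (172.31.0.11/172.31.0.12 are assumptions, not the asker's real addresses):

```shell
#!/bin/sh
# Build a hosts-style file with the two mappings from the question
# and confirm the alias "node2" maps to an address.
hosts=$(mktemp)
cat > "$hosts" <<'EOF'
172.31.0.11 node1
172.31.0.12 node2
EOF

# Print the IP mapped to node2; an empty result would mean the entry
# is missing or malformed (e.g. a typo in the alias column).
awk '$2 == "node2" { print $1 }' "$hosts"
```

On the real machines the equivalent check is `getent hosts node2`, which reads /etc/hosts through the normal resolver path; /etc/hosts is re-read on every lookup, so no reboot is needed after editing it.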
Answer 1:
/etc/hosts acts like a local DNS, for when you don't have a real DNS name associated with an IP address.
Do you really need a {public dns of 1st instance} node1
mapping if you can use {public dns of 1st instance} directly in the slaves and masters files?
Moreover, it's better to use the private IP addresses of the Amazon instances instead of the public ones. You can run ifconfig
in the terminal of each instance to determine its private IP address, if any. It will typically start with 10.x.x.x, 172.x.x.x, or 192.x.x.x. You can then map those instead in /etc/hosts on each of the Amazon instances.
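The 10.x/172.x/192.x ranges mentioned above are the RFC 1918 private blocks (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16); a default EC2 VPC typically hands out 172.31.x.x addresses. A small sketch (my addition, not from the answer) that classifies an address, handy for double-checking that what you put in /etc/hosts really is the private one:

```shell
#!/bin/sh
# Classify an IPv4 address as RFC 1918 private or public.
is_private() {
  case "$1" in
    10.*)                                   echo private ;;
    192.168.*)                              echo private ;;
    172.1[6-9].*|172.2[0-9].*|172.3[0-1].*) echo private ;;
    *)                                      echo public ;;
  esac
}

is_private 172.31.0.12   # typical EC2 default-VPC address -> private
is_private 54.210.8.5    # a public-style EC2 address      -> public
```

On the instance itself, `hostname -I` (or the "inet" line in the ifconfig output) shows the private address to feed into this check.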
So, your /etc/hosts in each machine should look something like -
Machine-1:
{IP_address_1st_instance} node1
{IP_address_2nd_instance} node2
Machine-2:
{IP_address_1st_instance} node1
{IP_address_2nd_instance} node2
This is so that the Amazon instances (machines) can resolve each other, if you plan to map them at all.
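The key point in the layout above is that both machines carry the same two mapping lines; if they differ, one node resolves the aliases and the other prints "Name or service not known". A sketch of that consistency check, with placeholder IPs (on a real cluster you would compare the actual files, e.g. via `ssh node2 cat /etc/hosts`):

```shell
#!/bin/sh
# Simulate the /etc/hosts cluster entries of the two machines and verify
# they agree. The IPs are placeholders standing in for the real private
# addresses.
m1=$(mktemp); m2=$(mktemp)
printf '172.31.0.11 node1\n172.31.0.12 node2\n' > "$m1"  # machine 1's entries
printf '172.31.0.11 node1\n172.31.0.12 node2\n' > "$m2"  # machine 2's entries

# Identical entries on both sides => each machine resolves the other.
cmp -s "$m1" "$m2" && echo "entries consistent" || echo "entries differ"
```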
Source: https://stackoverflow.com/questions/18134231/ssh-could-not-resolve-hostname-name-or-service-not-known