Question
On the master, $HADOOP_HOME is /home/a/hadoop; on the slave, $HADOOP_HOME is /home/b/hadoop.
When I run start-all.sh on the master, the master's NameNode starts successfully, but the slave's DataNode fails to start with the following messages:
b@192.068.0.2: bash: line 0: cd: /home/b/hadoop/libexec/..: No such file or directory
b@192.068.0.2: bash: /home/b/hadoop/bin/hadoop-daemon.sh: No such file or directory
Any idea how to specify the slave's $HADOOP_HOME in the master's configuration?
Answer 1:
I don't know of a way to configure different home directories for the various slaves from the master, but the Hadoop FAQ says that the Hadoop framework does not require ssh and that the DataNode and TaskTracker daemons can be started manually on each node.
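For example, you could log in to each slave and start its daemons directly with that node's own installation. A minimal sketch, assuming a Hadoop 1.x-style layout where hadoop-daemon.sh lives under bin/ and using the paths from the question:

# On the slave, run as user b, using the slave's own $HADOOP_HOME
export HADOOP_HOME=/home/b/hadoop
$HADOOP_HOME/bin/hadoop-daemon.sh start datanode
$HADOOP_HOME/bin/hadoop-daemon.sh start tasktracker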
I would suggest writing your own scripts to start things, taking into account the specific environments of your nodes (one approach is sketched below). However, make sure to include all the slaves in the master's slaves file; this seems to be necessary, as the heartbeats alone are not enough for the master to add slaves.
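A minimal sketch of such a master-side script, assuming the master can ssh to each slave. The slaves.txt file and its "user@host hadoop-path" format are hypothetical, not a standard Hadoop file; they simply let each slave carry its own install path:

#!/bin/bash
# Hypothetical slaves.txt, one "user@host /path/to/hadoop" pair per line, e.g.:
#   b@192.168.0.2 /home/b/hadoop
while read node hadoop_home; do
  # Start the DataNode using that slave's own installation directory
  ssh "$node" "$hadoop_home/bin/hadoop-daemon.sh start datanode"
done < slaves.txt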
Source: https://stackoverflow.com/questions/12207795/hadoop-master-cannot-start-slave-with-different-hadoop-home