Question
I have updated the /conf/slaves file on the Hadoop master node with the hostnames of my slave nodes, but I'm not able to start the slaves from the master. I have to start each slave individually, and only then is my 5-node cluster up and running. How can I start the whole cluster with a single command from the master node?
Also, a SecondaryNameNode is running on every slave. Is that a problem? If so, how can I remove it from the slaves? I think there should be only one SecondaryNameNode in a cluster with one NameNode; am I right?
Thank you!
Answer 1:
In Apache Hadoop 3.x, list your slave (worker) nodes in the $HADOOP_HOME/etc/hadoop/workers file, one hostname per line. (In Hadoop 2.x the equivalent file is $HADOOP_HOME/etc/hadoop/slaves; the /conf/slaves path you edited is the old Hadoop 1.x location, so newer releases never read it.)
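A minimal sketch of what that file looks like, assuming a default $HADOOP_HOME layout and placeholder hostnames slave1 through slave4 for your four worker machines:

    # $HADOOP_HOME/etc/hadoop/workers  (named "slaves" in Hadoop 2.x)
    # One worker hostname per line; these names are placeholders
    slave1
    slave2
    slave3
    slave4

With passwordless SSH configured from the master to every listed host, running $HADOOP_HOME/sbin/start-dfs.sh (or start-all.sh) once on the master should then launch the daemons on all the workers in one go.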
Source: https://stackoverflow.com/questions/48910606/start-all-sh-and-start-dfs-sh-from-master-node-do-not-start-the-slave-node-s