Hadoop Client Node Configuration

终归单人心 2020-12-09 05:06

Assume that there is a Hadoop cluster of 20 machines. Of those 20 machines, 18 are slaves, machine 19 is for the NameNode, and machine 20 is for the JobTracker. Where should the Hadoop client be installed and configured?

3 Answers
  • 2020-12-09 05:21

    Typically, if you have a multi-tenant cluster (which most Hadoop clusters are bound to be), then ideally no one other than the administrators has access to the machines that are part of the cluster.

    Developers set up their own "edge nodes". Edge nodes basically have the Hadoop libraries installed and the client configuration deployed to them (the various XML files, such as core-site.xml, mapred-site.xml, and hdfs-site.xml, which tell the local installation where the NameNode, JobTracker, ZooKeeper, etc. are). But the edge node has no role as such in the cluster, i.e. no persistent Hadoop services run on it.
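
    For illustration, here is a minimal sketch of what that client configuration amounts to, assuming MRv1-style property names (since the question mentions a JobTracker) and placeholder hostnames namenode.example.com and jobtracker.example.com for machines 19 and 20:

        # What the client-side config files on the edge node carry
        # (hostnames and ports are placeholders for machines 19 and 20):
        #
        #   core-site.xml    ->  fs.default.name    = hdfs://namenode.example.com:8020
        #   mapred-site.xml  ->  mapred.job.tracker = jobtracker.example.com:8021
        #
        # A quick connectivity check with a fully qualified HDFS URI,
        # which works even before the local config files are in place:
        hadoop fs -ls hdfs://namenode.example.com:8020/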

    Now, in the case of a small development-environment setup, you can use any of the participating nodes of the cluster to run jobs or shell commands.

    So, depending on your requirements, the definition and placement of the client varies.

  • 2020-12-09 05:31

    I am new to Hadoop, so this is what I understood:

    If your data upload is not an actual service of the cluster (one that should be running on an edge node of the cluster), then you can configure your own computer to work as an edge node.

    An edge node doesn't need to be known by the cluster (except for security purposes), as it neither stores data nor runs compute jobs. This is basically what it means to be an edge node: it is connected to the Hadoop cluster but does not participate in it.

    In case it can help someone, here is what I have done to connect to a cluster that I don't administer:

    • get an account on the cluster, say myaccount
    • create an account on your computer with the same name: myaccount
    • configure your computer to access the cluster machines (SSH without a passphrase, registered IP, ...)
    • get the Hadoop configuration files from an edge node of the cluster
    • get a Hadoop distribution (e.g. from the Apache Hadoop releases page)
    • uncompress it where you want, say /home/myaccount/hadoop-x.x
    • add the following environment variables: JAVA_HOME and HADOOP_HOME (/home/myaccount/hadoop-x.x)
    • (if you'd like) add the Hadoop bin directory to your path: export PATH=$HADOOP_HOME/bin:$PATH
    • replace your Hadoop configuration files with those you got from the edge node. With Hadoop 2.5.2, that is the folder $HADOOP_HOME/etc/hadoop
    • also, I had to change the value of a couple of $JAVA_HOME exports defined in the conf files. To find them, use: grep -r "export.*JAVA_HOME"

    Then run hadoop fs -ls /, which should list the root directory of the cluster's HDFS.
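
    Putting those steps together, a rough shell session might look like the following sketch; the edge-node hostname, JDK path, and hadoop-x.x version are placeholders:

        # passwordless SSH to the cluster machines (hostname is hypothetical)
        ssh-keygen -t rsa
        ssh-copy-id myaccount@edgenode.example.com

        # unpack the distribution and set up the environment
        tar -xzf hadoop-x.x.tar.gz -C /home/myaccount
        export JAVA_HOME=/usr/lib/jvm/default-java    # adjust to your JDK
        export HADOOP_HOME=/home/myaccount/hadoop-x.x
        export PATH=$HADOOP_HOME/bin:$PATH

        # replace the local config with the files taken from the edge node
        cp edge-node-conf/*.xml "$HADOOP_HOME/etc/hadoop/"

        # find hard-coded JAVA_HOME values left over in the copied files
        grep -r "export.*JAVA_HOME" "$HADOOP_HOME/etc/hadoop"

        # verify: this should list the root of the cluster's HDFS
        hadoop fs -ls /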

  • 2020-12-09 05:32

    I recommend this article. "Client machines have Hadoop installed with all the cluster settings, but are neither a Master nor a Slave. Instead, the role of the Client machine is to load data into the cluster, submit MapReduce jobs describing how that data should be processed, and then retrieve or view the results of the job when it's finished."
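
    To make that role concrete, a typical session on such a client machine might look like this sketch (the job JAR, class name, and paths are hypothetical):

        # load data into the cluster's HDFS
        hadoop fs -mkdir -p /user/myaccount/input
        hadoop fs -put local-data.txt /user/myaccount/input/

        # submit a MapReduce job describing how that data should be processed
        hadoop jar my-job.jar com.example.MyJob \
            /user/myaccount/input /user/myaccount/output

        # retrieve or view the results when the job is finished
        hadoop fs -cat /user/myaccount/output/part-*
        hadoop fs -get /user/myaccount/output ./results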
