Does spark-submit automatically upload the jar to the cluster?

北荒 2020-12-13 11:26

I'm trying to submit a Spark app from my local machine's terminal to my cluster, using --master yarn-cluster. I need to run the driver program on my cluster too, not on the machine I submit the application from, i.e. my local machine.

3 Answers
  • 2020-12-13 11:32

    Try adding the --jars option before your /path/to/jar/file:

    spark-submit --jars /tmp/test.jar /path/to/jar/file
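
    For context, a fuller invocation might look like the sketch below; the main class and jar paths are hypothetical placeholders:

    # --jars lists extra dependency jars (comma-separated);
    # the last argument is the application jar itself
    spark-submit \
      --class com.example.MyApp \
      --master yarn-cluster \
      --jars /tmp/test.jar \
      /path/to/jar/file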

  • 2020-12-13 11:46

    Yes and no. It depends on what you mean. Spark deploys the .jar to the nodes in the cluster. However, it won't upload your .jar file from your local machine to the cluster.

    You can find more info in the Submitting Applications page. As you can see, in the arguments you pass to spark-submit, there is one that needs to be globally visible: the application-jar.

    application-jar: Path to a bundled jar including your application and all dependencies. The URL must be globally visible inside of your cluster, for instance, an hdfs:// path or a file:// path that is present on all nodes.

    As far as I understand, what you want is to use yarn-client, not yarn-cluster. This will run the driver on the client (i.e., the machine from which you call spark-submit, for example your laptop), without the need to copy the .jar file to the cluster. More about this here.
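
    As a rough sketch (the class name and jar path are hypothetical), a yarn-client submission can point at a jar that exists only on your machine, since the driver runs there:

    # the driver runs on the submitting machine, so a local jar path is fine
    spark-submit \
      --master yarn-client \
      --class com.example.MyApp \
      /home/me/target/my-app.jar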

  • 2020-12-13 11:48

    I see you are quoting the spark-submit page from the Spark docs, but I would spend a lot more time on the Running Spark on YARN page. Bottom line, look at:

    There are two deploy modes that can be used to launch Spark applications on YARN. In yarn-cluster mode, the Spark driver runs inside an application master process which is managed by YARN on the cluster, and the client can go away after initiating the application. In yarn-client mode, the driver runs in the client process, and the application master is only used for requesting resources from YARN.

    Further, you note: "I need to run the driver program on my cluster too, not on the machine I submit the application from, i.e. my local machine."

    So I agree with you that you are right to run --master yarn-cluster instead of --master yarn-client.

    (and one comment notes what might just be a syntax error where you dropped "assembly.jar", but I think this will apply as well...)

    Some of the basic assumptions about non-YARN implementations change a lot when YARN is introduced, mostly related to Classpaths and the need to push jars to the workers.

    From an email on the Apache Spark User list:

    YARN cluster mode. Spark submit does upload your jars to the cluster. In particular, it puts the jars in HDFS so your driver can just read from there. As in other deployments, the executors pull the jars from the driver.
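
    If you want to see this for yourself, one way (assuming a default configuration, where Spark on YARN stages files under .sparkStaging in your HDFS home directory) is to list that directory while an application is running:

    # list the per-application staging directories that spark-submit created
    # (path assumes the default staging location; adjust for your cluster)
    hadoop fs -ls /user/$USER/.sparkStaging/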

    So finally, from the Apache Spark YARN doc:

    Ensure that HADOOP_CONF_DIR or YARN_CONF_DIR points to the directory which contains the (client side) configuration files for the Hadoop cluster. These configs are used to write to HDFS and connect to the YARN ResourceManager.
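
    For example (the configuration directory below is a typical location, but use whichever path holds your cluster's client-side Hadoop configs):

    # point spark-submit at the Hadoop/YARN client configuration
    export HADOOP_CONF_DIR=/etc/hadoop/conf
    spark-submit --master yarn-cluster --class com.example.MyApp /path/to/app.jar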


    NOTE: I only see you adding a single JAR; if there's a need to add other JARs, there's a special note about doing that with YARN:

    In yarn-cluster mode, the driver runs on a different machine than the client, so SparkContext.addJar won’t work out of the box with files that are local to the client. To make files on the client available to SparkContext.addJar, include them with the --jars option in the launch command.

    That page in the link has some examples.
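
    As a minimal sketch with hypothetical jar names, extra jars go in a single comma-separated --jars argument on the launch command, which makes them available to SparkContext.addJar on the cluster:

    # extra client-local jars are uploaded and distributed along with the app jar
    spark-submit \
      --master yarn-cluster \
      --class com.example.MyApp \
      --jars /local/libs/dep1.jar,/local/libs/dep2.jar \
      /path/to/app.jar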


    And of course you downloaded or built the YARN-specific version of Spark.


    Background: in a standalone cluster deployment using spark-submit and the option --deploy-mode cluster, yes, you do need to make sure every worker node has access to all the dependencies; Spark will not push them to the cluster. This is because in "standalone cluster" mode, with Spark as the job manager, you don't know which node the driver will run on! But that doesn't apply to your case.

    That said, depending on the size of the jars you are uploading, I would still explicitly put the jars on each node, or make them "globally available" via HDFS, for another reason from the docs:

    The Advanced Dependency Management section seems to present the best of both worlds, but also a great reason for manually pushing your jars out to all nodes:

    local: - a URI starting with local:/ is expected to exist as a local file on each worker node. This means that no network IO will be incurred, and works well for large files/JARs that are pushed to each worker, or shared via NFS, GlusterFS, etc.

    But I assume that local:/... would change to hdfs:/ ... not sure on that one.
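
    To illustrate (the jar locations are hypothetical): if you had already pushed a large dependency to the same path on every node, or uploaded it to HDFS yourself, the --jars URIs would look roughly like this:

    # local:/ means the jar already exists at this path on every node (no network IO)
    spark-submit --master yarn-cluster --class com.example.MyApp \
      --jars local:/opt/libs/big-dep.jar /path/to/app.jar

    # hdfs:// means the jar is read from a globally visible HDFS location
    spark-submit --master yarn-cluster --class com.example.MyApp \
      --jars hdfs:///libs/big-dep.jar /path/to/app.jar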
