Can I add arguments to python code when I submit spark job?


Question


I'm trying to use spark-submit to execute my Python code in a Spark cluster.

Generally, we run spark-submit with Python code like below:

# Run a Python application on a cluster
./bin/spark-submit \
  --master spark://207.184.161.138:7077 \
  my_python_code.py \
  1000

But I want to run my_python_code.py by passing several arguments. Is there a smart way to pass arguments?


Answer 1:


Yes: Put this in a file called args.py

import sys
print(sys.argv)

If you run

spark-submit args.py a b c d e 

You will see:

['/spark/args.py', 'a', 'b', 'c', 'd', 'e']



Answer 2:


Even though sys.argv is a good solution, I still prefer this more proper way of handling command-line args in my PySpark jobs:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--ngrams", help="some useful description.")
args = parser.parse_args()
if args.ngrams:
    ngrams = args.ngrams

This way, you can launch your job as follows:

spark-submit job.py --ngrams 3

More information about the argparse module can be found in the Argparse Tutorial.
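As a fuller sketch of how this can sit inside a PySpark script (the SparkSession boilerplate and the app name are illustrative additions; only the --ngrams flag comes from the answer above):

import argparse

from pyspark.sql import SparkSession

# Everything placed after the script name on the spark-submit command
# line is forwarded to the script unchanged, so argparse sees it as usual.
parser = argparse.ArgumentParser()
parser.add_argument("--ngrams", type=int, default=2,
                    help="n-gram size to compute.")
args = parser.parse_args()

spark = SparkSession.builder.appName("ngrams-job").getOrCreate()
print("Running with ngrams =", args.ngrams)
spark.stop()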




Answer 3:


Ah, it's possible. http://caen.github.io/hadoop/user-spark.html

# Run as a Hadoop (YARN) job on your_queue, with 10 executors,
# 12 GB of memory and 2 CPU cores per executor
spark-submit \
    --master yarn-client \
    --queue <your_queue> \
    --num-executors 10 \
    --executor-memory 12g \
    --executor-cores 2 \
    job.py ngrams/input ngrams/output
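Inside job.py, the two trailing values can then be read back with sys.argv; a minimal sketch, assuming they are an input and an output path as the command above suggests:

import sys

# Positional arguments placed after job.py on the spark-submit line
input_path = sys.argv[1]   # ngrams/input
output_path = sys.argv[2]  # ngrams/output
print(input_path, output_path)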



Answer 4:


You can pass arguments from the spark-submit command and then access them in your code in the following way:

sys.argv[1] will get you the first argument, sys.argv[2] the second argument, and so on. Refer to the example below.

The code below takes the arguments that you pass in the spark-submit command:

import sys

# First argument: how many table names follow; the remaining arguments are the names
n = int(sys.argv[1])
tables = []
for i in range(n):
    tables.append(sys.argv[2 + i])
print(tables)

Save the above file as PysparkArg.py and execute the spark-submit command below:

spark-submit PysparkArg.py 3 table1 table2 table3

Output:

['table1', 'table2', 'table3']

This piece of code can be used in PySpark jobs that need to fetch multiple tables from a database: the number of tables to be fetched and the table names are supplied by the user when executing the spark-submit command.
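For instance, a minimal sketch of that pattern; the spark.read.table() call and the assumption that the tables are registered in the session's catalog are illustrative, not part of the original answer:

import sys

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("fetch-tables").getOrCreate()

# spark-submit PysparkArg.py 3 table1 table2 table3
n = int(sys.argv[1])
table_names = sys.argv[2:2 + n]

# Load each requested table by name from the session's catalog
dataframes = {name: spark.read.table(name) for name in table_names}
for name, df in dataframes.items():
    print(name, df.count())

spark.stop()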



Source: https://stackoverflow.com/questions/32217160/can-i-add-arguments-to-python-code-when-i-submit-spark-job
