Setting SparkContext for PySpark

悲&欢浪女 2021-02-05 11:17

I am a newbie with Spark and PySpark. I would appreciate it if somebody could explain what exactly the SparkContext parameter does, and how I could set it.

3 Answers
  • 2021-02-05 11:46

    See here: the SparkContext represents your interface to a running Spark cluster manager. In other words, you will have already defined one or more running environments for Spark (see the installation/initialization docs), detailing the nodes to run on, etc. You start a SparkContext object with a configuration that tells it which environment to use and, for example, the application name. All further interaction, such as loading data, happens as methods of the context object.
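
    A minimal sketch of that idea, assuming a local master, a placeholder app name, and a placeholder input file:

    from pyspark import SparkConf, SparkContext

    # The configuration tells the context which environment (master) to use
    # and the application name; both values here are placeholders.
    conf = SparkConf().setMaster("local[4]").setAppName("ExampleApp")
    sc = SparkContext(conf=conf)

    # Further interaction happens through methods of the context object,
    # e.g. loading a text file into an RDD ("data.txt" is a placeholder path).
    rdd = sc.textFile("data.txt")
    print(rdd.count())

    sc.stop()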

    For simple examples and testing, you can run the Spark cluster "locally" and skip much of the detail above, e.g.,

    ./bin/pyspark --master local[4]
    

    will start an interpreter with a context already set to use four threads on your own CPU.
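
    Inside that shell the context is already available as sc, so a quick sanity check might look like this (the numbers are arbitrary):

    sc.parallelize(list(range(1000))).count()   # distributes the work over the four local threads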

    In a standalone app, to be run with spark-submit:

    from pyspark import SparkContext

    # master URL ("local") and application name
    sc = SparkContext("local", "Simple App")
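
    For example, assuming the script above is saved as simple_app.py (the filename is just a placeholder), you could run it with:

    ./bin/spark-submit simple_app.py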
    
  • 2021-02-05 11:49

    The first thing a Spark program must do is to create a SparkContext object, which tells Spark how to access a cluster. To create a SparkContext you first need to build a SparkConf object that contains information about your application.

    If you are running pyspark, i.e. the shell, then Spark automatically creates the SparkContext object for you with the name sc. But if you are writing your own Python program, you have to do something like

    from pyspark import SparkContext
    sc = SparkContext(appName="test")
    

    Any configuration goes into this SparkContext object (via a SparkConf), such as setting the executor memory or the number of cores.
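
    A sketch of setting such options programmatically (the values shown are placeholders):

    from pyspark import SparkConf, SparkContext

    # Build a SparkConf carrying application settings, then hand it to the context.
    conf = (SparkConf()
            .setAppName("test")
            .set("spark.executor.memory", "2g")
            .set("spark.executor.cores", "1"))

    sc = SparkContext(conf=conf)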

    These parameters can also be passed from the shell when invoking spark-submit, for example

    ./bin/spark-submit --class org.apache.spark.examples.SparkPi \
    --master yarn-cluster \
    --num-executors 3 \
    --driver-memory 4g \
    --executor-memory 2g \
    --executor-cores 1 \
    lib/spark-examples*.jar \
    10
    

    To pass parameters to pyspark, use something like this

    ./bin/pyspark --num-executors 17 --executor-cores 5 --executor-memory 8G
    
  • 2021-02-05 12:03

    The SparkContext object lives in the driver program. It coordinates the processes across the cluster that your application will run on.

    When you run the PySpark shell, a default SparkContext object is automatically created and made available as the variable sc.

    If you create a standalone application, you will need to initialize the SparkContext object in your script, as below:

    from pyspark import SparkContext
    sc = SparkContext("local", "My App")
    

    Here the first parameter is the master URL of the cluster and the second parameter is the name of your app.
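
    For instance, to point at a standalone Spark cluster rather than local mode (the host name and port here are hypothetical):

    from pyspark import SparkContext

    # "spark://<host>:7077" is the standalone master URL format; the host is a placeholder.
    sc = SparkContext("spark://master-host:7077", "My App")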

    I have written an article that goes through the basics of PySpark and Apache Spark which you may find useful: https://programmathics.com/big-data/apache-spark/apache-installation-and-building-stand-alone-applications/

    DISCLAIMER: I am the creator of that website.
