pyspark program throwing name 'spark' is not defined

Submitted by 半城伤御伤魂 on 2019-12-23 15:59:13

Question


The program below throws the error: name 'spark' is not defined

Traceback (most recent call last):
  File "pgm_latest.py", line 232, in <module>
    sconf = SparkConf().set(spark.dynamicAllocation.enabled,true)
        .set(spark.dynamicAllocation.maxExecutors,300)
        .set(spark.shuffle.service.enabled,true)
        .set(spark.shuffle.spill.compress,true)
NameError: name 'spark' is not defined

The job is submitted with:

spark-submit --driver-memory 12g --master yarn-cluster --executor-memory 6g --executor-cores 3 pgm_latest.py

Code

#!/usr/bin/python
import sys
import os
from datetime import *
from time import *
from pyspark.sql import *
from pyspark import SparkContext
from pyspark import SparkConf

sc = SparkContext()
sqlCtx = HiveContext(sc)

sqlCtx.sql('SET spark.sql.autoBroadcastJoinThreshold=104857600')
sqlCtx.sql('SET Tungsten=true')
sqlCtx.sql('SET spark.sql.shuffle.partitions=500')
sqlCtx.sql('SET spark.sql.inMemoryColumnarStorage.compressed=true')
sqlCtx.sql('SET spark.sql.inMemoryColumnarStorage.batchSize=12000')
sqlCtx.sql('SET spark.sql.parquet.cacheMetadata=true')
sqlCtx.sql('SET spark.sql.parquet.filterPushdown=true')
sqlCtx.sql('SET spark.sql.hive.convertMetastoreParquet=true')
sqlCtx.sql('SET spark.sql.parquet.binaryAsString=true')
sqlCtx.sql('SET spark.sql.parquet.compression.codec=snappy')

## Main functionality
def main(sc):
    # ... job logic (elided in the original post) ...
    pass

if __name__ == '__main__':

    # Configure OPTIONS
    sconf = SparkConf() \
        .set("spark.dynamicAllocation.enabled", "true") \
        .set("spark.dynamicAllocation.maxExecutors", 300) \
        .set("spark.shuffle.service.enabled", "true") \
        .set("spark.shuffle.spill.compress", "true")

    sc = SparkContext(conf=sconf)

    # Execute Main functionality
    main(sc)
    sc.stop()

Answer 1:


I think you are using a Spark version older than 2.x, in which the spark session object does not exist yet.

Instead of this:

spark.createDataFrame(..)

use this:

df = sqlContext.createDataFrame(...)
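
In Spark 1.x the entry points are SparkContext plus a SQLContext or HiveContext, so sqlContext has to be created explicitly before it can be used. A minimal sketch of that flow (the sample rows are made up purely for illustration):

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext()
sqlContext = SQLContext(sc)

# Spark 1.x style: build the DataFrame through sqlContext, not a spark session
df = sqlContext.createDataFrame([(1, 'a'), (2, 'b')], ['id', 'letter'])
df.show()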



Answer 2:


For example, if you know where Spark is installed, e.g.:

/home/user/spark/spark-2.4.0-bin-hadoop2.7/
├── LICENSE
├── NOTICE
├── R
├── README.md
├── RELEASE
├── bin
├── conf
├── data
├── examples
├── jars
├── kubernetes
├── licenses
├── python
├── sbin
└── yarn

you can explicitly pass the path to the Spark installation to the init method:

import findspark
findspark.init("/home/user/spark/spark-2.4.0-bin-hadoop2.7/")



Answer 3:


The findspark module will come in handy here.

Install the module with the following:

python -m pip install findspark

Make sure the SPARK_HOME environment variable is set.

Usage:

import findspark
findspark.init()
import pyspark  # import this only after findspark.init()
from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession
sc = SparkContext.getOrCreate()
spark = SparkSession(sc)

print(spark)
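
On Spark 2.x the same setup is usually written through the builder, which creates the underlying context for you. An equivalent sketch, again assuming SPARK_HOME is set:

import findspark
findspark.init()

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("example").getOrCreate()
sc = spark.sparkContext  # the underlying SparkContext, if still needed
print(spark)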


Source: https://stackoverflow.com/questions/38403113/pyspark-program-throwing-name-spark-is-not-defined
