I have used SQL in Spark, for example:
results = spark.sql("select * from ventas")
where ventas is a DataFrame that was previously registered in the catalog.
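As context, here is a minimal, self-contained sketch of how such a DataFrame might be registered before querying it; the sample data and column names are hypothetical:

```python
from pyspark.sql import SparkSession

# Entry point in Spark 2.x+ (see the note on SparkSession below)
spark = SparkSession.builder.appName("ventas_example").getOrCreate()

# Hypothetical data; in practice this would come from a file or table
ventas = spark.createDataFrame(
    [(1, "producto_a", 100.0), (2, "producto_b", 250.0)],
    ["id", "producto", "importe"],
)

# Register the DataFrame as a temporary view so SQL can reference it by name
ventas.createOrReplaceTempView("ventas")

results = spark.sql("select * from ventas")
results.show()
```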
Before Spark 2.x, a SQLContext was built with the help of a SparkContext, but in Spark 2.x SparkSession was introduced, which has the functionality of both HiveContext and SQLContext. So there is no need to create a SQLContext separately.
**Before Spark 2.x:**
from pyspark import SparkContext
from pyspark.sql import SQLContext
sCont = SparkContext()
sqlCont = SQLContext(sCont)
**After Spark 2.x:**
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
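And since SparkSession also covers what HiveContext used to do, here is a sketch of enabling Hive support; this assumes a Hive-enabled Spark build, and the app name is arbitrary:

```python
from pyspark.sql import SparkSession

# SparkSession with Hive support replaces the old HiveContext
spark = (
    SparkSession.builder
    .appName("hive_example")  # arbitrary name, for illustration only
    .enableHiveSupport()      # access Hive tables / HiveQL, like HiveContext did
    .getOrCreate()
)

# The underlying SparkContext is still available if needed
sc = spark.sparkContext
```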