spark.sql vs SqlContext

盖世英雄少女心 2021-01-20 16:04

I have used SQL in Spark, in this example:

results = spark.sql("select * from ventas")

where ventas is a DataFrame, previously registered as a view. What is the difference between running SQL this way and through SQLContext?

4 Answers
  • 2021-01-20 16:50
    • Next, create df1 as a Java object:

      df1 = sqlContext.sql("select col1, col2, col3 from table")

    • Next, create df2 as a DataFrame:

      df2 = spark.sql("select col1, col2, col3 from table")


    Check the difference using type(df1) and type(df2).
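    A minimal sketch of that check, assuming a local session and a hypothetical temp view named demo_table (the names and setup are not from the answer); note that in recent PySpark both calls return a pyspark.sql.DataFrame:

      from pyspark.sql import SparkSession, SQLContext

      spark = SparkSession.builder.master("local[*]").getOrCreate()
      sqlContext = SQLContext(spark.sparkContext)  # legacy pre-2.x entry point

      spark.range(3).createOrReplaceTempView("demo_table")

      df1 = sqlContext.sql("select * from demo_table")
      df2 = spark.sql("select * from demo_table")

      print(type(df1))  # <class 'pyspark.sql.dataframe.DataFrame'>
      print(type(df2))  # <class 'pyspark.sql.dataframe.DataFrame'>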

  • 2021-01-20 16:58

    Before Spark 2.x, SQLContext was built with the help of SparkContext, but in Spark 2.x SparkSession was introduced, which has the functionality of both HiveContext and SQLContext. So there is no need to create a SQLContext separately.

       # before Spark 2.x
       from pyspark import SparkContext
       from pyspark.sql import SQLContext

       sCont = SparkContext()
       sqlCont = SQLContext(sCont)

       # after Spark 2.x
       from pyspark.sql import SparkSession

       spark = SparkSession.builder.getOrCreate()
    
    
  • 2021-01-20 16:59

    From a user's perspective (not a contributor), I can only rehash what the developers provided in the upgrade notes:

    Upgrading From Spark SQL 1.6 to 2.0

    • SparkSession is now the new entry point of Spark that replaces the old SQLContext and HiveContext. Note that the old SQLContext and HiveContext are kept for backward compatibility. A new catalog interface is accessible from SparkSession - existing API on databases and tables access such as listTables, createExternalTable, dropTempView, cacheTable are moved here.

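    A minimal sketch of the new catalog interface mentioned in the note above, assuming a local session (the view name and data are illustrative):

      from pyspark.sql import SparkSession

      spark = SparkSession.builder.getOrCreate()

      spark.range(5).createOrReplaceTempView("ventas")  # register a temp view
      print(spark.catalog.listTables())                 # catalog access now lives on SparkSession
      spark.catalog.dropTempView("ventas")              # and so does view management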
    Before 2.0, creating a SqlContext required an extra call on top of the SparkContext that backs it. With SparkSession, they made things a lot more convenient.

    If you take a look at the source code, you'll notice that the SqlContext class is mostly marked @deprecated. Closer inspection shows that the most commonly used methods simply delegate to the underlying sparkSession.

    For more info, take a look at the developer notes, Jira issues, conference talks on Spark 2.0, and the Databricks blog.

  • 2021-01-20 17:00

    SparkSession is the preferred way of working with Spark now. The functionality of both HiveContext and SQLContext is available through this single object, SparkSession.

    You are using the latest syntax by creating a view with df.createOrReplaceTempView('ventas').
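    A minimal end-to-end sketch of that flow, with illustrative data and column names (only the view name "ventas" comes from the question):

      from pyspark.sql import SparkSession

      spark = SparkSession.builder.getOrCreate()

      df = spark.createDataFrame([(1, "libro"), (2, "lapiz")], ["id", "producto"])
      df.createOrReplaceTempView("ventas")           # register the DataFrame as a temp view
      results = spark.sql("select * from ventas")    # the query from the question
      results.show()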
