Using Spark 2.0, I'm seeing that it is possible to turn a dataframe of rows into a dataframe of case classes. When I try to do so, I'm greeted with a message stating to import spark.implicits._.
Spark uses spark as the identifier for the SparkSession, and this is what causes the confusion. If you created your session with something like,
import org.apache.spark.sql.SparkSession

val ss = SparkSession
  .builder()
  .appName("test")
  .master("local[2]")
  .getOrCreate()
The correct way to import the implicits would be:
import ss.implicits._
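For example, here is a minimal sketch of what that import gives you (the Person case class and the sample data are mine, not from the question):

case class Person(name: String, age: Int)

// toDS() is only available because ss.implicits._ is in scope;
// it also derives the Encoder[Person] needed to build the Dataset.
val people = Seq(Person("Alice", 29), Person("Bob", 31)).toDS()
people.show()

Note that in compiled code the case class should be defined at the top level (not inside a method), or the encoder derivation will fail.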
Let me know if this helps. Cheers.
There is no package called spark.implicits.
The spark here refers to the SparkSession. If you are inside the REPL, the session is already defined as spark, so you can just type:
import spark.implicits._
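A quick sanity check in the spark-shell (a sketch; the Seq is just throwaway data):

scala> import spark.implicits._
import spark.implicits._

scala> Seq(1, 2, 3).toDS().show()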
If you have defined your own SparkSession somewhere in your code, then adjust it accordingly:
import org.apache.spark.sql.SparkSession

val mySpark = SparkSession
  .builder()
  .appName("Spark SQL basic example")
  .config("spark.some.config.option", "some-value")
  .getOrCreate()
// For implicit conversions like converting RDDs to DataFrames
import mySpark.implicits._
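With the implicits in scope, the conversion the question describes works via as[...]. A sketch (the Person case class and the input path are assumptions, not from the original post):

case class Person(name: String, age: Long)

// A DataFrame of generic Rows, e.g. loaded from JSON
val df = mySpark.read.json("people.json")

// .as[Person] needs the implicit Encoder that mySpark.implicits._ provides;
// the DataFrame's column names must match the case class fields.
val people = df.as[Person]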