Spark: How can DataFrame be Dataset[Row] if DataFrames have a schema?

Submitted by 烈酒焚心 on 2020-04-29 21:56:00

Question


This article claims that a DataFrame in Spark is equivalent to a Dataset[Row], but this blog post shows that a DataFrame has a schema.

Take the example in the blog post of converting an RDD to a DataFrame: if DataFrame were the same thing as Dataset[Row], then converting an RDD to a DataFrame should be as simple as:

val rddToDF = rdd.map(value => Row(value))

But instead, the blog post shows that it takes this:

val rddStringToRowRDD = rdd.map(value => Row(value))                     // RDD[String] -> RDD[Row]
val dfschema = StructType(Array(StructField("value", StringType)))      // explicit schema
val rddToDF = sparkSession.createDataFrame(rddStringToRowRDD, dfschema) // RDD[Row] + schema -> DataFrame
val rDDToDataSet = rddToDF.as[String]                                   // DataFrame -> Dataset[String]

So it seems a DataFrame is actually a Dataset of rows plus a schema.


Answer 1:


In Spark 2.0, the source code itself says: type DataFrame = Dataset[Row]

It is Dataset[Row] simply by definition.

A Dataset also has a schema; you can print it using the printSchema() function. Normally Spark infers the schema, so you don't have to write it yourself - however, it's still there ;)
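For example, a minimal sketch of schema inference on a typed Dataset - the Person case class and the sparkSession val are just illustrative assumptions, not from the original post:

import sparkSession.implicits._

case class Person(name: String, age: Int)

// Spark derives the schema from the case class fields
val people = sparkSession.createDataset(Seq(Person("Ann", 30)))
people.printSchema()
// root
//  |-- name: string (nullable = true)
//  |-- age: integer (nullable = false)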

You can also call createTempView(name) on a Dataset and use it in SQL queries, just like with DataFrames.
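Continuing the sketch above (the view name is arbitrary):

// register the typed Dataset as a temporary view and query it with SQL
people.createTempView("people_view")
sparkSession.sql("SELECT name FROM people_view WHERE age > 21").show()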

In other words: Dataset = the DataFrame from Spark 1.5 + an encoder that converts rows to your classes. After the types were merged in Spark 2.0, DataFrame became just an alias for Dataset[Row], i.e. a Dataset without a specific encoder.
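To make the encoder point concrete, here is a sketch reusing the hypothetical Person and people from above:

// A DataFrame is Dataset[Row]: fields are accessed generically, by name or position
val df = people.toDF()
val name: String = df.first.getAs[String]("name")

// as[...] attaches the Person encoder, giving back a typed Dataset
val typedPeople = df.as[Person]
val first: Person = typedPeople.first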

About conversions: rdd.map() also returns an RDD; it never returns a DataFrame. You can do:

import sparkSession.implicits._

// Dataset[Row] = DataFrame, without a specific encoder.
// (Note: createDataFrame(rdd) would not compile for an RDD[String],
// since String is not a Product - toDF() uses the implicit String encoder instead.)
val rddToDF = rdd.toDF("value")
// And now Spark knows the encoder for String should be used - so it becomes Dataset[String]
val rDDToDataSet = rddToDF.as[String]

// However, it can be shortened to:
val dataset = sparkSession.createDataset(rdd)
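The shortcut works because the implicits bring an Encoder[String] into scope, so createDataset takes the RDD straight to a typed Dataset[String] without ever passing through untyped Row objects.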



Answer 2:


Note (in addition to T Gaweda's answer) that there is a schema associated with each Row (Row.schema). However, the schema is not set until the Row is integrated into a DataFrame (or Dataset[Row]):

scala> Row(1).schema
res12: org.apache.spark.sql.types.StructType = null

scala> val rdd = sc.parallelize(List(Row(1)))
rdd: org.apache.spark.rdd.RDD[org.apache.spark.sql.Row] = ParallelCollectionRDD[5] at parallelize at <console>:28

scala> val schema = StructType(List(StructField("a", IntegerType, true)))
schema: org.apache.spark.sql.types.StructType = StructType(StructField(a,IntegerType,true))

scala> spark.createDataFrame(rdd, schema).first
res15: org.apache.spark.sql.Row = [1]

scala> spark.createDataFrame(rdd, schema).first.schema
res16: org.apache.spark.sql.types.StructType = StructType(StructField(a,IntegerType,true))
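If you really need a free-standing Row that carries its schema, Spark has an internal catalyst class for that; this is a sketch only, since GenericRowWithSchema is not public API and may change between versions:

import org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema

// schema is set even though this Row never went through a DataFrame
val rowWithSchema = new GenericRowWithSchema(Array[Any](1), schema)
rowWithSchema.schema  // StructType(StructField(a,IntegerType,true))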


Source: https://stackoverflow.com/questions/39915086/spark-how-can-dataframe-be-datasetrow-if-dataframes-have-a-schema
