Question
Kryo helps improve the performance of Spark applications through its efficient serialization approach.
I'm wondering whether Kryo will also help in the case of SparkSQL, and how I should use it.
In SparkSQL applications we do a lot of column-based operations like df.select($"c1", $"c2"), and the schema of a DataFrame Row is not quite static.
I'm not sure how to register one or several serializer classes for this use case.
For example:
case class Info(name: String, address: String)
...
import spark.implicits._ // needed for .toDF and the $"col" column syntax

val df = spark.sparkContext.textFile(args(0))
  .map(_.split(','))
  .filter(_.length >= 2)
  .map(e => Info(e(0), e(1)))
  .toDF
df.select($"name") ... // followed by subsequent analysis
df.select($"address") ... // followed by subsequent analysis
I don't think it's a good idea to define case classes for each select.
Or does it help if I register Info, like registerKryoClasses(Array(classOf[Info]))?
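For reference, a minimal sketch of what that registration would look like, assuming the Info case class above and a hypothetical application name; per the SparkConf documentation, registerKryoClasses also switches spark.serializer to the Kryo serializer:

import org.apache.spark.SparkConf

// Registers Info with Kryo; registerKryoClasses also sets spark.serializer
// to KryoSerializer. This affects RDD-level serialization (shuffles,
// serialized caching), not DataFrame column operations themselves.
val conf = new SparkConf()
  .setAppName("kryo-registration-sketch") // hypothetical name
  .registerKryoClasses(Array(classOf[Info]))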
Answer 1:
According to Spark's documentation, SparkSQL does not use Kryo or Java serialization:
Datasets are similar to RDDs, however, instead of using Java serialization or Kryo they use a specialized Encoder to serialize the objects for processing or transmitting over the network. While both encoders and standard serialization are responsible for turning an object into bytes, encoders are code generated dynamically and use a format that allows Spark to perform many operations like filtering, sorting and hashing without deserializing the bytes back into an object.
They are much more lightweight than Java or Kryo serialization, which is to be expected: it is a far more optimizable job to serialize, say, a Row of three longs and two ints than a full class with its version description and inner variables, and then having to instantiate it.
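As a small illustration, assuming the Info case class from the question, the encoder Spark derives for a case class is a product encoder built from the field schema, not a Kryo or Java serialization wrapper:

import org.apache.spark.sql.{Encoder, Encoders}

// Product encoder derived from Info's fields; data is serialized directly
// into Spark's internal row format, without Kryo or Java serialization.
val infoEncoder: Encoder[Info] = Encoders.product[Info]
println(infoEncoder.schema)
// e.g. StructType(StructField(name,StringType,true), StructField(address,StringType,true))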
That being said, there is a way to use Kryo as an encoder implementation, see for example here: How to store custom objects in Dataset?. But this is meant as a solution to store custom objects (e.g. non-product classes) in a Dataset, and not especially targeted at standard DataFrames.
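A rough sketch of that Kryo-encoder approach, assuming an existing SparkSession named spark (as in the question's code) and a hypothetical non-product class Landmark; note that the whole object ends up in a single binary column, so column-level operations like select($"name") are no longer possible:

import org.apache.spark.sql.{Dataset, Encoder, Encoders}

// Hypothetical non-product class; Spark cannot derive an ExpressionEncoder for it.
class Landmark(val name: String, val address: String)

// Kryo-backed encoder: the object is serialized into one binary "value" column.
implicit val landmarkEncoder: Encoder[Landmark] = Encoders.kryo[Landmark]

val landmarks: Dataset[Landmark] =
  spark.createDataset(Seq(new Landmark("town hall", "main street 1")))

landmarks.printSchema() // root |-- value: binary (nullable = true)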
Without Kryo or Java serializers, creating encoders for custom, non-product classes is somewhat limited (see the discussions on user-defined types), for example starting here: Does Apache spark 2.2 supports user-defined type (UDT)?
Answer 2:
You can set the serializer to Kryo by setting the spark.serializer property to org.apache.spark.serializer.KryoSerializer, either on your SparkConf or in a custom properties file you pass to the spark-submit command via the --properties-file flag.
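A minimal sketch of the SparkConf route (the application name is a placeholder); alternatively, the same spark.serializer setting can go into the file passed via --properties-file:

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

// Switch the serializer used for shuffles and RDD serialization to Kryo.
val conf = new SparkConf()
  .setAppName("kryo-enabled-app") // placeholder name
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")

val spark = SparkSession.builder().config(conf).getOrCreate()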
When you configure the Kryo serializer, Spark will transparently use Kryo when transmitting data between nodes, so your Spark SQL statements should automatically inherit the performance benefit.
Source: https://stackoverflow.com/questions/49270880/does-kryo-help-in-sparksql