Do you benefit from the Kryo serializer when you use Pyspark?


Question


I read that the Kryo serializer can provide faster serialization when used in Apache Spark. However, I'm using Spark through Python.

Do I still get notable benefits from switching to the Kryo serializer?


Answer 1:


Kryo won't make a major impact on PySpark, because PySpark just stores data as byte[] objects, which are fast to serialize even with Java serialization.

But it may be worth a try: you would just set the spark.serializer configuration and not try to register any classes.
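A minimal sketch of that setting (the application name is an arbitrary placeholder):

```python
from pyspark import SparkConf, SparkContext

# Switch the JVM-side serializer to Kryo; for PySpark workloads
# there is no need to register any classes.
conf = (SparkConf()
        .setAppName("kryo-test")
        .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer"))
sc = SparkContext(conf=conf)
```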

What might make more of an impact is storing your data as MEMORY_ONLY_SER and enabling spark.rdd.compress, which will compress your cached data.

In Java this can add some CPU overhead, but Python runs quite a bit slower, so it might not matter. It might also speed up computation by reducing GC pressure or letting you cache more data.
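A sketch of that setup, assuming a recent PySpark where cached data is always stored in serialized (pickled) form, so plain MEMORY_ONLY already behaves like a serialized level:

```python
from pyspark import SparkConf, SparkContext, StorageLevel

# Compress cached RDD partitions; trades some CPU for a smaller cache.
conf = (SparkConf()
        .setAppName("compressed-cache")
        .set("spark.rdd.compress", "true"))
sc = SparkContext(conf=conf)

rdd = sc.parallelize(range(1000000))
rdd.persist(StorageLevel.MEMORY_ONLY)  # already serialized in PySpark
print(rdd.count())
```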

Reference: Matei Zaharia's answer on the Spark mailing list.




Answer 2:


It all depends on what you mean when you say PySpark. In the last two years, PySpark development, like Spark development in general, has shifted from the low-level RDD API towards high-level APIs such as DataFrame and ML.

These APIs are natively implemented on the JVM, and the Python code is mostly limited to a bunch of RPC calls executed on the driver. Everything else is pretty much the same code as executed using Scala or Java, so it should benefit from Kryo in the same way as native applications.
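As an illustration (a hypothetical sketch, not part of the original answer), a DataFrame pipeline like the one below is only planned from Python; the aggregation itself executes on the JVM, where a Kryo setting applies exactly as it would in a Scala application:

```python
from pyspark.sql import SparkSession

# Python issues the query plan; the JVM executes it, so the
# Kryo serializer configured here takes effect as in Scala/Java.
spark = (SparkSession.builder
         .appName("df-kryo")
         .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
         .getOrCreate())

df = spark.range(1000000).withColumnRenamed("id", "value")
df.groupBy((df.value % 10).alias("bucket")).count().show()
```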

I would argue that, at the end of the day, there is not much to lose when you use Kryo with PySpark, and potentially something to gain when your application depends heavily on the "native" APIs.



Source: https://stackoverflow.com/questions/36278574/do-you-benefit-from-the-kryo-serializer-when-you-use-pyspark
