How to control output files size in Spark Structured Streaming

Submitted by 。_饼干妹妹 on 2020-12-12 10:13:27

Question


We're considering using Spark Structured Streaming on a project. The input and output are parquet files in an S3 bucket. Is it possible to control the size of the output files somehow? We're aiming at output files of 10-100 MB. As I understand it, in the traditional batch approach we could determine the output file sizes by adjusting the number of partitions according to the size of the input dataset. Is something similar possible in Structured Streaming?
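
For reference, here is a minimal sketch of the batch-mode approach the question describes, i.e. repartitioning before the write so each task produces roughly one file of the desired size. The S3 paths and the partition count are hypothetical placeholders:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("batch-file-sizing").getOrCreate()

    // Hypothetical input path
    val input = spark.read.parquet("s3a://my-bucket/input/")

    // e.g. aiming for ~100 MB files: estimate (total input bytes / 100 MB)
    // and repartition to that many partitions before writing
    val targetPartitions = 32 // tuned per dataset size (assumption)

    input.repartition(targetPartitions)
      .write
      .mode("overwrite")
      .parquet("s3a://my-bucket/output/")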


Answer 1:


In Spark 2.2 or later, the optimal option is to set spark.sql.files.maxRecordsPerFile:

spark.conf.set("spark.sql.files.maxRecordsPerFile", n)

where n is the maximum number of records written per file, tuned to reflect the average size of a row so that the resulting files fall in the desired size range.
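
As a minimal sketch of how this fits into a streaming job with parquet input and output on S3 (the paths, checkpoint location, trigger interval, and per-row size estimate below are all assumptions, not values from the answer):

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.streaming.Trigger

    val spark = SparkSession.builder().appName("streaming-file-sizing").getOrCreate()

    // Suppose an average row serializes to ~1 KB; then ~100,000 records per file
    // targets files of roughly 100 MB. Both numbers are assumptions to be tuned.
    val avgRowBytes = 1024L
    val targetFileBytes = 100L * 1024 * 1024
    spark.conf.set("spark.sql.files.maxRecordsPerFile", targetFileBytes / avgRowBytes)

    // Streaming file sources require an explicit schema
    val inputSchema = spark.read.parquet("s3a://my-bucket/input/").schema

    val stream = spark.readStream
      .schema(inputSchema)
      .parquet("s3a://my-bucket/input/")

    stream.writeStream
      .format("parquet")
      .option("path", "s3a://my-bucket/output/")
      .option("checkpointLocation", "s3a://my-bucket/checkpoints/")
      .trigger(Trigger.ProcessingTime("5 minutes"))
      .start()

Note that this setting caps the records per file; how many files each micro-batch produces still depends on the number of partitions being written.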

See

  • SPARK-18775 - Limit the max number of records written per file.
  • apache/spark@354e936187708a404c0349e3d8815a47953123ec


Source: https://stackoverflow.com/questions/54689677/how-to-control-output-files-size-in-spark-structured-streaming
