PySpark Structured Streaming: write to Parquet in batches
Question: I am doing some transformations on a Spark Structured Streaming DataFrame and storing the result as Parquet files in HDFS. I want the writes to HDFS to happen in batches, instead of transforming the whole DataFrame first and then storing it all at once.

Answer 1: Here is a Parquet sink example:

# parquet sink example
targetParquetHDFS = (sourceTopicKAFKA
    .writeStream
    .format("parquet")        # can be "orc", "json", "csv", etc.
    .outputMode("append")     # the file sink only supports "append"
    .option("path", "hdfs:///tmp/parquet_output")       # placeholder output directory
    .option("checkpointLocation", "hdfs:///tmp/chk")    # placeholder; required for the file sink
    .trigger(processingTime="60 seconds")               # micro-batch interval
    .start())
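Note that Structured Streaming already writes sink output one micro-batch at a time, so the trigger interval above controls how often a batch of Parquet files lands in HDFS. If you want explicit control over each batch's write (for example, to coalesce small files), foreachBatch (available since Spark 2.4) is an alternative. Below is a minimal sketch; transformedDF and the HDFS paths are placeholders, not names from the original post:

# Sketch: write each micro-batch explicitly with foreachBatch.
def write_batch(batch_df, batch_id):
    # batch_df is an ordinary static DataFrame for this micro-batch,
    # so the full batch writer API applies.
    (batch_df
        .coalesce(1)                                  # optional: one file per micro-batch
        .write
        .mode("append")
        .parquet("hdfs:///tmp/parquet_output"))       # placeholder path

query = (transformedDF.writeStream
    .foreachBatch(write_batch)
    .option("checkpointLocation", "hdfs:///tmp/chk")  # placeholder path
    .trigger(processingTime="60 seconds")             # one micro-batch every 60 seconds
    .start())

query.awaitTermination()

Because each micro-batch arrives as a static DataFrame inside write_batch, you can also use partitionBy or write to sinks that have no native streaming support.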