Write single CSV file using spark-csv

Asked 2020-11-22 08:43 by 心在旅途 · 13 answers · 1846 views

I am using https://github.com/databricks/spark-csv and trying to write a single CSV file, but I can't: it creates a folder instead.

I need a Scala function which will take …

13 Answers
  • 2020-11-22 09:27

    It creates a folder with multiple files because each partition is saved individually. If you need a single output file (still inside a folder), you can repartition (preferred if the upstream data is large, but it requires a shuffle):

    df
       .repartition(1)
       .write.format("com.databricks.spark.csv")
       .option("header", "true")
       .save("mydata.csv")
    

    or coalesce:

    df
       .coalesce(1)
       .write.format("com.databricks.spark.csv")
       .option("header", "true")
       .save("mydata.csv")
    

    Either way, the data frame is collapsed to a single partition before saving, so all data will be written to mydata.csv/part-00000. Before you use this option, be sure you understand what is going on and what the cost of transferring all the data to a single worker is. If you use a distributed file system with replication, the data will be transferred multiple times: first fetched to a single worker and subsequently distributed over the storage nodes.

    Alternatively, you can leave your code as it is and use a general-purpose tool like cat or HDFS getmerge to merge all the parts afterwards.
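    To illustrate the merge-afterwards route, here is a minimal sketch on a local filesystem. The folder name and part-file contents are made up for the example; on HDFS you would use `hdfs dfs -getmerge` instead of `cat`:

```shell
# mydata.csv/ stands in for the folder Spark produced; on HDFS the
# equivalent of the cat step is:
#   hdfs dfs -getmerge mydata.csv mydata-merged.csv
# Caveat: write the parts WITHOUT the header option (or with a single
# part), otherwise each part contributes its own header line to the
# merged file.
mkdir -p mydata.csv
printf 'id,name\n1,alice\n' > mydata.csv/part-00000
printf '2,bob\n'            > mydata.csv/part-00001
cat mydata.csv/part-* > mydata-merged.csv
```

    After this, mydata-merged.csv holds the header row followed by all data rows, in part-file order.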
