Partition Spark DataFrame by column value and write partitioned data into MULTIPLE S3 buckets in parallel

Asked 2021-01-31 17:51 by 你的背包 · 0 answers · 1784 views

Let's say I have a Spark DataFrame in the following shape:

Customers(id: Long, fname: String, lname: String)

[1, Fname1, Lname1]
[2, Fname2, Lname2]
[3, Fname3, Lname3]
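
The question text is cut off at this point. Based on the title, a minimal sketch of one common approach, filtering on the distinct column values and submitting one write job per value from parallel driver threads, might look like the following. The bucket names, the id-to-bucket mapping, and the choice of Parquet are assumptions for illustration, not part of the original question.

import org.apache.spark.sql.SparkSession

object MultiBucketWrite {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("multi-bucket-write").getOrCreate()
    import spark.implicits._

    // Recreate the example data from the question.
    val customers = Seq(
      (1L, "Fname1", "Lname1"),
      (2L, "Fname2", "Lname2"),
      (3L, "Fname3", "Lname3")
    ).toDF("id", "fname", "lname")

    // Hypothetical mapping from partition value to target bucket;
    // in practice this would come from configuration.
    val bucketForId = Map(
      1L -> "s3a://bucket-one",
      2L -> "s3a://bucket-two",
      3L -> "s3a://bucket-three"
    )

    // Collect the distinct partition values to the driver...
    val ids = customers.select("id").distinct().as[Long].collect()

    // ...then submit one write job per value from parallel driver threads.
    // Spark's scheduler is thread-safe, so the jobs run concurrently.
    // On Scala 2.13+, .par needs the scala-parallel-collections module and
    // import scala.collection.parallel.CollectionConverters._
    ids.par.foreach { id =>
      customers
        .filter($"id" === id)
        .write
        .mode("overwrite")
        .parquet(s"${bucketForId(id)}/customers/id=$id")
    }

    spark.stop()
  }
}

One caveat with this pattern: each filtered write re-scans the source unless the DataFrame is cached, so calling customers.cache() before the loop is usually worthwhile when there are many distinct values.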


        