Spark: optimise writing a DataFrame to SQL Server

Asked by [愿得一人] on 2021-02-08 19:01

I am using the code below to write a DataFrame of 43 columns and about 2,000,000 rows into a table in SQL Server:

dataFrame
  .write
  .format(\"jdbc\")
  .mode(         


        
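For write throughput, the JDBC batch size and the number of parallel connections usually matter most. A minimal tuning sketch, assuming the same placeholder connection details as above (jdbcUrl, tableName, user, password) and that the target table already exists:

dataFrame
  .repartition(8)                   // one JDBC connection per partition; 8 is an illustrative value
  .write
  .format("jdbc")
  .mode("append")
  .option("url", jdbcUrl)
  .option("dbtable", tableName)
  .option("user", user)             // placeholder credentials
  .option("password", password)
  .option("batchsize", 10000)       // rows per batched INSERT; Spark's default is 1000
  .save()

Larger batches reduce round trips to SQL Server, while repartitioning controls how many concurrent writers the database has to absorb.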
3 Answers
  •  你的背包  2021-02-08 19:49

    Is converting the data to CSV files and bulk-copying those CSVs an option for you? We have automated this process for larger tables and transfer them in CSV format within GCP, rather than pushing the data through JDBC. The export side could look like the sketch below.
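    A minimal sketch of the export step, assuming an arbitrary output path; the bulk load into SQL Server (e.g. with bcp or BULK INSERT) happens outside Spark:

    dataFrame
      .coalesce(16)                    // illustrative file count for the downstream loader
      .write
      .mode("overwrite")
      .option("header", "true")        // include a header row for the loader
      .csv("/mnt/export/my_table")     // placeholder output directory

    Each partition becomes one CSV file, so the coalesce/repartition call controls how many files the bulk loader has to pick up.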
