How to use azure-sqldb-spark connector in PySpark
Problem: I want to write around 10 GB of data every day to an Azure SQL Server database using PySpark. I am currently using the JDBC driver, which takes hours because it issues INSERT statements one by one. I am planning to switch to the azure-sqldb-spark connector, which claims to turbo-boost the write using bulk insert.

I went through the official documentation: https://github.com/Azure/azure-sqldb-spark. The library is written in Scala and basically requires the use of two Scala classes:

`import com.microsoft.azure.sqldb.spark.config.Config`
`import com.microsoft.azure.sqldb.spark.connect._`
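For context, the slow path described above is the standard PySpark JDBC write. Below is a minimal sketch of that pattern, assuming a hypothetical input path, table name (`dbo.Staging`), and placeholder connection values; `batchsize` is the main tuning knob the plain JDBC writer offers.

```python
from pyspark.sql import SparkSession

# Minimal sketch of the current (slow) JDBC write path.
# Server, database, table, credentials, and input path are placeholders.
spark = SparkSession.builder.appName("sqldb-write").getOrCreate()

df = spark.read.parquet("/data/daily_extract")  # hypothetical ~10 GB daily input

(df.write
   .format("jdbc")
   .option("url", "jdbc:sqlserver://myserver.database.windows.net:1433;database=MyDatabase")
   .option("dbtable", "dbo.Staging")
   .option("user", "username")
   .option("password", "password")
   .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
   .option("batchsize", 10000)  # rows per JDBC batch; Spark's default is 1000
   .mode("append")
   .save())
```

Even with a large `batchsize`, each batch is still executed as parameterized INSERTs on the server, which is why the bulk-copy path the connector documents (`df.bulkCopyToSqlDB(...)` in Scala) can be dramatically faster. My question is how to invoke that Scala API from PySpark.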