Using Dask's new to_sql for improved efficiency (memory/speed), or an alternative to get data from a dask dataframe into a SQL Server table
Question

My ultimate goal is to use SQL/Python together for a project with too much data for pandas to handle (at least on my machine). So, I have gone with dask to:

1. read in data from multiple sources (mostly SQL Server tables/views),
2. manipulate/merge the data into one large dask dataframe of ~10 million+ rows and 52 columns, some of which contain long unique strings, and
3. write it back to SQL Server on a daily basis, so that my Power BI report can automatically refresh the data (a minimal sketch of this pipeline follows below).

For #1 and #2, they
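Here is a minimal sketch of the three steps using dask's read_sql_table and to_sql. The connection string, table names, and index columns are placeholders for illustration, not the actual schema:

```python
import dask.dataframe as dd

# Hypothetical connection string -- substitute your server, database, and ODBC driver.
uri = "mssql+pyodbc://user:pass@MYSERVER/MYDB?driver=ODBC+Driver+17+for+SQL+Server"

# 1) Lazily read the source tables; index_col should be an indexed numeric or
#    datetime column so dask can split the reads into partitions.
orders = dd.read_sql_table("Orders", uri, index_col="OrderID")
details = dd.read_sql_table("OrderDetails", uri, index_col="OrderID")

# 2) Manipulate/merge into one large dask dataframe.
merged = orders.merge(details, left_index=True, right_index=True)

# 3) Write back with dask's to_sql, which issues a pandas.to_sql per partition.
merged.to_sql(
    "ReportTable",        # hypothetical destination table
    uri,
    if_exists="replace",  # rebuild the table on each daily run
    index=False,
    chunksize=10_000,     # rows per INSERT batch within a partition
    parallel=False,       # write partitions sequentially (the default)
)
```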