Question
Dask doesn't have a df.to_sql() like pandas, so I am trying to replicate the functionality and create an SQL table using the map_partitions method. Here is my code:
import dask.dataframe as dd
import pandas as pd
import sqlalchemy as sqla

db_url = 'my_db_url_connection'
conn = sqla.create_engine(db_url)
ddf = dd.read_csv('data/prod.csv')
meta = dict(ddf.dtypes)
ddf.map_partitions(lambda df: df.to_sql('table_name', db_url, if_exists='append', index=True),
                   meta=meta)
This returns my dask dataframe object, but when I check my psql server there is no new table. What is going wrong here?
UPDATE: Still can't get it to work, but due to an independent issue. Follow-up question: duplicate key value violates unique constraint - postgres error when trying to create sql table from dask dataframe
Answer 1:
Simply, you have created a dataframe which is a prescription of the work to be done, but you have not executed it. To execute, you need to call .compute() on the result.
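For example, a minimal sketch of that fix, reusing the ddf and db_url from the question ('table_name' is the question's placeholder). Having the partition function return a row count, rather than to_sql's None, gives meta something concrete to describe:

def write_partition(df):
    # each worker writes its own partition; pandas builds an engine from the URL string
    df.to_sql('table_name', db_url, if_exists='append', index=True)
    return len(df)  # return a row count instead of to_sql's None output

counts = ddf.map_partitions(write_partition, meta=('rows', 'int64'))
counts.compute()  # nothing is written to the database until this line runs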
Note that the output here is not really a dataframe: each partition evaluates to None (because to_sql has no output), so it might be cleaner to express this with df.to_delayed, something like:
import dask
import pandas as pd

# wrap pandas' to_sql so each call becomes a lazy task
dto_sql = dask.delayed(pd.DataFrame.to_sql)
out = [dto_sql(d, 'table_name', db_url, if_exists='append', index=True)
       for d in ddf.to_delayed()]
dask.compute(*out)  # executes all the inserts
Also note that whether you get good parallelism will depend on the database driver and the data system itself.
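For instance, you can pick the scheduler explicitly at compute time; this is just a sketch, and which option helps (if any) depends on whether your driver releases the GIL during I/O:

# threads work well if the DB driver releases the GIL while waiting on the database;
# otherwise try processes, at the cost of serializing each partition
dask.compute(*out, scheduler='threads')
# dask.compute(*out, scheduler='processes')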
Source: https://stackoverflow.com/questions/54344793/create-sql-table-from-dask-dataframe-using-map-partitions-and-pd-df-to-sql