Insert pandas dataframe created within Python into SQL Server

Submitted by 泄露秘密 on 2020-02-02 09:13:05

Question


As referenced, I've created a collection of data (40k rows, 5 columns) within Python that I'd like to insert back into a SQL Server table.

Typically, within SQL I'd make a 'select * into myTable from dataTable' call to do the insert, but the data sitting within a pandas DataFrame obviously complicates this.

I'm not formally opposed to using SQLAlchemy (though I'd prefer to avoid another download and install), but I'd rather do this natively within Python; I'm connecting to SQL Server using pyodbc.

Is there a straightforward way to do this that avoids looping (i.e., inserting row by row)?


Answer 1:


As shown in this answer, we can convert a DataFrame named df into a list of tuples with list(df.itertuples(index=False, name=None)), and pass that to executemany without (explicitly) looping through each row.

# cnxn is an existing pyodbc.Connection to the target SQL Server database
crsr = cnxn.cursor()
crsr.fast_executemany = True  # send the rows in bulk instead of one round trip per row
crsr.executemany(
    "INSERT INTO #tablename (col1, col2) VALUES (?, ?)",
    list(df.itertuples(index=False, name=None))
)
crsr.commit()

That is as "native" as you'll get, but it can lead to errors if the DataFrame contains pandas data types that pyodbc does not recognize (pyodbc expects plain Python types as parameter values). You may still be better off using SQLAlchemy and pandas' to_sql method.
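For completeness, here is a minimal sketch of that to_sql route. The server name, database name, driver, and target table are placeholder assumptions, not details from the question; df is the 40k-row DataFrame described above, and the fast_executemany engine flag requires SQLAlchemy 1.3 or later.

import sqlalchemy as sa

# Placeholder connection details -- adjust the server, database, and
# ODBC driver name to match your environment.
engine = sa.create_engine(
    "mssql+pyodbc://@myServer/myDatabase"
    "?driver=ODBC+Driver+17+for+SQL+Server&trusted_connection=yes",
    fast_executemany=True,  # reuses pyodbc's bulk-insert optimization under the hood
)

# to_sql performs the pandas-to-SQL type conversion itself (e.g. NaN
# becomes NULL), which sidesteps the unrecognized-dtype errors above.
# df is assumed to be the DataFrame built earlier in the question.
df.to_sql("myTable", engine, if_exists="append", index=False)

Note that with if_exists="append", to_sql creates the table if it doesn't already exist and appends to it otherwise.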



Source: https://stackoverflow.com/questions/53178858/insert-pandas-dataframe-created-within-python-into-sql-server
