Why would I get a memory error with fast_executemany on a tiny df?

抹茶落季 2021-02-14 08:34

I was looking for ways to speed up pushing a dataframe to SQL Server and stumbled upon an approach here. This approach blew me away in terms of speed. Using normal to_sql …
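
For reference, the approach being described is presumably pyodbc's fast_executemany flag enabled through a SQLAlchemy event hook before calling to_sql; a minimal sketch of that general pattern (the server, database, table, and driver names below are placeholders):

    import pandas as pd
    import sqlalchemy
    from sqlalchemy import event

    # Placeholder connection details: adjust server, database, and installed driver.
    engine = sqlalchemy.create_engine(
        "mssql+pyodbc://@MyServer/MyDatabase"
        "?driver=ODBC+Driver+13+for+SQL+Server&trusted_connection=yes"
    )

    # Enable pyodbc's fast_executemany on every executemany() call so parameter
    # sets are sent in batches instead of one round trip per row.
    @event.listens_for(engine, "before_cursor_execute")
    def receive_before_cursor_execute(conn, cursor, statement, parameters, context, executemany):
        if executemany:
            cursor.fast_executemany = True

    df = pd.DataFrame({"id": [1, 2, 3], "name": ["a", "b", "c"]})
    df.to_sql("test_table", engine, index=False, if_exists="replace")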

1 Answer
    难免孤独 2021-02-14 09:12

    I was able to reproduce your issue using pyodbc 4.0.23. The MemoryError was related to your use of the ancient

    DRIVER={SQL Server}
    

    Further testing using

    DRIVER=ODBC Driver 11 for SQL Server
    

    also failed, with

    Function sequence error (0) (SQLParamData)

    which was related to an existing pyodbc issue on GitHub. I posted my findings here.

    That issue is still under investigation. In the meantime you might be able to proceed by

    • using a newer ODBC driver, such as DRIVER=ODBC Driver 13 for SQL Server, and
    • running pip install pyodbc==4.0.22 to fall back to an earlier version of pyodbc (a sketch combining both workarounds follows below).
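
    A rough sketch of what those two workarounds could look like together (the server, database, and table names are placeholders, and test_table is assumed to already exist with id and name columns):

        # pip install pyodbc==4.0.22    (earlier pyodbc, per the second bullet)
        import pyodbc

        # Connection string using a newer driver instead of the legacy DRIVER={SQL Server}.
        conn_str = (
            "DRIVER=ODBC Driver 13 for SQL Server;"
            "SERVER=MyServer;"        # placeholder
            "DATABASE=MyDatabase;"    # placeholder
            "Trusted_Connection=yes;"
        )

        cnxn = pyodbc.connect(conn_str)
        crsr = cnxn.cursor()
        crsr.fast_executemany = True

        # Batched insert of a small parameter set.
        params = [(1, "a"), (2, "b"), (3, "c")]
        crsr.executemany("INSERT INTO test_table (id, name) VALUES (?, ?)", params)
        cnxn.commit()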
