Question
I am downloading JSON data from an API and using SQLAlchemy, pyodbc and pandas' to_sql function to insert that data into an MS SQL Server database.
I can download up to 10,000 rows at a time; however, I have to limit the chunksize to 10, otherwise I get the following error:
DBAPIError: (pyodbc.Error) ('07002', '[07002] [Microsoft][SQL Server Native Client 11.0]COUNT field incorrect or syntax error (0) (SQLExecDirectW)') [SQL: 'INSERT INTO [TEMP_producing_entity_details]
There are around 500 million rows to download, and it is just crawling at this speed. Any advice on a workaround?
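For reference, a minimal sketch of the kind of workflow described above; the API URL, connection string and credentials are placeholders for illustration, not the actual ones:

import pandas as pd
import requests
import sqlalchemy

# Placeholder connection string; the real server, database and credentials differ
engine = sqlalchemy.create_engine(
    "mssql+pyodbc://user:password@myserver/mydb"
    "?driver=SQL+Server+Native+Client+11.0"
)

# Placeholder API endpoint returning a JSON array of records
resp = requests.get("https://api.example.com/producing_entity_details?limit=10000")
df = pd.DataFrame(resp.json())

# chunksize has to stay around 10, otherwise the
# "COUNT field incorrect or syntax error" above is raised
df.to_sql("TEMP_producing_entity_details", engine,
          if_exists="append", index=False, chunksize=10)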
Thanks,
Answer 1:
UPDATE:
pandas 0.23.1 has reverted the problematic changes introduced in 0.23.0. However, the best solution for raw performance remains the CSV -> bcp approach described below.
UPDATE:
pandas 0.24.0 apparently has re-introduced the issue (ref: here)
(Original answer)
Prior to pandas version 0.23.0, to_sql would generate a separate INSERT for each row in the DataFrame:
exec sp_prepexec @p1 output,N'@P1 int,@P2 nvarchar(6)',
N'INSERT INTO df_to_sql_test (id, txt) VALUES (@P1, @P2)',
0,N'row000'
exec sp_prepexec @p1 output,N'@P1 int,@P2 nvarchar(6)',
N'INSERT INTO df_to_sql_test (id, txt) VALUES (@P1, @P2)',
1,N'row001'
exec sp_prepexec @p1 output,N'@P1 int,@P2 nvarchar(6)',
N'INSERT INTO df_to_sql_test (id, txt) VALUES (@P1, @P2)',
2,N'row002'
Presumably to improve performance, pandas 0.23.0 now generates a table-value constructor to insert multiple rows per call:
exec sp_prepexec @p1 output,N'@P1 int,@P2 nvarchar(6),@P3 int,@P4 nvarchar(6),@P5 int,@P6 nvarchar(6)',
N'INSERT INTO df_to_sql_test (id, txt) VALUES (@P1, @P2), (@P3, @P4), (@P5, @P6)',
0,N'row000',1,N'row001',2,N'row002'
The problem is that SQL Server stored procedures (including system stored procedures like sp_prepexec) are limited to 2100 parameters, so if the DataFrame has 100 columns then to_sql can only insert about 20 rows at a time.
We can calculate the required chunksize using:
# df is an existing DataFrame
#
# limit based on sp_prepexec parameter count
tsql_chunksize = 2097 // len(df.columns)
# cap at 1000 (limit for number of rows inserted by table-value constructor)
tsql_chunksize = 1000 if tsql_chunksize > 1000 else tsql_chunksize
#
df.to_sql('tablename', engine, if_exists='replace', index=False, chunksize=tsql_chunksize)
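For example, a hypothetical 100-column DataFrame gives tsql_chunksize = 2097 // 100 = 20, i.e. 20 rows * 100 columns = 2000 parameters per INSERT, safely under the 2100-parameter limit.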
However, the fastest approach is still likely to be:

1. dump the DataFrame to a CSV file (or similar), and then
2. have Python call the SQL Server bcp utility to upload that file into the table.
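A rough sketch of that CSV -> bcp approach is shown below; the file path, server name and use of a trusted (Windows) connection are assumptions for illustration, not part of the original answer:

import subprocess

# Dump the DataFrame to a flat file (no header row, since bcp
# would otherwise try to load the header as data)
df.to_csv(r"C:\temp\producing_entity_details.csv", index=False, header=False)

# Bulk-load the file with the bcp utility:
#   -c  character data, -t,  comma field terminator, -T  trusted connection
subprocess.run(
    ["bcp", "mydb.dbo.TEMP_producing_entity_details", "in",
     r"C:\temp\producing_entity_details.csv",
     "-S", "myserver", "-c", "-t,", "-T"],
    check=True,
)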
Answer 2:
Made a few modifications based on Gord Thompson's answer. This auto-calculates the chunksize, rounding it down to the largest integer number of rows that fits within the 2100-parameter limit:
import math

df_num_of_cols = len(df.columns)
chunknum = math.floor(2100 / df_num_of_cols)
df.to_sql('MY_TABLE', con=engine, schema='myschema', chunksize=chunknum,
          if_exists='append', method='multi', index=False)
Source: https://stackoverflow.com/questions/50689082/to-sql-pyodbc-count-field-incorrect-or-syntax-error