to_sql pyodbc count field incorrect or syntax error

Backend · Unresolved · 3 answers · 1576 views
死守一世寂寞 asked 2020-12-05 16:02

I am downloading JSON data from a web API and using sqlalchemy, pyodbc and pandas' to_sql function to insert that data into an MSSQL server.

I can download up
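
For reference, a minimal sketch of that workflow (the endpoint, connection string and table name below are placeholders, not the actual values from the question):

    import pandas as pd
    import requests
    from sqlalchemy import create_engine

    # hypothetical connection string; the driver name depends on what is installed
    engine = create_engine(
        'mssql+pyodbc://user:password@myserver/mydb'
        '?driver=ODBC+Driver+17+for+SQL+Server'
    )

    # hypothetical API endpoint returning a JSON array of records
    records = requests.get('https://example.com/api/data').json()
    df = pd.json_normalize(records)

    # the insert that runs into the 2100-parameter limit when method='multi'
    # is combined with a wide DataFrame
    df.to_sql('my_table', con=engine, if_exists='append', index=False)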

3 Answers
  • 2020-12-05 16:12

    Made a few modifications based on Gord Thompson's answer. This auto-calculates the chunksize, rounding down to the largest whole number of rows that fits within the 2100-parameter limit:

    import math

    # rows per INSERT are limited by: columns_per_row * rows_per_chunk <= 2100 parameters
    df_num_of_cols = len(df.columns)
    chunknum = math.floor(2100 / df_num_of_cols)

    df.to_sql('MY_TABLE', con=engine, schema='myschema', chunksize=chunknum,
              if_exists='append', method='multi', index=False)
    
  • 2020-12-05 16:29

    UPDATE:

    pandas 0.23.1 has reverted the problematic changes introduced in 0.23.0. However, the best solution for raw performance remains the CSV -> bcp approach as described below.

    UPDATE:

    pandas 0.24.0 has apparently re-introduced the issue (ref: here)


    (Original answer)

    Prior to pandas version 0.23.0, to_sql would generate a separate INSERT for each row in the DataFrame:

    exec sp_prepexec @p1 output,N'@P1 int,@P2 nvarchar(6)',
        N'INSERT INTO df_to_sql_test (id, txt) VALUES (@P1, @P2)',
        0,N'row000'
    exec sp_prepexec @p1 output,N'@P1 int,@P2 nvarchar(6)',
        N'INSERT INTO df_to_sql_test (id, txt) VALUES (@P1, @P2)',
        1,N'row001'
    exec sp_prepexec @p1 output,N'@P1 int,@P2 nvarchar(6)',
        N'INSERT INTO df_to_sql_test (id, txt) VALUES (@P1, @P2)',
        2,N'row002'
    

    Presumably to improve performance, pandas 0.23.0 now generates a table-value constructor to insert multiple rows per call:

    exec sp_prepexec @p1 output,N'@P1 int,@P2 nvarchar(6),@P3 int,@P4 nvarchar(6),@P5 int,@P6 nvarchar(6)',
        N'INSERT INTO df_to_sql_test (id, txt) VALUES (@P1, @P2), (@P3, @P4), (@P5, @P6)',
        0,N'row000',1,N'row001',2,N'row002'
    

    The problem is that SQL Server stored procedures (including system stored procedures like sp_prepexec) are limited to 2100 parameters, so if the DataFrame has 100 columns then to_sql can only insert about 20 rows at a time.

    We can calculate the required chunksize using

    # df is an existing DataFrame
    #
    # limit based on sp_prepexec parameter count
    tsql_chunksize = 2097 // len(df.columns)
    # cap at 1000 (limit for number of rows inserted by table-value constructor)
    tsql_chunksize = 1000 if tsql_chunksize > 1000 else tsql_chunksize
    #
    df.to_sql('tablename', engine, if_exists='replace', index=False, chunksize=tsql_chunksize)
    

    However, the fastest approach is still likely to be:

    • dump the DataFrame to a CSV file (or similar), and then

    • have Python call the SQL Server bcp utility to upload that file into the table (a sketch follows below).
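
    A minimal sketch of that approach, assuming Windows authentication and a pre-existing dbo.MY_TABLE (server, database and file paths are placeholders):

    import subprocess

    # df is an existing DataFrame (as above); dump it to a flat file bcp can read
    # (header is omitted because bcp expects data rows only)
    df.to_csv(r'C:\temp\df_dump.csv', index=False, header=False)

    # bulk-load the file with the bcp command-line utility:
    # -S server, -d database, -T Windows authentication,
    # -c character mode, -t, comma field terminator
    subprocess.run(
        ['bcp', 'dbo.MY_TABLE', 'in', r'C:\temp\df_dump.csv',
         '-S', 'myserver', '-d', 'mydb', '-T', '-c', '-t,'],
        check=True,
    )

    Note that a plain comma-separated dump like this assumes the data itself contains no commas or newlines; otherwise a different field terminator or a bcp format file would be needed.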

  • 2020-12-05 16:29

    I don't have enough reputation to comment on Amit S's answer. I tried that approach, with chunknum calculated as shown and the method set to 'multi', but it still shows me the error:

    [Microsoft][SQL Server Native Client 11.0][SQL Server]The incoming request has too many parameters. The server supports a maximum of 2100 parameters. Reduce the number of parameters and resend the request

    So I just modified:

    chunknum=math.floor(2100/df_num_of_cols) 
    

    to

    chunknum=math.floor(2100/df_num_of_cols) - 1
    

    It now seems to work perfectly. I think it must be some edge-case problem...
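
    Putting the two adjustments together (same variable names as in the answer above), a conservative version of the calculation might look like this:

    import math

    # df is an existing DataFrame, engine an existing SQLAlchemy engine
    df_num_of_cols = len(df.columns)
    # subtract one row as a safety margin so the parameter count stays
    # strictly below 2100, and cap at 1000 rows (table-value constructor limit)
    chunknum = min(math.floor(2100 / df_num_of_cols) - 1, 1000)

    df.to_sql('MY_TABLE', con=engine, schema='myschema',
              chunksize=chunknum, if_exists='append',
              method='multi', index=False)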
