How do I capture the data passed in SqlBulkCopy using the Sql Profiler?

别跟我提以往 2021-02-08 12:16

I am using Sql Profiler all the time to capture the SQL statements and rerun problematic ones. Very useful.

However, some code uses the SqlBulkCopy API and I have no idea how to capture those.

2 Answers
  • 2021-02-08 12:35

    You can't capture SqlBulkCopy in SQL Profiler because SqlBulkCopy doesn't generate SQL statements at all when it inserts data into a SQL Server table. SqlBulkCopy works much like the bcp utility and streams the rows to SQL Server over the bulk-load protocol rather than sending individual INSERT statements. It can even ignore foreign keys and triggers when it inserts the rows!
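
    One way to see that side effect for yourself (a sketch, assuming a hypothetical destination table dbo.TargetTable that has foreign keys, and a load that did not use the SqlBulkCopyOptions.CheckConstraints option): the skipped foreign keys end up marked as not trusted, which you can confirm with:

    -- Foreign keys skipped by a bulk load show is_not_trusted = 1
    -- (dbo.TargetTable is a hypothetical destination table)
    SELECT fk.name, fk.is_not_trusted
    FROM sys.foreign_keys AS fk
    WHERE fk.parent_object_id = OBJECT_ID(N'dbo.TargetTable');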

  • 2021-02-08 12:36

    Capturing event info for bulk insert operations (BCP.EXE, SqlBulkCopy, and I assume BULK INSERT and OPENROWSET(BULK ...)) is possible, but you won't be able to see the individual rows and columns.

    Bulk Insert operations show up as a single (well, one per batch, and the default is to do all rows in a single batch) DML statement of:

    INSERT BULK <destination_table_name> (
          <column1_name> <column1_datatype> [ COLLATE <column1_collation> ], ...
          ) [ WITH (<1 or more hints>) ]
    
    <hints> := KEEP_NULLS, TABLOCK, ORDER(...), ROWS_PER_BATCH=, etc
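
    For example, a bulk copy into a hypothetical dbo.Customers table might show up in the trace looking something like this (the table, columns, and hints are illustrative only, filled into the template above):

    INSERT BULK dbo.Customers (
          CustomerID INT,
          CustomerName NVARCHAR(100) COLLATE SQL_Latin1_General_CP1_CI_AS
          ) WITH (TABLOCK, KEEP_NULLS)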
    

    You can find the full list of "hints" on the MSDN page for the BCP Utility. Please note that SqlBulkCopy only supports a subset of those hints (e.g. KEEP_NULLS, TABLOCK, and a few others) but does not support ORDER(...) or ROWS_PER_BATCH= **. That is quite unfortunate, actually, as the ORDER() hint is needed to avoid a sort in tempdb and thus to allow the operation to be minimally logged (assuming the other conditions for minimal logging have also been satisfied).

    In order to see this statement, you need to capture any of the following events in SQL Server Profiler:

    SQL:BatchStarting
    SQL:BatchCompleted
    SQL:StmtStarting
    SQL:StmtCompleted

    You will also want to select, at least, the following columns (in SQL Server Profiler):

    TextData
    CPU
    Reads
    Writes
    Duration
    SPID
    StartTime
    EndTime
    RowCounts
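
    If you would rather script this as a server-side trace than click through the Profiler UI, the numeric IDs that sp_trace_setevent expects for the events and columns listed above can be looked up from the trace catalog views (a sketch; only the names come from the lists above):

    -- Event IDs for the four statement/batch events
    SELECT trace_event_id, name
    FROM sys.trace_events
    WHERE name IN (N'SQL:BatchStarting', N'SQL:BatchCompleted',
                   N'SQL:StmtStarting', N'SQL:StmtCompleted');

    -- Column IDs for the Profiler columns listed above
    SELECT trace_column_id, name
    FROM sys.trace_columns
    WHERE name IN (N'TextData', N'CPU', N'Reads', N'Writes', N'Duration',
                   N'SPID', N'StartTime', N'EndTime', N'RowCounts');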

    And, since a user cannot submit an INSERT BULK statement directly, you can probably filter on that in Column Filters if you merely want to see these events and nothing else.
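
    If you save the trace to a file instead of watching it live, the same filter idea works when reading it back with fn_trace_gettable (the file path below is hypothetical):

    -- Pull only the INSERT BULK statements out of a saved trace file
    SELECT TextData, SPID, StartTime, EndTime, Duration, RowCounts
    FROM sys.fn_trace_gettable(N'C:\Traces\BulkCopy.trc', DEFAULT)
    WHERE CAST(TextData AS NVARCHAR(MAX)) LIKE N'INSERT BULK%';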

    If you want to see the official notification that a BULK INSERT operation is beginning and/or ending, then you need to capture the following event:

    SQLTransaction

    and then add the following Profiler columns:

    EventSubClass
    ObjectName

    For ObjectName you will always get events showing "BULK INSERT" and whether that is beginning or ending is determined by the value in EventSubClass, which is either "0 - Begin" or "1 - Commit" (and I suppose if it fails you should see "2 - Rollback").
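
    As a hedged illustration of reading those events back from a saved trace file (the path is hypothetical; the Begin/Commit/Rollback mapping is the one just described):

    -- SQLTransaction events for the bulk load, with EventSubClass decoded
    SELECT StartTime,
           ObjectName,
           CASE EventSubClass
                WHEN 0 THEN 'Begin'
                WHEN 1 THEN 'Commit'
                WHEN 2 THEN 'Rollback'
           END AS EventSubClassName
    FROM sys.fn_trace_gettable(N'C:\Traces\BulkCopy.trc', DEFAULT)
    WHERE ObjectName = N'BULK INSERT';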

    If the ORDER() hint was not specified (and again, it cannot be specified when using SqlBulkCopy), then you will also get a "SQLTransaction" event showing "sort_init" in the ObjectName column. This event also has "0 - Begin" and "1 - Commit" events (as shown in the EventSubClass column).

    Finally, even though you cannot see the specific rows, you can still see operations against the Transaction Log (e.g. insert row, modify IAM row, modify PFS row, etc) if you capture the following event:

    TransactionLog

    and add the following Profiler column:

    ObjectID

    The main info of interest will be in the EventSubClass column, but unfortunately it contains just ID values and I could not find any translation of those values in the MSDN documentation. However, I did find the following blog post by Jonathan Kehayias: Using Extended Events in SQL Server Denali CTP1 to Map out the TransactionLog SQL Trace Event EventSubClass Values.

    @RBarryYoung pointed out that EventSubClass values and names can be found in the sys.trace_subclass_values catalog view, but querying that view shows that it has no rows for the TransactionLog event:

    SELECT * FROM sys.trace_categories -- 12 = Transactions
    SELECT * FROM sys.trace_events WHERE category_id = 12 -- 54 = TransactionLog
    SELECT * FROM sys.trace_subclass_values WHERE trace_event_id = 54 -- nothing :(
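
    If your server is new enough to have the Extended Events support that the blog post relies on (SQL Server 2012 / "Denali" and later), the operation names may be visible through the XE maps instead; this is an assumption about where the values are exposed, so treat it only as a starting point:

    -- Assumption: log-operation names (LOP_INSERT_ROWS, etc.) are exposed as
    -- Extended Events map values on newer versions; returns nothing otherwise.
    SELECT name, map_key, map_value
    FROM sys.dm_xe_map_values
    WHERE map_value LIKE N'LOP[_]%';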
    

    ** Please note that the SqlBulkCopy.BatchSize property is equivalent to setting the -b option for BCP.EXE, which is an operational setting that controls how each command breaks the rows up into sets. This is not the same as the ROWS_PER_BATCH= hint, which does not physically control how the rows are broken up into sets but instead allows SQL Server to better plan how it will allocate pages, and hence reduces the number of entries in the Transaction Log (sometimes by quite a bit). Still, my testing showed that:

    • specifying -b for BCP.EXE did set the ROWS_PER_BATCH= hint to that same value.
    • specifying the SqlBulkCopy.BatchSize property did not set the ROWS_PER_BATCH= hint, BUT, the benefit of reduced Transaction Log activity was somehow definitely there (magic?). The fact that the net effect is to still gain the benefit is why I did not mention ROWS_PER_BATCH= towards the top when I said it was unfortunate that the ORDER() hint was not supported by SqlBulkCopy. (A rough way to check that Transaction Log activity yourself is sketched below.)
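
    One rough way to check that Transaction Log activity yourself (a sketch that relies on the undocumented fn_dblog function and a hypothetical dbo.TargetTable destination) is to compare the number of log records generated by runs with different BatchSize settings:

    -- Count log records per operation for the destination table's allocation units
    -- (fn_dblog is undocumented; dbo.TargetTable is a hypothetical table name)
    SELECT Operation, COUNT(*) AS log_records
    FROM fn_dblog(NULL, NULL)
    WHERE AllocUnitName LIKE N'dbo.TargetTable%'
    GROUP BY Operation;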