UPDATE or MERGE of very big tables in SQL Server

野趣味 2021-01-04 12:35

I need to perform a daily update of a very large (300M records) and wide table TABLE1. The source data for the updates is located in another table, UTABLE.

2 Answers
  • 2021-01-04 13:01

    Actually, I've found some general recommendations for such queries: the idea of using SQL MERGE or UPDATE is a clever one, but it fails when we need to update many records (i.e. 75M) in a big and wide table (i.e. 240M rows).

    Looking at the query plan of the query below, we can see that the TABLE SCAN of TABLE1 and the final MERGE take 90% of the time.

    MERGE TABLE1 AS Target
    USING UTABLE AS Source
    ON Target.record_id = Source.record_id
    WHEN MATCHED AND (condition) THEN
        UPDATE SET Target.columns = Source.columns;
    

    So in order to use MERGE we need to:

    1. Reduce the number of rows to be updated and pass this information to SQL Server correctly. This can be done by making UTABLE smaller or by specifying an additional condition that narrows the part to be merged.
    2. Make sure the part to be merged fits in memory; otherwise the query runs much slower. Halving TABLE1 reduced my real query time from 11 hours to 40 minutes.
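
    Point 1 can be sketched by filtering inside the USING clause; this is a minimal sketch only, and the last_modified column and ColumnX are hypothetical names not taken from the original schema:

        -- Sketch: last_modified and ColumnX are assumed, illustrative columns.
        MERGE TABLE1 AS Target
        USING (
            SELECT record_id, ColumnX
            FROM UTABLE
            WHERE last_modified >= CAST(GETDATE() AS date)  -- narrow to today's changes
        ) AS Source
        ON Target.record_id = Source.record_id
        WHEN MATCHED AND Target.ColumnX <> Source.ColumnX THEN  -- skip no-op rows
            UPDATE SET Target.ColumnX = Source.ColumnX;

    Filtering in the derived table keeps the to-be-merged set small before the join, which is exactly what lets it fit in memory.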

    As Mark mentioned, you can use the UPDATE syntax with a WHERE clause to narrow the part to be merged - this will give the same results. Also, please avoid indexing TABLE1, as this causes additional work to rebuild the indexes during the MERGE.
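
    That UPDATE form might look like the following sketch, again assuming a hypothetical last_modified column as the narrowing predicate:

        -- Sketch: last_modified and ColumnX are assumptions, not original schema.
        UPDATE T
        SET T.ColumnX = U.ColumnX
        FROM TABLE1 AS T
        INNER JOIN UTABLE AS U
            ON T.record_id = U.record_id
        WHERE U.last_modified >= CAST(GETDATE() AS date)  -- narrowing condition
          AND T.ColumnX <> U.ColumnX;                     -- skip rows that would not change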

  • 2021-01-04 13:01

    First up, I'd find out where your bottleneck is - is your CPU pegged or idle? In other words, is your IO subsystem able to handle the load properly?

    Recreating the full table is a lot of IO load, not to mention it'll take up a lot of space to basically have the table stored twice temporarily.

    Do you really need to perform a MERGE? From what I can see, a simple UPDATE should suffice. Example:

    UPDATE
        TABLE1
    SET
        ColumnX = UTABLE.ColumnX
        ...
    FROM
        TABLE1
    INNER JOIN
        UTABLE ON TABLE1.record_id = UTABLE.record_id
    

    You could batch up the updates using ROWCOUNT, but that won't speed up the execution; it'll only help with reducing overall locking.
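
    A batching sketch along those lines, using UPDATE TOP(n) in a loop rather than SET ROWCOUNT (which is deprecated for DML statements in newer SQL Server versions); the batch size is an arbitrary assumption to be tuned:

        -- Sketch: 500000 is an assumed batch size; ColumnX is illustrative.
        DECLARE @rows int = 1;
        WHILE @rows > 0
        BEGIN
            UPDATE TOP (500000) T
            SET T.ColumnX = U.ColumnX
            FROM TABLE1 AS T
            INNER JOIN UTABLE AS U
                ON T.record_id = U.record_id
            WHERE T.ColumnX <> U.ColumnX;  -- only rows still needing the change
            SET @rows = @@ROWCOUNT;        -- stop once a pass touches no rows
        END;

    Each iteration commits separately, so locks are held for one batch at a time instead of for the whole 75M-row update.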

    Also - what kind of indexes do you have on the table? It may be faster to disable the indexes before the update and then rebuild them from scratch afterwards (the nonclustered ones only).
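
    The disable/rebuild step can be sketched with standard ALTER INDEX syntax; the index name here is hypothetical. Note that only nonclustered indexes should be disabled - disabling the clustered index makes the table inaccessible:

        -- Sketch: IX_TABLE1_ColumnX is an assumed nonclustered index name.
        ALTER INDEX IX_TABLE1_ColumnX ON TABLE1 DISABLE;

        -- ... run the big UPDATE here ...

        ALTER INDEX IX_TABLE1_ColumnX ON TABLE1 REBUILD;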
