SQL Server DELETE is slower with indexes

忘掉有多难 2021-01-02 20:21

I have a SQL Server 2005 database, and I tried putting indexes on the appropriate fields in order to speed up the DELETE of records from a table with millions of rows.

5 Answers
  • 2021-01-02 20:57

    I agree with Bob's comment above: if you are deleting large volumes of data from large tables, maintaining the indexes can take a while on top of deleting the data itself. It's the cost of doing business, though. As the rows are deleted, you are causing index updates to happen along with it.

    With regard to the log file growth: if you aren't doing anything with your log files, you could switch to the SIMPLE recovery model, but I urge you to check with your IT department on the impact before you make the change.
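
    For reference, the switch is a one-line command (YourDatabase is a placeholder; note that SIMPLE recovery gives up point-in-time restore, which is exactly the conversation to have with your IT department):

    ALTER DATABASE YourDatabase SET RECOVERY SIMPLE;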

    If you need to do the delete in real time, a good workaround is often to flag the data as inactive, either directly on the table or in another table, and exclude that data from queries; then come back and physically delete it when the users aren't staring at an hourglass. There is a second reason for taking this route: if you are deleting lots of data out of the table (which is what I am supposing based on your log file issue), you will likely want to run an index defrag to reorganise the index afterwards, and doing that out of hours is the way to go if you don't like users on the phone!
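
    A minimal sketch of that flag-now, delete-later pattern, using hypothetical names (an orders table, an IsActive column, and an index IX_orders_date; the question doesn't give a schema):

    -- during business hours: flag rows instead of deleting them
    UPDATE orders SET IsActive = 0 WHERE order_date < '20190101';

    -- day-to-day queries exclude the flagged rows
    SELECT * FROM orders WHERE IsActive = 1;

    -- out of hours: physically delete the flagged rows,
    -- then reorganise the index to clean up the resulting fragmentation
    DELETE FROM orders WHERE IsActive = 0;
    ALTER INDEX IX_orders_date ON orders REORGANIZE;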

  • 2021-01-02 21:03

    You can also try the T-SQL extension to the DELETE syntax, which lets you join to another table in the delete, and check whether it improves performance:

    DELETE FROM big_table
    FROM big_table AS b
    INNER JOIN small_table AS s ON s.id_product = b.id_product
    WHERE s.id_category = 1;
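
    For what it's worth, the same statement can also be written by targeting the alias directly; both forms are standard SQL Server syntax and should be equivalent:

    DELETE b
    FROM big_table AS b
    INNER JOIN small_table AS s ON s.id_product = b.id_product
    WHERE s.id_category = 1;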
    
  • 2021-01-02 21:03

    Try something like this to avoid a single bulk delete (and thereby rein in the log file growth):

    declare @continue bit
    set @continue = 1
    
    -- delete in batches of 10,000 rows until no matching rows remain;
    -- READPAST skips rows locked by other sessions instead of blocking on them
    while @continue = 1
    begin
    
        set @continue = 0
    
        delete top (10000) u
        from    <tablename> u WITH (READPAST)
        where   <condition>
    
        if @@ROWCOUNT > 0
            set @continue = 1 
    
    end
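
    One caveat, assuming your database uses the SIMPLE recovery model (an assumption, not something the question states): batching only caps log growth if the space logged by each batch can be reused before the next one runs. Issuing a checkpoint at the end of each iteration makes that happen; under FULL recovery you would rely on scheduled log backups instead:

    -- last statement inside the loop body, after the batched delete
    -- (SIMPLE recovery assumed; under FULL, take log backups instead)
    CHECKPOINT;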
    
  • 2021-01-02 21:11

    JohnB is deleting about 75% of the data. I think the following would have been a possible solution and probably one of the faster ones. Instead of deleting the data, create a new table and insert the data that you need to keep. Create the indexes on that new table after inserting the data. Now drop the old table and rename the new one to the same name as the old one.

    The above of course assumes that sufficient disk space is available to temporarily store the duplicated data.
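
    A sketch of that copy-and-swap approach, with hypothetical names and the same <condition> placeholder style as the batching answer above (real code must also recreate constraints, triggers, and permissions, which SELECT INTO does not copy):

    -- copy only the rows you want to keep into a fresh table
    SELECT *
    INTO   big_table_new
    FROM   big_table
    WHERE  NOT (<condition>);

    -- build the indexes only after the table is populated
    CREATE INDEX IX_big_table_new_id_product
        ON big_table_new (id_product);

    -- swap the new table into place
    DROP TABLE big_table;
    EXEC sp_rename 'big_table_new', 'big_table';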

  • 2021-01-02 21:21

    Indexes make lookups faster - like the index at the back of a book.

    Operations that change the data (like a DELETE) are slower, as they also have to maintain the indexes. Consider the same index at the back of the book: if you add, remove, or change pages, you have more work to do, because you also have to update the index.
