SQL Server: Efficiently dropping a group of rows in a table with millions and millions of rows

Asked 2021-02-04 12:26

I recently asked this question: MS SQL share identity seed amongst tables (many people wondered why).

I have the following layout of a table:

Table: Star

13 answers
  • 2021-02-04 13:06

    Partitioned views were the standard technique for this in SQL Server 2000, and they remain a valid option in SQL Server 2005. The problem comes from having a large quantity of tables and the maintenance overhead associated with them.
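
    As a hedged illustration of the partitioned-view technique (column names are borrowed from the answers below; the two-category setup and all object names are assumptions):

    -- Each member table carries a CHECK constraint on the partitioning
    -- column so the optimizer can eliminate the other members from plans.
    CREATE TABLE dbo.Stars_Cat1
    (
        StarID int NOT NULL,
        CategoryID smallint NOT NULL CHECK (CategoryID = 1),
        StarName varchar(200),
        CONSTRAINT PK_Stars_Cat1 PRIMARY KEY (CategoryID, StarID)
    )

    CREATE TABLE dbo.Stars_Cat2
    (
        StarID int NOT NULL,
        CategoryID smallint NOT NULL CHECK (CategoryID = 2),
        StarName varchar(200),
        CONSTRAINT PK_Stars_Cat2 PRIMARY KEY (CategoryID, StarID)
    )
    GO

    CREATE VIEW dbo.StarsView
    AS
    SELECT StarID, CategoryID, StarName FROM dbo.Stars_Cat1
    UNION ALL
    SELECT StarID, CategoryID, StarName FROM dbo.Stars_Cat2
    GO

    -- Removing a category is then a metadata-speed operation, but the
    -- view must be re-created without the dropped member afterwards:
    DROP TABLE dbo.Stars_Cat1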

    As you say, partitioning is an Enterprise Edition feature, but it is designed for exactly this kind of large-scale data removal (the rolling-window effect).
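
    A minimal sketch of the Enterprise approach (all names here are illustrative, and note that SQL Server 2005 caps a table at 1,000 partitions, so one partition per category only works for a bounded category count):

    CREATE PARTITION FUNCTION pfCategory (smallint)
    AS RANGE LEFT FOR VALUES (1, 2, 3)  -- one boundary per category

    CREATE PARTITION SCHEME psCategory
    AS PARTITION pfCategory ALL TO ([PRIMARY])

    CREATE TABLE dbo.StarsPartitioned
    (
        CategoryID smallint NOT NULL,
        StarName varchar(200)
    ) ON psCategory (CategoryID)

    -- Staging table: identical structure, same filegroup, not partitioned.
    CREATE TABLE dbo.StarsStaging
    (
        CategoryID smallint NOT NULL,
        StarName varchar(200)
    )

    -- Switching a partition out is a metadata-only operation - the O(1)
    -- drop the question asks for. Partition 2 holds CategoryID = 2 here.
    ALTER TABLE dbo.StarsPartitioned SWITCH PARTITION 2 TO dbo.StarsStaging
    TRUNCATE TABLE dbo.StarsStaging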

    One other option is batched deletes: instead of one very large transaction, you create hundreds of far smaller transactions, which avoids lock escalation and keeps each transaction small.

  • 2021-02-04 13:09

    My question is, is it a problem to have hundreds of thousands of tables in your SQL Server?

    Yes. It is a huge problem to have this many tables in your SQL Server. Every object has to be tracked by SQL Server as metadata, and once you include indexes, referential constraints, primary keys, defaults, and so on, then you are talking about millions of database objects.

    While SQL Server may theoretically be able to handle 2^32 objects, rest assured that it will start buckling under the load much sooner than that.

    And if the database doesn't collapse, your developers and IT staff almost certainly will. I get nervous when I see more than a thousand tables or so; show me a database with hundreds of thousands and I will run away screaming.

    Creating hundreds of thousands of tables as a poor-man's partitioning strategy will eliminate your ability to do any of the following:

    • Write efficient queries (how do you SELECT multiple categories?)
    • Maintain unique identities (as you've already discovered)
    • Maintain referential integrity (unless you like managing 300,000 foreign keys)
    • Perform ranged updates
    • Write clean application code
    • Maintain any sort of history
    • Enforce proper security (it seems evident that users would have to be able to initiate these create/drops - very dangerous)
    • Cache properly - 100,000 tables means 100,000 different execution plans all competing for the same memory, which you likely don't have enough of;
    • Hire a DBA (because rest assured, they will quit as soon as they see your database).

    On the other hand, it's not a problem at all to have hundreds of thousands of rows, or even millions of rows, in a single table - that's the way SQL Server and other SQL RDBMSes were designed to be used and they are very well-optimized for this case.

    The drop in O(1) is extremely desirable to me. Maybe there's a completely different solution I'm not thinking of?

    The typical solution to performance problems in databases is, in order of preference:

    • Run a profiler to determine what the slowest parts of the query are;
    • Improve the query, if possible (e.g. by eliminating non-sargable predicates);
    • Normalize or add indexes to eliminate those bottlenecks;
    • Denormalize when necessary (not generally applicable to deletes);
    • If cascade constraints or triggers are involved, disable those for the duration of the transaction and blow out the cascades manually (see the sketch just below this list).
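
    A minimal sketch of that last step, with hypothetical constraint and trigger names (your schema may not have either):

    ALTER TABLE dbo.StarObservations NOCHECK CONSTRAINT FK_Observations_Stars
    DISABLE TRIGGER trg_Stars_Delete ON dbo.Stars

    BEGIN TRANSACTION
        -- "Blow out the cascade" manually, children first:
        DELETE dbo.StarObservations WHERE CategoryID = 50
        DELETE dbo.Stars WHERE CategoryID = 50
    COMMIT

    ENABLE TRIGGER trg_Stars_Delete ON dbo.Stars
    -- WITH CHECK re-validates existing rows so the constraint stays trusted:
    ALTER TABLE dbo.StarObservations WITH CHECK CHECK CONSTRAINT FK_Observations_Stars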

    But the reality here is that you don't need a "solution."

    "Millions and millions of rows" is not a lot in a SQL Server database. It is very quick to delete a few thousand rows from a table of millions by simply indexing on the column you wish to delete from - in this case CategoryID. SQL Server can do this without breaking a sweat.

    In fact, deletions normally have an O(M log N) complexity (N = number of rows, M = number of rows to delete). In order to achieve an O(1) deletion time, you'd be sacrificing almost every benefit that SQL Server provides in the first place.

    O(M log N) may not be as fast as O(1), but the kind of slowdowns you're talking about (several minutes to delete) must have a secondary cause. The numbers do not add up, and to demonstrate this, I've gone ahead and produced a benchmark:


    Table Schema:

    CREATE TABLE Stars
    (
        StarID int NOT NULL IDENTITY(1, 1)
            CONSTRAINT PK_Stars PRIMARY KEY CLUSTERED,
        CategoryID smallint NOT NULL,
        StarName varchar(200)
    )
    
    CREATE INDEX IX_Stars_Category
    ON Stars (CategoryID)
    

    Note that this schema is not even really optimized for DELETE operations; it's a fairly run-of-the-mill table schema you might see in SQL Server. If this table has no relationships, then we don't need the surrogate key or the clustered index (or we could put the clustered index on the category). I'll come back to that later.

    Sample Data:

    This will populate the table with 10 million rows, using 500 categories (i.e. a cardinality of 1:20,000 per category). You can tweak the parameters to change the amount of data and/or cardinality.

    SET NOCOUNT ON
    
    DECLARE
        @BatchSize int,
        @BatchNum int,
        @BatchCount int,
        @StatusMsg nvarchar(100)
    
    SET @BatchSize = 1000
    SET @BatchCount = 10000
    SET @BatchNum = 1
    
    WHILE (@BatchNum <= @BatchCount)
    BEGIN
        SET @StatusMsg =
            N'Inserting rows - batch #' + CAST(@BatchNum AS nvarchar(5))
        RAISERROR(@StatusMsg, 0, 1) WITH NOWAIT
    
    INSERT Stars (CategoryID, StarName)
            SELECT
                v.number % 500,
                CAST(RAND() * v.number AS varchar(200))
            FROM master.dbo.spt_values v
            WHERE v.type = 'P'
            AND v.number >= 1
            AND v.number <= @BatchSize
    
        SET @BatchNum = @BatchNum + 1
    END
    

    Profile Script

    The simplest of them all...

    DELETE FROM Stars
    WHERE CategoryID = 50
    
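    (As an aside, not part of the original benchmark script: timings like the ones below can be captured by wrapping the statement in SQL Server's built-in statistics output.)

    SET STATISTICS TIME ON
    SET STATISTICS IO ON

    DELETE FROM Stars
    WHERE CategoryID = 50

    SET STATISTICS TIME OFF
    SET STATISTICS IO OFF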

    Results:

    This was tested on a 5-year-old workstation running, IIRC, a 32-bit dual-core AMD Athlon and a cheap 7200 RPM SATA drive.

    I ran the test 10 times using different CategoryIDs. The slowest time (cold cache) was about 5 seconds. The fastest time was 1 second.

    Perhaps not as fast as simply dropping the table, but nowhere near the multi-minute deletion times you mentioned. And remember, this isn't even on a decent machine!

    But we can do better...

    Everything about your question implies that this data isn't related. If you don't have relations, you don't need the surrogate key, and can get rid of one of the indexes, moving the clustered index to the CategoryID column.

    Now, as a rule, clustered indexes on non-unique/non-sequential columns are not a good practice. But we're just benchmarking here, so we'll do it anyway:

    CREATE TABLE Stars
    (
        CategoryID smallint NOT NULL,
        StarName varchar(200)
    )
    
    CREATE CLUSTERED INDEX IX_Stars_Category
    ON Stars (CategoryID)
    

    Run the same test data generator against this schema (incurring a mind-boggling number of page splits) and the same deletion took an average of just 62 milliseconds, and 190 ms from a cold cache (an outlier). For reference, if the index is made nonclustered (with no clustered index at all), the delete time only goes up to an average of 606 ms.

    Conclusion:

    If you're seeing delete times of several minutes, or even several seconds, then something is very, very wrong.

    Possible factors are:

    • Statistics aren't up to date (shouldn't be an issue here, but if it is, just run sp_updatestats);

    • Lack of indexing (although, curiously, removing the IX_Stars_Category index in the first example actually leads to a faster overall delete, because the clustered index scan is faster than the nonclustered index delete);

    • Improperly-chosen data types. If you only have millions of rows, as opposed to billions, then you do not need a bigint for the StarID. You definitely don't need it for the CategoryID - if you have fewer than 32,768 categories, you can even make do with a smallint. Every byte of unnecessary data in each row adds an I/O cost.

    • Lock contention. Maybe the problem isn't actually delete speed at all; maybe some other script or process is holding locks on Star rows, and the DELETE just sits around waiting for them to let go (a quick way to check for this is sketched after this list).

    • Extremely poor hardware. I was able to run this without any problems on a pretty lousy machine, but if you're running this database on a '90s-era Presario or some similar machine that's preposterously unsuitable for hosting an instance of SQL Server, and it's heavily-loaded, then you're obviously going to run into problems.

    • Very expensive foreign keys, triggers, constraints, or other database objects which you haven't included in your example, which might be adding a high cost. Your execution plan should clearly show this (in the optimized example above, it's just a single Clustered Index Delete).
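
    The lock-contention scenario above is easy to rule out with a standard DMV query (SQL Server 2005+, not specific to this schema):

    -- Any non-zero blocking_session_id means that session is waiting on a lock
    SELECT session_id, blocking_session_id, wait_type, wait_time
    FROM sys.dm_exec_requests
    WHERE blocking_session_id <> 0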

    I honestly cannot think of any other possibilities. Deletes in SQL Server just aren't that slow.


    If you're able to run these benchmarks and see roughly the same performance I saw (or better), then it means the problem is with your database design and optimization strategy, not with SQL Server or the asymptotic complexity of deletions. I would suggest, as a starting point, reading a little about optimization:

    • SQL Server Optimization Tips (Database Journal)
    • SQL Server Optimization (MSDN)
    • Improving SQL Server Performance (MSDN)
    • SQL Server Query Processing Team Blog
    • SQL Server Performance (particularly their tips on indexes)

    If this still doesn't help you, then I can offer the following additional suggestions:

    • Upgrade to SQL Server 2008, which gives you a myriad of compression options that can vastly improve I/O performance (a one-line example follows this list);

    • Consider pre-compressing the per-category Star data into a compact serialized list (using the BinaryWriter class in .NET) and storing it in a varbinary column. This way you can have one row per category. This violates 1NF rules, but since you don't seem to be doing anything with individual Star data from within the database anyway, I doubt you'd be losing much.

    • Consider using a non-relational database or storage format, such as db4o or Cassandra. Instead of implementing a known database anti-pattern (the infamous "data dump"), use a tool that is actually designed for that kind of storage and access pattern.
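
    The compression suggestion from the first bullet boils down to a single statement on SQL Server 2008+ (PAGE compression is an arbitrary choice here; ROW compression is the lighter alternative):

    ALTER TABLE dbo.Stars
    REBUILD WITH (DATA_COMPRESSION = PAGE)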

  • 2021-02-04 13:10

    If you want to optimize for deletes by category, a clustered composite index with the category in the first position might do more good than harm.
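
    A sketch of that suggestion, reusing the column names from the benchmark answer above (the exact key choice is an assumption):

    CREATE TABLE Stars
    (
        StarID int NOT NULL IDENTITY(1, 1),
        CategoryID smallint NOT NULL,
        StarName varchar(200),
        CONSTRAINT PK_Stars PRIMARY KEY CLUSTERED (CategoryID, StarID)
    )

    -- Rows are now physically grouped by category, so a category delete
    -- removes one contiguous range instead of rows scattered everywhere.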

    It would also help if you described the relationships on the table.

  • 2021-02-04 13:16

    Having separate tables is partitioning - you are just managing it manually and do not get any management assistance or unified access (without a view or partitioned view).

    Is Enterprise Edition more expensive than the cost of separately building and maintaining a partitioning scheme yourself?

    Alternatives to the long-running delete include populating a replacement table with an identical schema, simply excluding the rows to be deleted, and then swapping the tables with sp_rename.
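
    A hedged sketch of that swap, borrowing names from the benchmark answer above:

    -- Build a replacement table containing everything except the doomed category:
    SELECT StarID, CategoryID, StarName
    INTO dbo.Stars_New
    FROM dbo.Stars
    WHERE CategoryID <> 50

    -- Recreate indexes and constraints on Stars_New here, then swap names:
    BEGIN TRANSACTION
        EXEC sp_rename 'dbo.Stars', 'Stars_Old'
        EXEC sp_rename 'dbo.Stars_New', 'Stars'
    COMMIT

    DROP TABLE dbo.Stars_Old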

    I don't understand why whole categories of stars are being deleted on a regular basis. Presumably you have new categories created all the time, which means the number of categories must be huge, and partitioning on it (manually or not) would be very intensive.

  • 2021-02-04 13:20

    When you say deleting millions of rows is "too intense for SQL server", what do you mean? Do you mean that the log file grows too much during the delete?

    All you should have to do is execute the delete in batches of a fixed size:

    DECLARE @i INT
    SET @i = 1
    
    WHILE @i > 0
    BEGIN
        -- TOP requires parentheses in DELETE statements
        DELETE TOP (10000) FROM dbo.SuperBigTable
            WHERE CategoryID = 743
        -- @@ROWCOUNT drops to 0 once no matching rows remain, ending the loop
        SELECT @i = @@ROWCOUNT
    END
    

    If your database is in the full recovery model, you will have to run frequent transaction log backups during this process so that the space in the log can be reused. If the database is in the simple recovery model, you shouldn't have to do anything.
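
    For example, between batches you would run something like this (database name and backup path are placeholders):

    BACKUP LOG StarCatalog
    TO DISK = N'D:\Backups\StarCatalog_log.trn'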

    My only other recommendation is to make sure that you have an appropriate index on CategoryID. I might even recommend that it be the clustered index.

  • 2021-02-04 13:21

    As Cade pointed out, adding a table for each category is manually partitioning the data, without the benefits of unified access.

    Without partitioning, no deletion of millions of rows will ever be as fast as dropping a table.

    Therefore, it seems like using a separate table for each category may be a valid solution. However, since you've stated that some of these categories are kept, and some are deleted, here is a solution:

    1. Create a new stars table for each new category.
    2. Wait for the time period to expire, at which point you decide whether the stars for the category are kept or not.
    3. Roll the records into the main stars table if you plan on keeping them.
    4. Drop the per-category table (steps 3 and 4 are sketched below).
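
    A minimal sketch of steps 3 and 4, assuming a hypothetical per-category table named Stars_Category42:

    -- Keeping the category: roll its rows into the main table first...
    INSERT dbo.Stars (CategoryID, StarName)
    SELECT CategoryID, StarName
    FROM dbo.Stars_Category42

    -- ...then, kept or not, drop the per-category table in O(1):
    DROP TABLE dbo.Stars_Category42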

    This way, you will have a finite number of tables, depending on the rate at which you add categories and the length of the decision period.

    Ultimately, for the categories that you keep, you're doubling the work, but the extra work is distributed over time. Inserts at the end of the clustered index may be felt less by users than deletes from the middle. However, for the categories that you're not keeping, you're saving tons of time.

    Even if you're not technically saving work, perception is often the bigger issue.
