11 seconds to delete 240 rows in SQL Server

你离开我真会死。 Submitted on 2020-01-03 09:08:17

Question


I am running a delete statement:

DELETE FROM TransactionEntries
WHERE SessionGUID = @SessionGUID

The actual execution plan of the delete is:

Execution Tree
--------------
Clustered Index Delete(
   OBJECT:([GrobManagementSystemLive].[dbo].[TransactionEntries].[IX_TransactionEntries_SessionGUIDTransactionGUID]), 
   WHERE:([TransactionEntries].[SessionGUID]=[@SessionGUID])
)

The table is clustered by SessionGUID, so the 240 rows are physically together.

The table has no triggers on it.

The operation takes:

  • Duration: 11821 ms
  • CPU: 297
  • Reads: 14340
  • Writes: 1707

The table contains 11 indexes:

  • 1 clustered index (SessionGUID)
  • 1 unique (primary key) index
  • 9 other non-unique, non-clustered indexes

How can I figure out why this delete operation is performing 14,340 reads and taking 11 seconds?

  • the Avg. Disk Read Queue Length reaches 0.8
  • the Avg. Disk sec/Read never exceeds 4ms
  • the Avg. Disk Write Queue Length reaches 0.04
  • the Avg. Disk sec/Write never exceeds 4ms

What are the other reads for? The execution plan gives no indication of what it's reading.
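For reference, one way to get a per-table breakdown of those reads is to re-run the statement with STATISTICS IO and STATISTICS TIME enabled (a minimal sketch; the GUID value is a placeholder, and the transaction is rolled back so the test stays repeatable):

-- Sketch: measure the logical reads the delete actually performs.
DECLARE @SessionGUID uniqueidentifier;
SET @SessionGUID = '00000000-0000-0000-0000-000000000000';  -- placeholder value

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

BEGIN TRANSACTION;

DELETE FROM TransactionEntries
WHERE SessionGUID = @SessionGUID;

ROLLBACK TRANSACTION;  -- undo the test delete

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;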


Update:

EXECUTE sp_spaceused TransactionEntries

TransactionEntries  
  Rows      6,696,199
  Data:     1,626,496 KB (249 bytes per row)
  Indexes:  7,303,848 KB (1117 bytes per row)
  Unused:      91,648 KB    
            ============
  Reserved: 9,021,992 KB (1380 bytes per row)

With 1,380 bytes per row and 240 rows, that's roughly 330 kB to be deleted.

Counterintuitive that it can be so difficult for roughly 330 kB.

Update Two: Fragmentation

Name                           Scan Density  Logical Fragmentation
=============================  ============  =====================
IX_TransactionEntries_Tran...  12.834        48.392
IX_TransactionEntries_Curr...  15.419        41.239
IX_TransactionEntries_Tran...  12.875        48.372
TransactionEntries17           98.081         0.0049325
TransactionEntries5            12.960        48.180
PK_TransactionEntries          12.869        48.376
TransactionEntries18           12.886        48.480
IX_TranasctionEntries_CDR...   12.799        49.157
IX_TransactionEntries_CDR...   12.969        48.103
IX_TransactionEntries_Tra...   13.181        47.127
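(Scan Density and Logical Fragmentation are the figures that DBCC SHOWCONTIG reports, so presumably the table above came from something along these lines:)

-- Presumed source of the fragmentation figures: the per-index DBCC SHOWCONTIG report.
DBCC SHOWCONTIG ('TransactionEntries') WITH ALL_INDEXES, TABLERESULTS;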

I defragmented TransactionEntries17:

DBCC INDEXDEFRAG (0, 'TransactionEntries', 'TransactionEntries17')

since INDEXDEFRAG is an "online operation" (i.e. it only holds IS (Intent Shared) locks). I was then going to manually defragment the others, until business operations called saying that the system was dead, and they switched to doing everything on paper.
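A sketch of what defragmenting the remaining indexes the same way might look like (index names taken from the fragmentation table above; on SQL Server 2005 and later, ALTER INDEX ... REORGANIZE is the equivalent online operation):

-- Continue one index at a time; 0 means "current database".
DBCC INDEXDEFRAG (0, 'TransactionEntries', 'PK_TransactionEntries');
DBCC INDEXDEFRAG (0, 'TransactionEntries', 'TransactionEntries5');

-- Equivalent online operation on SQL Server 2005 and later:
ALTER INDEX PK_TransactionEntries ON dbo.TransactionEntries REORGANIZE;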

What say you: can 50% fragmentation and only 12% scan density cause horrible index scan performance?


Answer 1:


As @JoeStefanelli points out in comments, it's the extra non-clustered indexes.

You are deleting 240 rows from the table.

This equates to 2,640 index rows being deleted (240 rows × 11 indexes), 240 of which (the clustered index rows) include all fields in the table.

Depending on how wide they are and how many included fields you have, this could equate to all the extra read activity you are seeing.

The non-clustered index rows will definitely NOT be grouped together on disk, which will increase delays.
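As a rough way to gauge how wide those indexes are, something like the following catalog query (a sketch, assuming SQL Server 2005 or later) counts key and included columns per index:

-- Sketch: count key + included columns for every index on TransactionEntries.
SELECT  i.name      AS index_name,
        i.type_desc AS index_type,
        SUM(CASE WHEN ic.is_included_column = 0 THEN 1 ELSE 0 END) AS key_columns,
        SUM(CASE WHEN ic.is_included_column = 1 THEN 1 ELSE 0 END) AS included_columns
FROM    sys.indexes AS i
JOIN    sys.index_columns AS ic
        ON ic.object_id = i.object_id
       AND ic.index_id  = i.index_id
WHERE   i.object_id = OBJECT_ID('dbo.TransactionEntries')
GROUP BY i.name, i.type_desc
ORDER BY i.name;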




Answer 2:


I think the indexing might be the likeliest culprit, but I wanted to throw out another possibility. You mentioned no triggers, but are there any tables that have a foreign key relationship to this table? They would have to be checked to make sure no records reference the rows being deleted, and if you have cascade delete turned on, those records would have to be deleted as well.
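One way to check for such relationships is to query the catalog views (a sketch, assuming SQL Server 2005 or later):

-- Sketch: list foreign keys that reference TransactionEntries and whether they cascade deletes.
SELECT  fk.name                           AS foreign_key_name,
        OBJECT_NAME(fk.parent_object_id)  AS referencing_table,
        fk.delete_referential_action_desc AS on_delete_action
FROM    sys.foreign_keys AS fk
WHERE   fk.referenced_object_id = OBJECT_ID('dbo.TransactionEntries');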




Answer 3:


Having banged my head on many a SQL performance issue, my standard operating procedure for something like this is to (a sketch of one iteration follows the list):

  1. Back up the data
  2. Delete one of the indexes on the table in question
  3. Measure the operation
  4. Restore DB
  5. Repeat from #2 until #3 shows a drastic change. That's likely your culprit.
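A sketch of one iteration of that loop (the index name is hypothetical and the @SessionGUID value is a placeholder; restore the backup before testing the next index):

-- Hypothetical index name; substitute one of the real non-clustered indexes.
DROP INDEX IX_TransactionEntries_SomeColumn ON dbo.TransactionEntries;

DECLARE @SessionGUID uniqueidentifier;
SET @SessionGUID = '00000000-0000-0000-0000-000000000000';  -- placeholder value

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

DELETE FROM TransactionEntries
WHERE SessionGUID = @SessionGUID;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
-- Restore the database from backup before repeating with the next index.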


Source: https://stackoverflow.com/questions/7664204/11-seconds-to-delete-240-rows-in-sql-server
