Question
I have a situation where a "publisher" application essentially keeps a view model up to date by querying a VERY complex view and then merging the results into a denormalized view-model table, using separate insert, update, and delete operations.
Now that we have upgraded to SQL 2008, I figured it would be a great time to update these with the SQL MERGE statement. However, after writing the query, the subtree cost of the MERGE statement is 1214.54! With the old way, the sum of the Insert/Update/Delete was only 0.104!!
I can't figure out how a more straightforward way of describing the same exact operation could be so much crappier. Perhaps you can see the error of my ways where I cannot.
Some stats on the table: it has 1.9 million rows, and each MERGE operation inserts, updates, or deletes no more than 100 of them. In my test case, only one row is affected.
-- This table variable has the EXACT same structure as the published table
-- Yes, I've tried a temp table instead of a table variable, and it makes no difference
declare @tSource table
(
Key1 uniqueidentifier NOT NULL,
Key2 int NOT NULL,
Data1 datetime NOT NULL,
Data2 datetime,
Data3 varchar(255) NOT NULL,
PRIMARY KEY
(
Key1,
Key2
)
)
-- Fill the table variable with the desired current state of the view model, for
-- only those rows affected by @Key1. I'm not really concerned about the
-- performance of this part; it's already good. It results in very few rows in
-- the table var -- in fact, only 1 in my test case
insert into @tSource
select *
from vw_Source_View with (nolock)
where Key1 = @Key1
-- Now it's time to merge @tSource into TargetTable
;MERGE TargetTable as T
USING @tSource S
on S.Key1 = T.Key1 and S.Key2 = T.Key2
-- Only update if the Data columns do not match
WHEN MATCHED AND (T.Data1 <> S.Data1 OR T.Data2 <> S.Data2 OR T.Data3 <> S.Data3) THEN
UPDATE SET
T.Data1 = S.Data1,
T.Data2 = S.Data2,
T.Data3 = S.Data3
-- Insert when missing in the target
WHEN NOT MATCHED BY TARGET THEN
INSERT (Key1, Key2, Data1, Data2, Data3)
VALUES (Key1, Key2, Data1, Data2, Data3)
-- Delete when missing in the source, being careful not to delete the REST
-- of the table by applying the T.Key1 = @id condition
WHEN NOT MATCHED BY SOURCE AND T.Key1 = @id THEN
DELETE
;
So how does this get to 1200 subtree cost? The data access from the tables themselves seems to be quite efficient. In fact, 87% of the cost of the MERGE seems to be from a Sort operation near the end of the chain:
MERGE (0%) <- Index Update (12%) <- Sort (87%) <- (...)
And that sort has 0 rows feeding into and out of it. Why does it take 87% of the resources to sort 0 rows?
UPDATE
I posted the actual (not estimated) execution plan for just the MERGE operation in a Gist.
Answer 1:
Subtree costs should be taken with a large grain of salt (and especially so when you have huge cardinality errors). The output of SET STATISTICS IO ON; SET STATISTICS TIME ON; is a better indicator of actual performance.
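For example (sketch only), wrapping whichever statement you are measuring gives per-table logical reads and CPU/elapsed time in the Messages tab, and those numbers reflect what actually happened rather than the optimizer's estimates:
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- ...run the MERGE (or the old insert/update/delete sequence) here...

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;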
The zero-row sort doesn't actually take 87% of the resources. The problem in your plan is one of statistics estimation: the costs shown in the actual plan are still estimated costs, and they are not adjusted to account for what actually happened.
There is a point in the plan where a filter reduces 1,911,721 rows to 0, but the estimated rows going forward are 1,860,310. Thereafter all the costs are bogus, culminating in the sort with its estimated 3,348,560 rows and 87% of the cost.
The cardinality estimation error can be reproduced outside the MERGE statement by looking at the estimated plan for the Full Outer Join with equivalent predicates (it gives the same 1,860,310 row estimate).
SELECT *
FROM TargetTable T
FULL OUTER JOIN @tSource S
ON S.Key1 = T.Key1 and S.Key2 = T.Key2
WHERE
CASE WHEN S.Key1 IS NOT NULL
/*Matched by Source*/
THEN CASE WHEN T.Key1 IS NOT NULL
/*Matched by Target*/
THEN CASE WHEN [T].[Data1]<>S.[Data1] OR
[T].[Data2]<>S.[Data2] OR
[T].[Data3]<>S.[Data3]
THEN (1)
END
/*Not Matched by Target*/
ELSE (4)
END
/*Not Matched by Source*/
ELSE CASE WHEN [T].[Key1]=@id
THEN (3)
END
END IS NOT NULL
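If you want to capture that estimated plan from a script rather than through the "Display Estimated Execution Plan" button, something along these lines should work (sketch only; SET SHOWPLAN_XML has to be the only statement in its batch, and the variables have to be re-declared inside the batch being compiled, since a SHOWPLAN batch cannot see variables from earlier batches):
SET SHOWPLAN_XML ON;
GO
-- Nothing in this batch is executed; the estimated plan XML is returned instead
DECLARE @id uniqueidentifier;
DECLARE @tSource table
(
    Key1 uniqueidentifier NOT NULL,
    Key2 int NOT NULL,
    Data1 datetime NOT NULL,
    Data2 datetime,
    Data3 varchar(255) NOT NULL,
    PRIMARY KEY (Key1, Key2)
);
-- ...the FULL OUTER JOIN query from above goes here...
GO
SET SHOWPLAN_XML OFF;
GO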
That said, the plan up to the filter itself does look quite suboptimal. It is doing a full clustered index scan, when perhaps you want a plan with two clustered index range seeks: one to retrieve the single row matched by the primary key from the join on the source, and one to retrieve the T.Key1 = @id range (though maybe the scan is there to avoid the need to sort into clustered key order later?).
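Purely to illustrate the shape of that access pattern (this is not part of the suggested rewrite below), the two seeks would correspond to something like:
-- Illustrative sketch only: the two clustered index range seeks described above,
-- assuming TargetTable's clustered primary key is (Key1, Key2) and reusing the
-- question's @tSource and @id variables
SELECT T.*
FROM TargetTable AS T
JOIN @tSource AS S
    ON T.Key1 = S.Key1
   AND T.Key2 = S.Key2          -- seek on the full key for rows matched by the source
UNION
SELECT T.*
FROM TargetTable AS T
WHERE T.Key1 = @id;             -- seek on the leading key column for the @id range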
Perhaps you could try this rewrite and see if it works any better or worse
;WITH FilteredTarget AS
(
SELECT T.*
FROM TargetTable AS T WITH (FORCESEEK)
JOIN @tSource S
ON (T.Key1 = S.Key1
AND S.Key2 = T.Key2)
OR T.Key1 = @id
)
MERGE FilteredTarget AS T
USING @tSource S
ON (T.Key1 = S.Key1
AND S.Key2 = T.Key2)
-- Only update if the Data columns do not match
WHEN MATCHED AND S.Key1 = T.Key1 AND S.Key2 = T.Key2 AND
(T.Data1 <> S.Data1 OR
T.Data2 <> S.Data2 OR
T.Data3 <> S.Data3) THEN
UPDATE SET T.Data1 = S.Data1,
T.Data2 = S.Data2,
T.Data3 = S.Data3
-- Note from original poster: This extra "safety clause" turned out not to
-- affect the behavior or the execution plan, so I removed it and it works
-- just as well without, but if you find yourself in a similar situation
-- you might want to give it a try.
-- WHEN MATCHED AND (S.Key1 <> T.Key1 OR S.Key2 <> T.Key2) AND T.Key1 = @id THEN
-- DELETE
-- Insert when missing in the target
WHEN NOT MATCHED BY TARGET THEN
INSERT (Key1, Key2, Data1, Data2, Data3)
VALUES (Key1, Key2, Data1, Data2, Data3)
WHEN NOT MATCHED BY SOURCE AND T.Key1 = @id THEN
DELETE;
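While testing either version, you could also append an OUTPUT clause (optional; not part of the original answer) to see exactly which rows each branch touched. For example, the last two lines of the MERGE would become:
WHEN NOT MATCHED BY SOURCE AND T.Key1 = @id THEN
    DELETE
-- $action reports INSERT, UPDATE, or DELETE for each affected target row
OUTPUT $action AS merge_action,
       inserted.Key1, inserted.Key2,
       deleted.Key1,  deleted.Key2;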
Source: https://stackoverflow.com/questions/7407560/t-sql-merge-performance-in-typical-publishing-context