Our server application receives information about rows to add to the database at a rate of 1000-2000 rows per second, all day long. There are two mutually-exclusive key columns, tag and longTag, on which an incoming row may duplicate an existing one.
I think splitting the giant DELETE statement into two DELETEs may help:
one DELETE to deal with tag and a separate DELETE to deal with longTag. This helps SQL Server choose and use the indexes efficiently.
Of course, you can still fire the two DELETE statements in one DB round-trip.
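For illustration, the batched pair could look like this (the #incoming staging table holding the new rows is an assumption; any source of incoming rows works the same way):

```sql
-- Both statements sent in a single batch / round-trip.
-- Each join uses only one column, so each can hit its own index.
DELETE t
FROM MyTable t
INNER JOIN #incoming i ON t.tag = i.tag;

DELETE t
FROM MyTable t
INNER JOIN #incoming i ON t.longTag = i.longTag;
```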
Hope this helps
Something like this could streamline the process (you would simply INSERT the rows, whether or not they already exist; no need for an up-front DELETE statement):
CREATE TRIGGER dbo.TR_MyTable_Merge
ON dbo.MyTable
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;

    BEGIN TRANSACTION;

    -- Use the alias in the DELETE clause; "DELETE MyTable ... FROM MyTable t"
    -- is ambiguous once the table is aliased.
    DELETE t
    FROM MyTable t
    INNER JOIN inserted i ON t.tag = i.tag;

    DELETE t
    FROM MyTable t
    INNER JOIN inserted i ON t.longTag = i.longTag;

    INSERT INTO MyTable
    SELECT * FROM inserted;

    COMMIT TRANSACTION;

    SET NOCOUNT OFF;
END
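With the trigger in place, the application just issues ordinary INSERTs and the trigger takes care of removing duplicates first. Column names other than tag and longTag are hypothetical here:

```sql
-- Any existing row with tag = 'abc' (or a matching longTag) is
-- deleted by the trigger before this row is inserted.
INSERT INTO dbo.MyTable (tag, longTag, value)
VALUES ('abc', NULL, 42);
```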
EDIT: The previously combined DELETE statement has been broken up into two separate statements to enable optimal index use.
Not using DELETE at all, but rather UPDATE-ing the affected/duplicate rows in place, would be easier on the indexes.
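A sketch of that idea, again assuming a #incoming staging table and a hypothetical value payload column: matching rows are updated in place, and only genuinely new rows are inserted.

```sql
-- Update rows that already exist under either key.
-- (The OR could be split into two UPDATEs for better index use,
-- in the same spirit as splitting the DELETE above.)
UPDATE t
SET t.value = i.value
FROM MyTable t
INNER JOIN #incoming i
    ON t.tag = i.tag OR t.longTag = i.longTag;

-- Insert only the rows that matched nothing.
INSERT INTO MyTable (tag, longTag, value)
SELECT i.tag, i.longTag, i.value
FROM #incoming i
WHERE NOT EXISTS (
    SELECT 1 FROM MyTable t
    WHERE t.tag = i.tag OR t.longTag = i.longTag
);
```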