This simple code produces a deadlock. Simple example program included

2021-01-15 17:52

Code (notice the order of the values is different, so it alternates between locking rows):

    static void Main(string[] args)
    {
        List
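
Since the posted snippet is truncated, here is a minimal sketch of the scenario the title and the answers describe: two parallel transactions that update the same two rows in opposite order. The table dbo.Data(ID int PRIMARY KEY, Value int) and the connection string are placeholders, not taken from the original program.

    using System;
    using System.Data.SqlClient;
    using System.Threading.Tasks;

    static class DeadlockDemo
    {
        // Placeholder connection string; adjust to your environment.
        const string ConnStr = "Server=.;Database=Test;Integrated Security=true";

        static void UpdateInOrder(int firstId, int secondId)
        {
            using (var conn = new SqlConnection(ConnStr))
            {
                conn.Open();
                using (var tx = conn.BeginTransaction())
                {
                    foreach (var id in new[] { firstId, secondId })
                    {
                        using (var cmd = new SqlCommand(
                            "UPDATE dbo.Data SET Value = Value + 1 WHERE ID = @id", conn, tx))
                        {
                            cmd.Parameters.AddWithValue("@id", id);
                            cmd.ExecuteNonQuery(); // exclusive row lock held until Commit
                        }
                    }
                    tx.Commit();
                }
            }
        }

        static void Main(string[] args)
        {
            // Task A locks row 1 then row 2; Task B locks row 2 then row 1.
            // Each ends up waiting for a lock the other already holds.
            Task.WaitAll(
                Task.Run(() => UpdateInOrder(1, 2)),
                Task.Run(() => UpdateInOrder(2, 1)));
        }
    }

Run against SQL Server, one of the two tasks is typically chosen as the deadlock victim and its transaction is rolled back with error 1205.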

        
3 Answers
  • 2021-01-15 18:00

    Code (notice the order of the values is different, so it alternates between locking rows)

    No, it doesn't alternate. It acquires the locks in two different orders. Deadlock is guaranteed.

    Is it possible not to ... update just those 2 rows in parallel?

    Not like that it isn't. What you're asking for is the definition of a deadlock. Something has to give. The solution must come from your business logic: there should be no attempt to process the same set of IDs from distinct transactions. What that means is entirely business-specific. If you cannot achieve that, then you are basically begging for deadlocks. There are some things you can do, but none is bulletproof and all come at great cost. The problem is higher up the chain.

  • 2021-01-15 18:08

    I agree with the other answers regarding the locking.

    The more pressing question is what are you hoping to gain from this? There's only one cable those commands are travelling down.

    You are probably making the overall performance worse by doing this. Far better to do your computation in parallel but serialize (and possibly batch) your updates.
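
    As a rough sketch of that idea (the method, table, and delegate names are illustrative, not from the question): compute the new values with PLINQ, then apply them one after another on a single connection so no two transactions ever compete for the same row locks.

        using System;
        using System.Collections.Generic;
        using System.Data.SqlClient;
        using System.Linq;

        static class SerializedWriter
        {
            // Compute new values in parallel, then write them serially on one connection.
            static void ApplyUpdates(IEnumerable<int> ids, Func<int, int> computeNewValue, string connStr)
            {
                var results = ids.AsParallel()
                                 .Select(id => new { Id = id, NewValue = computeNewValue(id) })
                                 .ToList(); // all CPU work finishes before any lock is taken

                using (var conn = new SqlConnection(connStr))
                {
                    conn.Open();
                    using (var tx = conn.BeginTransaction())
                    {
                        foreach (var r in results) // single writer: no competing lock order
                        {
                            using (var cmd = new SqlCommand(
                                "UPDATE dbo.Data SET Value = @v WHERE ID = @id", conn, tx))
                            {
                                cmd.Parameters.AddWithValue("@v", r.NewValue);
                                cmd.Parameters.AddWithValue("@id", r.Id);
                                cmd.ExecuteNonQuery();
                            }
                        }
                        tx.Commit();
                    }
                }
            }
        }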

  • 2021-01-15 18:10

    Your two statements acquire row locks in different orders. That's a classic case for deadlocks. You can fix this by ensuring that locks are always taken in some global order (for example, ordered by ID). You should probably coalesce the two UPDATE statements into one and sort the list of IDs on the client before sending it to SQL Server. For many typical UPDATE plans this actually works fine (not guaranteed, though).
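
    A sketch of that coalesced form (table name and increment are placeholders): de-duplicate and sort the IDs on the client, then send one parameterized UPDATE.

        using System.Collections.Generic;
        using System.Data.SqlClient;
        using System.Linq;

        static class CoalescedUpdate
        {
            // One UPDATE covering all IDs, sorted client-side so every caller tends to
            // touch the rows in the same order (as noted above, not a hard guarantee).
            static void UpdateAll(IReadOnlyCollection<int> ids, int delta, string connStr)
            {
                var sorted = ids.Distinct().OrderBy(id => id).ToList();
                var names = sorted.Select((_, i) => "@id" + i).ToList();
                var sql = "UPDATE dbo.Data SET Value = Value + @delta WHERE ID IN ("
                          + string.Join(", ", names) + ")";

                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@delta", delta);
                    for (int i = 0; i < sorted.Count; i++)
                        cmd.Parameters.AddWithValue(names[i], sorted[i]);
                    conn.Open();
                    cmd.ExecuteNonQuery(); // a single statement per caller
                }
            }
        }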

    Or you can add retry logic for when you detect a deadlock (SqlException.Number == 1205). This is more elegant because it requires no deeper code changes, but deadlocks have performance implications, so only do this if the deadlock rate is low.
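
    For example, a small retry wrapper along these lines (the attempt count and back-off are arbitrary):

        using System;
        using System.Data.SqlClient;
        using System.Threading;

        static class DeadlockRetry
        {
            // Re-run the action when SQL Server picks it as the deadlock victim (error 1205).
            static void RunWithRetry(Action action, int maxAttempts = 3)
            {
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        action();
                        return;
                    }
                    catch (SqlException ex) when (ex.Number == 1205 && attempt < maxAttempts)
                    {
                        // The victim's transaction was rolled back; wait a moment and retry.
                        Thread.Sleep(100 * attempt);
                    }
                }
            }
        }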

    If your parallel processing generates lots of updates, you could INSERT all of those updates into a temp table (which can be done concurrently) and, when you are done, execute one big UPDATE that copies the individual update records over to the main table. You would just change the join source in your sample queries.
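
    A sketch of that staging approach (table and column names are made up for illustration): bulk-load the computed (ID, Value) pairs into a temp table, then fold them into the main table with one set-based UPDATE. For simplicity this sketch loads the staged rows from a single session; the point above is that the staging inserts do not contend on the main table.

        using System.Data;
        using System.Data.SqlClient;

        static class StagedUpdate
        {
            // 'updates' is expected to carry two int columns: ID and Value.
            static void ApplyStagedUpdates(DataTable updates, string connStr)
            {
                using (var conn = new SqlConnection(connStr))
                {
                    conn.Open();

                    using (var create = new SqlCommand(
                        "CREATE TABLE #Updates (ID int PRIMARY KEY, Value int NOT NULL)", conn))
                    {
                        create.ExecuteNonQuery();
                    }

                    using (var bulk = new SqlBulkCopy(conn))
                    {
                        bulk.DestinationTableName = "#Updates";
                        bulk.WriteToServer(updates);
                    }

                    using (var apply = new SqlCommand(
                        "UPDATE d SET d.Value = u.Value " +
                        "FROM dbo.Data AS d JOIN #Updates AS u ON u.ID = d.ID", conn))
                    {
                        apply.ExecuteNonQuery(); // one big UPDATE against the main table
                    }
                }
            }
        }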
