Question
We have a table that looks like this:
appointment_id | team_id
---------------|---------
          1001 |        1
          1005 |        4
          1009 |        7
In this table, appointment_id is the primary key and team_id is just a regular index.
The code for creating the table:
CREATE TABLE `appointment_primary_teams` (
  `appointment_id` int(11) NOT NULL,
  `team_id` int(11) NOT NULL,
  PRIMARY KEY (`appointment_id`),
  KEY `team_id` (`team_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
However, occasionally the below code fails:
// Even though it looks like we are making 2 different PDO connections here,
// the return is the same PDO instance shared by 2 instances of a class used
// for running queries. (This is how our system allows 2 different prepared
// queries at the same time.)
$remove_query = database::connect('master_db');
$insert_query = database::connect('master_db');
$remove_query->prepare("
    DELETE FROM `appointment_primary_teams` WHERE appointment_id = :appointment_id
");
$insert_query->prepare("
    INSERT INTO `appointment_primary_teams` (
        `appointment_id`,
        `team_id`
    ) VALUES (
        :appointment_id,
        :team_id
    )
");
// Looping through a list of appointment data
foreach ($appointments as $appointment) {
    // Runs fine
    $remove_query->bind(':appointment_id', $appointment['id'], CAST_INT);
    $remove_query->run();

    // Occasionally errors saying $appointment['id'] already exists
    $insert_query->bind(':appointment_id', $appointment['id'], CAST_INT);
    $insert_query->bind(':team_id', $appointment['team_id'], CAST_INT);
    $insert_query->run();
}
The exact error is:
Database Error: SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry '1001' for key 'PRIMARY'
At first, I thought this was a race condition within our API where the user was double-clicking a submit button, but our system logs all requests and I can confirm that the user only sent 1 request.
I am assuming that this is failing due to some type of race condition within MySQL, but I am unsure how to prevent it. If that's true, I could just tell the script to sleep for a few milliseconds, but that's not an ideal solution, because if the DB hangs at all the issue can come back.
My Question: What is causing this issue, and how do I prevent this error?
This is for an Amazon RDS server (MySQL 5.6.27); PHP is version 7.0.27 running on Nginx 1.13.9 on Amazon Linux AMI release 2017.09.
NOTE: Some of the code has been changed to remove proprietary information and simplify the issue, however I have preserved all the functionality of the code.
Update 1
To be clear, despite the code shown, there is only 1 instance of a PDO connection in use. After running this code, the connection IDs came back as the same, meaning that it is the same connection to MySQL.
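For reference, the connection-ID check is simple: MySQL's CONNECTION_ID() returns the thread ID of the current connection, so running it through both handles and comparing the results shows whether they share a connection. A rough sketch, assuming the same database::connect wrapper as above (the way a result is read back from run() is an assumption about the wrapper's API):

$check_a = database::connect('master_db');
$check_b = database::connect('master_db');

$check_a->prepare("SELECT CONNECTION_ID() AS conn_id");
$check_b->prepare("SELECT CONNECTION_ID() AS conn_id");

// MySQL thread ID seen by each handle; identical values mean both handles
// share one underlying MySQL connection.
$id_a = $check_a->run();
$id_b = $check_b->run();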
Update 2
This ended up being, somehow, a race condition within MySQL itself. My best guess is that the race condition is either in query queuing (where MySQL returns to PHP before the query has run completely) or in MySQL's in-memory indexes (where MySQL hasn't updated the index by the time the next query is run).
I have done several versions of tests to try to confirm that's what's going on, and all of them point to it. If I had to guess, this could probably be fixed by one of AWS's configuration files, but at this point I have no choice but to resort to the REPLACE INTO syntax as tadman has suggested.
Answer 1:
The most reliable way to fix a race condition is to avoid having a sequencing problem in the first place. Replace the pair of queries with one query:
INSERT INTO `appointment_primary_teams` (
    `appointment_id`,
    `team_id`
) VALUES (
    :appointment_id,
    :team_id
) ON DUPLICATE KEY UPDATE team_id = VALUES(team_id)
This is an atomic operation and it will either insert a record or update an existing record, no DELETE required. This is a good general-purpose approach to maintaining these sorts of relationship records.
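Adapted to the loop from the question, the single upsert replaces both prepared statements. A sketch, assuming the same database::connect wrapper and bind()/run() API shown above:

$upsert_query = database::connect('master_db');
$upsert_query->prepare("
    INSERT INTO `appointment_primary_teams` (
        `appointment_id`,
        `team_id`
    ) VALUES (
        :appointment_id,
        :team_id
    ) ON DUPLICATE KEY UPDATE team_id = VALUES(team_id)
");

foreach ($appointments as $appointment) {
    // One atomic statement per appointment: inserts the row, or updates
    // team_id if the appointment_id already exists.
    $upsert_query->bind(':appointment_id', $appointment['id'], CAST_INT);
    $upsert_query->bind(':team_id', $appointment['team_id'], CAST_INT);
    $upsert_query->run();
}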
The alternative is the more heavy-handed REPLACE INTO approach:
REPLACE INTO `appointment_primary_teams` (
    `appointment_id`,
    `team_id`
) VALUES (
    :appointment_id,
    :team_id
)
This stomps any existing records. The downside is that it acts like an atomic DELETE/INSERT pair, which allocates new PRIMARY KEY values if those are AUTO_INCREMENT. In your case it isn't, so this is not an issue.
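To illustrate that downside, here is a hypothetical table (not the one from the question) where the primary key is AUTO_INCREMENT and a REPLACE collides on a unique key:

CREATE TABLE `replace_demo` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `email` varchar(255) NOT NULL,
  `name` varchar(255) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `email` (`email`)
) ENGINE=InnoDB;

INSERT INTO `replace_demo` (`email`, `name`) VALUES ('a@example.com', 'Alice');
-- the row is stored with id = 1

REPLACE INTO `replace_demo` (`email`, `name`) VALUES ('a@example.com', 'Alicia');
-- the old row is deleted and a new one inserted, so the row now has id = 2;
-- anything still referencing id = 1 is silently broken

Since appointment_primary_teams supplies appointment_id explicitly and has no AUTO_INCREMENT column, that hazard does not apply here.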
The way you get race conditions like that is that the INSERT query must be running at the same time as the DELETE query. That's only possible if there are two connections, which could be because two requests are being received simultaneously, both attempting to alter the record, or because the single instance is somehow running both queries in parallel.
Source: https://stackoverflow.com/questions/49499654/delete-then-insert-occasionally-fails-with-duplicate-key