How to improve INSERT INTO … SELECT locking behavior

一个人的身影 2020-12-05 06:46

In our production database, we run the following pseudo-code SQL batch query every hour:

INSERT INTO TemporaryTable
    (SELECT * FROM HighlyContentiousTableInInnoDb
    WHERE allKindsOfComplexConditions are true)

9 Answers
  • 2020-12-05 06:46

    I'm not familiar with MySQL, but hopefully there is an equivalent to SQL Server's SNAPSHOT and READ COMMITTED SNAPSHOT transaction isolation levels. Using either of these should solve your problem.
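
    A minimal sketch of the closest MySQL analogue, assuming InnoDB and row-based binary logging (or binary logging disabled): READ COMMITTED serves consistent reads from MVCC snapshots, much like READ COMMITTED SNAPSHOT in SQL Server. The table name is taken from the question; the WHERE clause is the question's pseudo-code placeholder.

    -- Read from an MVCC snapshot instead of taking shared locks on the source.
    SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
    INSERT INTO TemporaryTable
        (SELECT * FROM HighlyContentiousTableInInnoDb
        WHERE allKindsOfComplexConditions are true);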

  • 2020-12-05 06:47

    Disclaimer: I'm not very experienced with databases, and I'm not sure if this idea is workable. Please correct me if it's not.

    How about setting up a secondary equivalent table, HighlyContentiousTableInInnoDb2, and creating AFTER INSERT/UPDATE/DELETE triggers on the first table that keep the new table updated with the same data? You should then be able to lock HighlyContentiousTableInInnoDb2 and only slow down the triggers of the primary table, instead of all queries; see the sketch after the list below.

    Potential problems:

    • 2 x data stored
    • Additional work for all inserts, updates and deletes
    • Might not be transactionally sound
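
    A minimal sketch of the trigger idea, assuming a mirror table with an identical schema; the column names (id, payload) are hypothetical, and the AFTER UPDATE / AFTER DELETE triggers would follow the same pattern.

    -- Mirror table with the same structure as the contentious table.
    CREATE TABLE HighlyContentiousTableInInnoDb2
        LIKE HighlyContentiousTableInInnoDb;

    -- Keep the mirror in sync on every insert (hypothetical columns).
    CREATE TRIGGER mirror_after_insert
    AFTER INSERT ON HighlyContentiousTableInInnoDb
    FOR EACH ROW
        INSERT INTO HighlyContentiousTableInInnoDb2 (id, payload)
        VALUES (NEW.id, NEW.payload);
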
  • 2020-12-05 06:47

    The reason for the lock (a read lock) is to ensure that your reading transaction does not read "dirty" data that a parallel transaction might currently be writing. Most DBMSs offer settings that let users set and revoke read and write locks manually. This might be interesting for you if reading dirty data is not a problem in your case.

    I think there is no fully safe way to read from a table without any locks in a DBMS with multiple concurrent transactions.

    But here is some brainstorming: if space is no issue, you can think about running two instances of the same table: HighlyContentiousTableInInnoDb2 for your constant read/write transactions and a HighlyContentiousTableInInnoDb2_shadow for your batched access. Maybe you can fill the shadow table automatically via triggers/routines inside your DBMS, which is faster and smarter than an additional write transaction everywhere.

    Another idea is to ask: do all transactions need to access the whole table? If not, you could use views to expose only the necessary columns. If the continuous access and your batched access are disjoint regarding columns, it might be possible that they don't lock each other; see the sketch below.
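
    A minimal sketch of the view idea; the column names are hypothetical, and whether this actually reduces contention depends on how your DBMS implements locking underneath the view.

    -- Expose only the columns the batch job needs (hypothetical columns).
    CREATE VIEW BatchView AS
        SELECT id, payload
        FROM HighlyContentiousTableInInnoDb;

    INSERT INTO TemporaryTable
        (SELECT * FROM BatchView
        WHERE allKindsOfComplexConditions are true);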

  • 2020-12-05 06:58

    I was facing the same issue using CREATE TEMPORARY TABLE ... SELECT ..., which failed with SQLSTATE[HY000]: General error: 1205 Lock wait timeout exceeded; try restarting transaction.

    Based on your initial query, my problem was solved by locking HighlyContentiousTableInInnoDb before starting the query:

    -- If TemporaryTable is a regular (non-TEMPORARY) table, it must be
    -- locked too: LOCK TABLES ... READ, TemporaryTable WRITE;
    LOCK TABLES HighlyContentiousTableInInnoDb READ;
    INSERT INTO TemporaryTable
        (SELECT * FROM HighlyContentiousTableInInnoDb
        WHERE allKindsOfComplexConditions are true);
    UNLOCK TABLES;
    
  • 2020-12-05 07:01

    The answer to this question is much easier now: use row-based replication and the READ COMMITTED isolation level.

    The locking you were experiencing disappears.

    Longer explanation: http://harrison-fisk.blogspot.com/2009/02/my-favorite-new-feature-of-mysql-51.html
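
    A minimal sketch of the two settings combined, assuming MySQL 5.1 or later with binary logging enabled; setting binlog_format globally requires the SUPER privilege.

    -- Row-based binlogging makes READ COMMITTED safe for INSERT ... SELECT,
    -- so InnoDB no longer takes shared next-key locks on the source rows.
    SET GLOBAL binlog_format = 'ROW';
    SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
    INSERT INTO TemporaryTable
        (SELECT * FROM HighlyContentiousTableInInnoDb
        WHERE allKindsOfComplexConditions are true);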

  • 2020-12-05 07:01

    You can set the binlog format like this:

    SET GLOBAL binlog_format = 'ROW';
    

    Edit my.cnf if you want to make it permanent:

    [mysqld]
    binlog_format=ROW
    

    Set the isolation level for the current session before you run your query:

    SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    INSERT INTO t1 SELECT ...;
    

    If this doesn't help, try setting the isolation level server-wide, not only for the current session:

    SET GLOBAL TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    

    Edit my.cnf if you want to make it permanent:

    [mysqld]
    transaction-isolation = READ-UNCOMMITTED
    

    You can change READ-UNCOMMITTED to READ-COMMITTED, which is a better isolation level because it does not allow dirty reads.
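
    A quick sanity check that both settings took effect; @@transaction_isolation assumes MySQL 5.7.20 or later (older servers use @@tx_isolation instead).

    SELECT @@global.binlog_format, @@session.transaction_isolation;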
