“The total number of locks exceeds the lock table size” Deleting 267 Records

隐瞒了意图╮ 2020-12-17 18:27

I'm trying to delete 267 records out of about 40 million. The query looks like:

    delete from pricedata
    where pricedate > '20120413'


3 Answers
  • 2020-12-17 18:52

    What worked: changing innodb_buffer_pool_size to 256M (see comments under Quassnoi's original comment).
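
    For reference, a minimal sketch of that change (the 256M figure comes from this answer; SET GLOBAL only works on MySQL 5.7.5+, where the buffer pool can be resized at runtime, so on older servers the value has to go into my.cnf followed by a restart):

      -- 256M = 268435456 bytes; changing this at runtime requires MySQL 5.7.5+
      SET GLOBAL innodb_buffer_pool_size = 268435456;

      -- on older servers, set it in the [mysqld] section of my.cnf instead
      -- and restart:  innodb_buffer_pool_size = 256M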

  • 2020-12-17 19:00

    It seems that you don't have an index on pricedate (or MySQL does not use this index for some reason).

    With REPEATABLE READ (the default transaction isolation level), InnoDB locks every record the DELETE scans, including rows that are read and then filtered out, and it seems there is not enough room in the lock table (which is allocated from the buffer pool) for roughly 40M row locks.
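
    To check whether such an index exists and whether MySQL would use it for this range condition, something along these lines helps (EXPLAIN on the equivalent SELECT works on any MySQL version; EXPLAIN on a DELETE needs 5.6+):

      -- list the indexes defined on the table
      SHOW INDEX FROM pricedata;

      -- see whether the range condition can use an index
      EXPLAIN SELECT COUNT(*) FROM pricedata WHERE pricedate > '20120413';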

    To work around this problem use any of these solutions:

    1. Create the index on pricedate if it's not there (this may take time; see the sketch after this list)

    2. Break your query into smaller chunks:

      DELETE
      FROM    pricedata
      WHERE   pricedate > '20120413'
              AND id BETWEEN 1 AND 1000000;

      DELETE
      FROM    pricedata
      WHERE   pricedate > '20120413'
              AND id BETWEEN 1000001 AND 2000000;
      

      etc. (change the id ranges as needed). Note that each statement should be run in its own transaction (don't forget to commit after each statement if AUTOCOMMIT is off).

    3. Run the DELETE with the READ COMMITTED transaction isolation level (sketched after this list). That makes InnoDB release the locks on non-matching records as soon as they have been read. This will not work if you use the binary log in statement format and don't allow binlog-unsafe queries (which is the default setting).
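
    As a rough illustration of options 1 and 3 (the index name below is an assumption, not taken from the original schema):

      -- option 1: index the filter column so the DELETE touches only matching rows
      CREATE INDEX idx_pricedate ON pricedata (pricedate);

      -- option 3: relax the isolation level for this session only, then delete
      -- (make sure binlog_format is ROW, or binlog-unsafe statements are allowed)
      SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
      DELETE FROM pricedata WHERE pricedate > '20120413';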

  • 2020-12-17 19:01

    (A late answer, but always good to have when people find this issue on Google.)

    A solution that does not require changing innodb_buffer_pool_size or creating an index is to limit the number of rows deleted per statement.

    So, in your case: DELETE FROM pricedata WHERE pricedate > '20120413' LIMIT 100; for example. This removes 100 rows and leaves 167 behind, and you can then run the same query again to delete another 100. The last 67 are tricky: when fewer rows match than the given limit, you end up with the lock error again, probably because the server keeps scanning for more matching rows to fill the limit of 100. In that case, use LIMIT 67 to delete the last batch. (Of course you could also use LIMIT 267 from the start.)

    And for those who like to script... a nice example I used in a bash script to clean up old data:

       # Assumes ${MYSQL}, ${MYSQL_USER}, ${MYSQL_PWD}, ${DB}, ${LOGGER} and ${TABLE}
       # are defined earlier in the script.

       # Count the number of rows left to be deleted
       QUERY="select count(*) from pricedata where pricedate > '20120413';"
       AMOUNT=`${MYSQL} -u ${MYSQL_USER} -p${MYSQL_PWD} -e "${QUERY}" ${DB} | tail -1`
       ERROR=0
       while [ ${AMOUNT} -gt 0 -a ${ERROR} -eq 0 ]
       do
          ${LOGGER} "   ${AMOUNT} rows left to delete"
          # Delete at most 1000 rows per statement; shrink the LIMIT for the last batch
          if [ ${AMOUNT} -lt 1000 ]
          then
             LIMIT=${AMOUNT}
          else
             LIMIT=1000
          fi
          QUERY="delete low_priority from pricedata where pricedate > '20120413' limit ${LIMIT};"
          ${MYSQL} -u ${MYSQL_USER} -p${MYSQL_PWD} -e "${QUERY}" ${DB}
          STATUS=$?
          if [ ${STATUS} -ne 0 ]
          then
             ${LOGGER} "Cleanup failed for ${TABLE}"
             ERROR=1
          fi
          # Recount what is left before the next iteration
          QUERY="select count(*) from pricedata where pricedate > '20120413';"
          AMOUNT=`${MYSQL} -u ${MYSQL_USER} -p${MYSQL_PWD} -e "${QUERY}" ${DB} | tail -1`
       done
    