How to improve the speed of InnoDB writes per second in a MySQL database

再見小時候 2020-12-16 07:16

On my server, inserting records into the MySQL database is very slow. According to Server Status, InnoDB writes per second is around 20.

I am not an expert, just a recent graduate.

4 Answers
  • 2020-12-16 07:31

    Please set innodb_buffer_pool_size to 512M. It may increase performance:

    SET GLOBAL innodb_buffer_pool_size = 536870912; -- 512M, in bytes

    Note that this variable is dynamic only in MySQL 5.7.5 and later; on older versions, set innodb_buffer_pool_size = 512M in my.cnf and restart the server.

  • 2020-12-16 07:38

    Recommendations could vary based on your implementation. Here are some notes copied directly from MySQL documentation:

    Bulk Data Loading Tips

    When importing data into InnoDB, make sure that MySQL does not have autocommit mode enabled because that requires a log flush to disk for every insert. To disable autocommit during your import operation, surround it with SET autocommit and COMMIT statements.
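    The advice above can be sketched as follows (`yourtable` is a placeholder name):

    ```sql
    -- Disable autocommit so each INSERT does not force its own log flush to disk
    SET autocommit = 0;

    INSERT INTO yourtable VALUES (1, 2);
    INSERT INTO yourtable VALUES (5, 5);
    -- ... many more inserts ...

    -- One log flush for the whole batch instead of one per row
    COMMIT;
    ```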

    Use the multiple-row INSERT syntax to reduce communication overhead between the client and the server if you need to insert many rows:

    INSERT INTO yourtable VALUES (1,2), (5,5), ...;

    If you are doing a huge batch insert, avoid the "select last_insert_id" that follows each insert, as it seriously slows down the insertions (to the order of turning a 6-minute insert into a 13-hour one). If you need the generated number for another insertion (a child table, perhaps), assign your own numbers to the ids. This obviously only works if you are sure nobody else is doing inserts at the same time.
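    A minimal sketch of assigning your own ids, using hypothetical parent/child tables `orders` and `order_items`:

    ```sql
    -- Assign the parent id yourself instead of reading back LAST_INSERT_ID()
    -- after every insert. Only safe if nothing else inserts into `orders`
    -- concurrently.
    INSERT INTO orders (id, customer) VALUES (1001, 'alice');
    INSERT INTO order_items (order_id, sku) VALUES (1001, 'widget-a'), (1001, 'widget-b');
    ```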

    As mentioned already, you can increase the size of the InnoDB buffer pool (innodb_buffer_pool_size variable). This is generally a good idea because the default size is pretty small and most systems can accommodate lending more memory to the pool. This will increase the speed of most queries, especially SELECTs (as more records will be kept in the buffer between queries). The insert buffer is also a section of the buffer pool and will store recently inserted records, which will increase speed if you are basing future inserts on values from previous inserts. Hope this helps :)

  • 2020-12-16 07:39

    Some hints:

    • Minimize the number of indexes - there will be less index maintenance. This is obviously a trade-off with SELECT performance.
    • Maximize the number of INSERTs per transaction - the "durability price" will be lower (i.e. the physical writing to disk can be done in the background while the rest of the transaction is still executing, provided the transaction is long enough). One large transaction will usually be faster than many small transactions, but this is obviously contingent on the actual logic you are trying to implement.
    • Move the table to faster storage, such as an SSD. Reads can be cached, but a durable transaction must be physically written to disk, so caching alone is not enough.
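    The second hint can be sketched as follows (hypothetical table `t`; the best batch size is workload-dependent, so benchmark):

    ```sql
    -- Pay the "durability price" (a redo-log flush at COMMIT) once per batch
    -- instead of once per row.
    START TRANSACTION;
    INSERT INTO t (a, b) VALUES (1, 'x');
    INSERT INTO t (a, b) VALUES (2, 'y');
    -- ... up to a few thousand rows per transaction ...
    COMMIT;
    ```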

    Also, it would be helpful if you could show us your exact database structure and the exact INSERT statement you are using.

  • 2020-12-16 07:42

    If you are using the InnoDB engine with a local disk, try benchmarking with innodb_flush_method = O_DSYNC. With O_DSYNC, our bulk inserts (wrapped in a transaction) improved.

    Adjust the flush method

    In some versions of GNU/Linux and Unix, flushing files to disk with the Unix fsync() call (which InnoDB uses by default) and similar methods is surprisingly slow. If database write performance is an issue, conduct benchmarks with the innodb_flush_method parameter set to O_DSYNC.

    https://dev.mysql.com/doc/refman/5.5/en/optimizing-innodb-diskio.html
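    Note that innodb_flush_method is not a dynamic variable, so it has to go in the server configuration and takes effect after a restart. A sketch of the relevant my.cnf section:

    ```ini
    # my.cnf -- benchmark before adopting; O_DSYNC is not faster on every system
    [mysqld]
    innodb_flush_method = O_DSYNC
    ```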
