MySQL optimizing INSERT speed being slowed down because of indices

Asked by 终归单人心 on 2020-12-03 01:42

The MySQL docs say:

The size of the table slows down the insertion of indexes by log N, assuming B-tree indexes.

Does this mean that the insertion of each new row is slowed down by a factor of log N, where N is the number of rows in the table?

5 Answers
  • 2020-12-03 02:16

    Dropping the index will certainly help. Also consider using LOAD DATA INFILE; you can find some comparisons and benchmarks here

    Also, when constructing the PRIMARY KEY, use the fields that come first in your table, in order, i.e. swap the second and third fields in the structure.
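    As a sketch of the drop/load/rebuild sequence, assuming a hypothetical `orders` table with a secondary index `idx_customer` and a CSV file to load:

    ```sql
    -- Table, index, and file names are made up for illustration
    ALTER TABLE orders DROP INDEX idx_customer;

    LOAD DATA INFILE '/tmp/orders.csv'
    INTO TABLE orders
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n';

    -- Rebuild the index once, after all rows are in,
    -- instead of updating it on every single insert
    ALTER TABLE orders ADD INDEX idx_customer (customer_id);
    ```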

  • 2020-12-03 02:20

    I have found that, in some cases, inserting in medium-sized chunks inside transactions can help, as it sometimes allows bulk operations. In other cases it has made things slower, presumably due to locks and transaction overhead.
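    A minimal, runnable sketch of this chunked-commit pattern, using Python's built-in SQLite driver as a stand-in for MySQL (the table name, chunk size, and row data are made up for illustration; with MySQL you would tune the chunk size by measuring):

    ```python
    import sqlite3

    # In-memory SQLite stands in for MySQL here; the chunked-commit
    # pattern itself is the same for any SQL database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")

    rows = [(i, f"payload-{i}") for i in range(10_000)]
    CHUNK = 500  # commit every 500 rows; there is no universal best value

    for start in range(0, len(rows), CHUNK):
        with conn:  # one transaction (and one flush) per chunk
            conn.executemany(
                "INSERT INTO events (id, payload) VALUES (?, ?)",
                rows[start:start + CHUNK],
            )

    count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
    print(count)  # 10000
    ```

    The `with conn:` block commits on success and rolls back on error, so each chunk either lands whole or not at all.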

  • 2020-12-03 02:30

    If you are doing a bulk insert of a million rows, then dropping the index, doing the insert, and rebuilding the index will probably be faster. However, if your problem is that single row inserts are taking too long then you have other problems (like not enough memory) and dropping the index will not help much.

  • 2020-12-03 02:31

    Building/rebuilding the index is what you're trying to speed up. If you must have this table/key structure, faster hardware and/or tweaking the server configuration to speed up the index build is likely the answer - be sure your server and settings are such that it can be accomplished in memory.

    Otherwise, think about making trade-offs with the structure that would improve insert speeds. Alternatively, think about ways you can happily live with a 3 minute insert.

  • 2020-12-03 02:32

    If you want fast inserts, the first thing you need is proper hardware: a sufficient amount of RAM, an SSD instead of mechanical drives, and a reasonably powerful CPU.

    Since you use InnoDB, you'll want to tune it, because the default configuration is designed for slow, old machines.

    Here's a great read about configuring InnoDB
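    The linked article covers this in depth; as a rough illustration, these are the InnoDB settings most often tuned for write throughput (the values below are placeholders to size against your own hardware, not recommendations):

    ```ini
    [mysqld]
    # Give InnoDB most of the available RAM (placeholder value)
    innodb_buffer_pool_size = 8G
    # Larger redo logs mean fewer forced flushes during bulk writes
    innodb_log_file_size = 1G
    # 1 = flush on every commit (safest); 2 trades some durability for speed
    innodb_flush_log_at_trx_commit = 1
    ```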

    After that, you need to understand one thing: how databases do their work internally, how hard drives work, and so on. I'll simplify the mechanism in the following description:

    On commit, MySQL waits for the hard drive to confirm that it wrote the data. That's why transactions are slow on mechanical drives: they can do only 200-400 input/output operations per second. Translated, that means you can get roughly 200 insert queries per second using InnoDB on a mechanical drive. Naturally, this is a simplified explanation, just to outline what's happening; it's not the full mechanism behind a transaction.

    Since a single query, especially one for a row the size of yours, writes relatively few bytes, you're effectively wasting a precious I/O operation on it.

    If you wrap multiple queries (100, 200, or more; there's no exact number, you have to test) in a single transaction and then commit it, you'll instantly achieve more writes per second.
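    In plain SQL the batching looks like this (table and column names are made up):

    ```sql
    START TRANSACTION;
    INSERT INTO events (payload) VALUES ('a');
    INSERT INTO events (payload) VALUES ('b');
    -- ... a hundred or so more inserts ...
    COMMIT;  -- one disk flush for the whole batch instead of one per row
    ```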

    The Percona guys achieve 15k inserts a second on relatively cheap hardware. Even 5k inserts a second isn't bad. A table such as yours is small; I've done tests on a similar table (with 3 more columns) and managed to get to 1 billion records without noticeable issues, on a machine with 16 GB of RAM and a 240 GB SSD (a single drive, no RAID, used for testing purposes).

    TL;DR: follow the link above, configure your server, get an SSD, wrap multiple inserts in one transaction, and profit. And don't just turn indexing off and back on; that's not always applicable, because at some point you'll spend the processing and I/O time to build the indexes anyway.
