MySQL + big tables = slow queries?

[愿得一人] 2021-01-13 20:54

I have performance issues with a big table in MySQL: the table has 38 million rows and is 3 GB in size. I want to select rows by filtering on two columns, and I have tried many indexing strategies.

2 Answers
  • 2021-01-13 21:26

    The main thing you can do is to add indexes.

    Any time you use a column in a WHERE clause, make sure it has an index. There isn't one on your created column.

    The composite index that includes the created column is, in essence, NOT an index on created, because created isn't the first column in that index.

    When using composite indexes, you should almost always put the column with the higher cardinality first. So having the indexes (created, word_id) and (word_id) would give you a significant boost, as sketched below.
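
    As a minimal sketch, assuming a hypothetical table name words (the real table name isn't shown in the question), the suggested indexes could be added like this:

        -- Hypothetical table name; substitute your own.
        -- Composite index with the suggested column order, plus a
        -- standalone index on word_id.
        ALTER TABLE words
            ADD INDEX idx_created_word (created, word_id),
            ADD INDEX idx_word (word_id);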

  • 2021-01-13 21:49

    A query with LIMIT 10000000,1 will always be very slow, because MySQL has to fetch more than 10 million rows and throw away all but the last one. If your application needs such a query regularly, consider a redesign, such as keyset ("seek") pagination, sketched below.
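
    As a sketch of that redesign, assuming a hypothetical table words ordered by created (the literal values are placeholders), keyset pagination remembers where the last page ended instead of counting rows from the start:

        -- Slow: reads 10,000,001 rows and discards all but the last.
        SELECT * FROM words ORDER BY created LIMIT 10000000, 1;

        -- Seek method: continue from the last created value already
        -- seen; an index on created makes this cheap.
        SELECT * FROM words
        WHERE created > '2021-01-13 00:00:00'  -- placeholder: last value seen
        ORDER BY created
        LIMIT 1;

    If created isn't unique, add a tiebreaker column (such as the primary key) to both the WHERE clause and the ORDER BY so no rows are skipped or repeated.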

    Tables do not have a "beginning" or an "end"; they aren't inherently ordered.

    It looks to me like you need an index on (word_id, created).
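
    To confirm the optimizer actually uses such an index, EXPLAIN the query. A minimal sketch with a hypothetical table name and placeholder values:

        CREATE INDEX idx_word_created ON words (word_id, created);

        -- In the EXPLAIN output, the key column should show
        -- idx_word_created and the rows estimate should drop sharply.
        EXPLAIN
        SELECT * FROM words
        WHERE word_id = 42 AND created >= '2021-01-01';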

    You should DEFINITELY performance-test this on a non-production server with production-grade hardware.

    Incidentally, a 3 GB database isn't big nowadays; it will fit in RAM on all but the smallest of servers. (You are running a 64-bit OS, right, and have tuned innodb_buffer_pool_size appropriately? Or your sysadmin did?)
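
    A quick way to check the buffer pool size and, purely as an illustration (size it to your working set), set it in the server configuration:

        -- Current buffer pool size, in bytes:
        SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

        -- In my.cnf (MySQL 5.7+ can also resize it at runtime with
        -- SET GLOBAL innodb_buffer_pool_size = ...):
        -- [mysqld]
        -- innodb_buffer_pool_size = 4G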
