Using more than one index per table is dangerous?

春和景丽 2021-02-02 14:25

In a former company I worked at, the rule of thumb was that a table should have no more than one index (allowing the odd exception, and certain parent-tables holding references

9 Answers
  • 2021-02-02 14:55

    That is utterly ridiculous. First, you need multiple indexes in order to perform correctly. For instance, if you have a primary key, you automatically have an index. That means you couldn't index anything else under the rule you described. So if you don't index foreign keys, joins will be slow, and if you don't index fields used in the WHERE clause, queries will still be slow. Yes, you can have too many indexes, since they take extra time on inserts, updates, and deletes, but more than one is not dangerous; it is a requirement for a system that performs well. And I have found that users tolerate a longer time to insert far better than they tolerate a longer time to query.

    Now, the exception might be a system that takes thousands of readings per second from some automated equipment. Such a database generally doesn't have indexes, to speed up inserts. But these databases are usually not read directly either; instead, the data is transferred daily to a reporting database, which is indexed.
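    As a concrete illustration (the table and column names here are hypothetical, not from the question), indexing a foreign key is a one-line statement, and it is exactly the kind of index the joins above need:

    ```sql
    -- Hypothetical schema: orders.customer_id references customers.
    -- Without this index, the join below has to scan orders.
    CREATE INDEX ix_orders_customer_id ON orders (customer_id);

    SELECT c.name, o.order_date
      FROM customers c
      JOIN orders o ON o.customer_id = c.customer_id
     WHERE c.name = 'Acme';
    ```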

  • 2021-02-02 14:57

    Yes, definitely - too many indexes on a table can be worse than no indexes at all. However, I don't see any merit in a blanket "at most one index per table" rule.

    For SQL Server, my rule is:

    • index any foreign key fields - this helps JOINs and is beneficial to other queries, too
    • index any other fields when it makes sense, e.g. when lots of intensive queries can benefit from it

    Finding the right mix of indices - weighing the pros of speeding up queries vs. the cons of additional overhead on INSERT, UPDATE, DELETE - is not an exact science - it's more about know-how, experience, measuring, measuring, and measuring again.
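    On SQL Server, one way to ground the "measuring, measuring, measuring" advice is the index-usage DMV. A query along these lines (a sketch; verify the column list against your version) shows whether an index is read often enough to justify its write cost:

    ```sql
    -- Indexes with many writes but few reads are candidates for removal.
    SELECT OBJECT_NAME(s.object_id) AS table_name,
           i.name AS index_name,
           s.user_seeks + s.user_scans + s.user_lookups AS reads,
           s.user_updates AS writes
      FROM sys.dm_db_index_usage_stats AS s
      JOIN sys.indexes AS i
        ON i.object_id = s.object_id
       AND i.index_id  = s.index_id
     WHERE s.database_id = DB_ID()
     ORDER BY s.user_updates DESC;
    ```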

    Any fixed rule is bound to be more counterproductive than helpful.

    The best content on indexing comes from Kimberly Tripp - the Queen of Indexing - see her blog posts here.

  • 2021-02-02 14:58

    Optimizing retrieval with indexes must be carefully designed to reflect actual query patterns. For a table with a primary key, you will typically have at least one clustered index (that is how the data is actually stored), and any additional indexes take advantage of that layout.
    After analyzing the queries that execute against the table, you want to design indexes that cover them. That may mean building one or more indexes, but it depends heavily on the queries themselves; the decision cannot be made by looking at column statistics alone.
    For tables that are mostly inserted into, e.g. ETL staging tables, you may not want a primary key at all, or you may drop the indexes before a load and re-create them afterwards if the data changes too quickly. I personally would be scared to step into an environment with a hard-coded rule for the index-per-table ratio.
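    To make "an index that covers them" concrete: in SQL Server, a covering index can carry non-key columns via INCLUDE, so a query like the one below (hypothetical names) is answered from the index alone, with no lookup into the base table:

    ```sql
    -- The index covers the query: the filter/sort columns are keys,
    -- and the selected column rides along via INCLUDE.
    CREATE INDEX ix_orders_customer_date
        ON orders (customer_id, order_date)
        INCLUDE (total_amount);

    SELECT order_date, total_amount
      FROM orders
     WHERE customer_id = 42
     ORDER BY order_date;
    ```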

  • 2021-02-02 15:01

    You need to create exactly as many indexes as you need to create. No more, no less. It is as simple as that.

    Everybody "knows" that an index will slow down DML statements on a table. But for some reason very few people actually bother to test just how "slow" it becomes in their context. Sometimes I get the impression that people think that adding another index will add several seconds to each inserted row, making it a game changing business tradeoff that some fictive hotshot user should decide in a board room.

    I'd like to share an example that I just created on my two-year-old PC, using a standard MySQL installation. I know you tagged the question SQL Server, but the example should be easy to convert. I insert 1,000,000 rows into three tables: one without indexes, one with one index, and one with nine indexes.

    drop table if exists numbers;
    drop table if exists one_million_rows;
    drop table if exists one_million_one_index;
    drop table if exists one_million_nine_index;
    
    /*
    || Create a dummy table to assist in generating rows
    */
    create table numbers(n int);
    
    insert into numbers(n) values(0),(1),(2),(3),(4),(5),(6),(7),(8),(9);
    
    /*
    || Create a table consisting of 1,000,000 consecutive integers
    */   
    create table one_million_rows as
        select d1.n + (d2.n * 10)
                    + (d3.n * 100)
                    + (d4.n * 1000)
                    + (d5.n * 10000)
                    + (d6.n * 100000) as n
          from numbers d1
              ,numbers d2
              ,numbers d3
              ,numbers d4
              ,numbers d5
              ,numbers d6;
    
    
    /*
    || Create an empty table with 9 integer columns.
    || One column will be indexed
    */
    create table one_million_one_index(
       c1 int, c2 int, c3 int
      ,c4 int, c5 int, c6 int
      ,c7 int, c8 int, c9 int
      ,index(c1)
    );
    
    /*
    || Create an empty table with 9 integer columns.
    || All nine columns will be indexed
    */
    create table one_million_nine_index(
       c1 int, c2 int, c3 int
      ,c4 int, c5 int, c6 int
      ,c7 int, c8 int, c9 int
      ,index(c1), index(c2), index(c3)
      ,index(c4), index(c5), index(c6)
      ,index(c7), index(c8), index(c9)
    );
    
    
    /*
    || Insert 1,000,000 rows in the table with one index
    */
    insert into one_million_one_index(c1,c2,c3,c4,c5,c6,c7,c8,c9)
    select n, n, n, n, n, n, n, n, n
      from one_million_rows;
    
    /*
    || Insert 1,000,000 rows in the table with nine indexes
    */
    insert into one_million_nine_index(c1,c2,c3,c4,c5,c6,c7,c8,c9)
    select n, n, n, n, n, n, n, n, n
      from one_million_rows;
    

    My timings are:

    • 1m rows into the table without indexes: 0.45 seconds
    • 1m rows into the table with 1 index: 1.5 seconds
    • 1m rows into the table with 9 indexes: 6.98 seconds

    I'm better with SQL than statistics and math, but I'd like to think that adding 8 indexes to my table added (6.98 - 1.5) = 5.48 seconds in total. Each index would then have contributed 0.685 seconds (5.48 / 8) across all 1,000,000 rows. That means the added overhead per row per index was roughly 0.000000685 seconds. SOMEBODY CALL THE BOARD OF DIRECTORS!

    In conclusion, I'd like to say that the above test case doesn't prove anything. It just shows that tonight I was able to insert 1,000,000 consecutive integers into a table in a single-user environment. Your results will be different.

  • 2021-02-02 15:01

    Every table should have a PK, which is indexed of course (generally as a clustered index), and every FK should be indexed as well.
    Finally, you may want to index fields you often sort on, provided their data is well differentiated: for a field with only 5 possible values in a table with 1 million records, an index will not be of great benefit.
    I tend to be minimalistic with indexes until the DB starts being well filled, and... slower. It is easy to identify the bottlenecks and add just the right indexes at that point.
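    The selectivity point can be checked before creating an index; if the ratio below is close to zero (say, 5 distinct values across a million rows), a plain index on that column will rarely help (column and table names here are hypothetical):

    ```sql
    -- Rough selectivity estimate: distinct values / total rows.
    SELECT COUNT(DISTINCT status) * 1.0 / COUNT(*) AS selectivity
      FROM orders;
    ```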

  • 2021-02-02 15:03

    Unless you like very slow reads, you should have indexes. Don't go overboard, but don't be afraid of being liberal with them either. EVERY FK should be indexed: on inserts into referencing tables, a lookup is done against each of these columns to make sure the references are valid, and the index helps there. Indexed columns are also used often in joins and selects.

    We have some tables that are inserted into rarely, with millions of records. Some of these tables are also quite wide. It's not uncommon for them to have 15+ indexes. On other tables with heavy inserting and low reads, we might have only a handful of indexes - but one index per table is crazy.
