What happens when I exhaust a bigint generated key? How to handle it?

夕颜 2020-12-03 01:37

I can't imagine a good answer for this myself, so I thought of asking it here. In my mind, I'm always wondering what will happen if the AUTO INCREMENT PRIMARY ID of a table reaches the maximum value of its BIGINT type. What happens then, and how should I handle it?

4 Answers
  • 2020-12-03 02:14

    Once the auto-increment value hits the limit of the field size, INSERTs will generate an error.

    In practice you will get the following type of error:

    ERROR 1467 (HY000): Failed to read auto-increment value from storage engine
    

    For more info visit:

    http://dev.mysql.com/doc/refman/5.1/en/example-auto-increment.html
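
    A minimal way to reproduce this yourself (a sketch, assuming MySQL/InnoDB; the exact error text varies by version and storage engine, and the table name is made up) is to seed the counter just below the signed BIGINT maximum:

    -- Hypothetical table whose counter starts one below the BIGINT maximum
    CREATE TABLE bigint_exhaust_test (
        id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
        dummy VARCHAR(10)
    ) ENGINE=InnoDB AUTO_INCREMENT = 9223372036854775806;

    INSERT INTO bigint_exhaust_test (dummy) VALUES ('a');  -- gets id 9223372036854775806
    INSERT INTO bigint_exhaust_test (dummy) VALUES ('b');  -- gets id 9223372036854775807 (the max)
    INSERT INTO bigint_exhaust_test (dummy) VALUES ('c');  -- fails: no higher value can be generated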

  • 2020-12-03 02:15

    A signed BIGINT is 2^63, or approximately 10^19. Database benchmarks used to be all the rage a few years ago, using the standardised TPC-C benchmark.

    As you can see, the fastest relational database scored about 30,000,000 (3x10^7) transactions per minute. Keep in mind that this profile includes a lot of reads, and it is very unlikely that the same system could write 30,000,000 rows per minute.

    Assuming it could, though, you would need approximately 3x10^11 minutes to exhaust BIGINT. In time measurement we'd understand, that's something like 600,000 years.
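
    A quick back-of-the-envelope check of that figure (a sketch in PostgreSQL syntax, assuming a sustained 3x10^7 inserts per minute):

    -- years needed to burn through the signed BIGINT range at 3x10^7 ids per minute
    SELECT 9223372036854775807 / (30000000::bigint * 60 * 24 * 365) AS years_to_exhaust;
    -- roughly 585,000 years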

    ERROR 1467 (HY000): Failed to read auto-increment value from storage engine
    

    If you do run out, you'll get the above error message, and you can move across to a GUID primary key. A GUID is 2^128; there are fewer bits of digital storage on earth than that number (by a factor of about a quadrillion).

  • 2020-12-03 02:20

    It won't run out.

    The max bigint is 9223372036854775807. At 1000 inserts/second that's 106751991167 days' worth. Almost 300 million years, if my maths is right.

    Even if you partition it out, using offsets where say 100 servers each have a dedicated sub-range of the values (x*100+0 ... x*100+99), you're not going to run out. 10,000 machines doing 100,000 inserts/second might get you there in about three centuries. Of course, that's more transactions per second than the New York Stock Exchange for hundreds of years solid...
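
    If you do go the offset route, it maps directly onto interleaved sequences (a sketch, assuming PostgreSQL; the sequence name and the 100-shard layout are made up for illustration):

    -- On shard 0: issues 100, 200, 300, ...
    CREATE SEQUENCE orders_id_seq START WITH 100 INCREMENT BY 100;
    -- On shard 1: issues 101, 201, 301, ...
    CREATE SEQUENCE orders_id_seq START WITH 101 INCREMENT BY 100;
    -- ...and so on up to shard 99, so the ranges can never collide.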

    If you do exceed the data type size limit of the generated key, new inserts will fail. In PostgreSQL (since you've tagged this PostgreSQL) with a bigserial you'll see:

    CREATE TABLE bigserialtest ( id bigserial primary key, dummy text );
    SELECT setval('bigserialtest_id_seq', 9223372036854775807);
    INSERT INTO bigserialtest ( dummy ) VALUES ('spam');
    
    ERROR:  nextval: reached maximum value of sequence "bigserialtest_id_seq" (9223372036854775807)
    

    For an ordinary serial you'll get a different error, because the underlying sequence is always 64-bit: you'll reach the point where you have to change the key type to bigint (a sketch of that change follows the error below), or you'll get an error like:

    regress=# SELECT setval('serialtest_id_seq', 2147483647);
    regress=# INSERT INTO serialtest (dummy) VALUES ('ham');
    ERROR:  integer out of range
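
    Widening the key when you hit that point is straightforward, although it rewrites the table and takes a heavy lock while it runs (a sketch, using the serialtest table from above):

    -- Promote the 32-bit key column to 64 bits; the backing sequence is already 64-bit
    ALTER TABLE serialtest ALTER COLUMN id TYPE bigint;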
    

    If you truly believe that it's possible for your site to reach the limit on a bigint in your application, you could use a composite key - say (shard_id, subkey) - or a uuid key.
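
    Either option is straightforward to express (a sketch, assuming PostgreSQL 13+ for the built-in gen_random_uuid(); the table and column names are made up):

    -- Composite key: each shard owns its own subkey space
    CREATE TABLE events (
        shard_id int    NOT NULL,
        subkey   bigint NOT NULL,
        payload  text,
        PRIMARY KEY (shard_id, subkey)
    );

    -- UUID key: effectively collision-free with no coordination between writers
    CREATE TABLE events_uuid (
        id      uuid PRIMARY KEY DEFAULT gen_random_uuid(),
        payload text
    );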

    Trying to deal with this in a new application is premature optimization. Seriously: by the time a new application grows to that scale, will you still be using the same schema? Or database engine? Or even codebase?

    You might as well worry about GUID collisions in GUID-keyed systems. After all, the birthday paradox means that GUID collisions are more likely than you'd think - which is still incredibly, insanely unlikely.

    Furthermore, as Barry Brown points out in the comments, you'll never store that much data. This is only a concern for high-churn tables with insanely high transaction rates. In those tables, the application just needs to be capable of coping with the key being reset to zero, entries being renumbered, or some other coping strategy. Honestly, though, even a high-traffic message queue table isn't going to top out.

    See:

    • this IBM info on serial exhaustion
    • A recent blog post on this topic

    Seriously, even if you build the next Gootwitfacegram, this won't be a problem until way past the use-by date of your third application rewrite...

  • 2020-12-03 02:22

    Large data sets don't use auto-incrementing numbers as keys, not only because of the implicit upper limit, but also because incrementing keys cause problems when you have multiple servers: you run the risk of generating duplicate primary keys.

    When you reach this limit on MySQL, you'll get a cryptic error like this:

    Error: Duplicate entry '0' for key 1
    

    It's better to use a unique id or some other sequence that you generate yourself. MySQL doesn't support sequence objects, but PostgreSQL does.
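
    For example, a single standalone 64-bit sequence can hand out ids to several tables (a sketch in PostgreSQL; the names are made up):

    -- One shared sequence
    CREATE SEQUENCE global_id_seq;

    CREATE TABLE customers (
        id   bigint PRIMARY KEY DEFAULT nextval('global_id_seq'),
        name text
    );

    CREATE TABLE invoices (
        id          bigint PRIMARY KEY DEFAULT nextval('global_id_seq'),
        customer_id bigint REFERENCES customers(id)
    );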
