Primary key id reaching limit of bigint data type

Submitted by 女生的网名这么多〃 on 2019-12-18 13:27:43

Question


I have a table that is exposed to large inserts and deletes on a regular basis (and because of this there are large gaps in the number sequence of the primary id column). It had a primary id column of type 'int' that was changed to 'bigint'. Despite this change, the limit of this datatype will also inevitably be exceeded at some point in the future (current usage would indicate this to be the case within the next year or so).

How do you handle this scenario? I'm wondering (shock horror) whether I even need the primary key column, as it's not used in any obvious way in any queries or referenced by other tables etc. Would removing the column be a solution? Or would that sort of action see you expelled from the MySQL community in disgust?!

We're already nearly at the 500 million mark for the auto increment id. The table holds keywords associated with file data in a separate table. Each file data row could have as many as 30 keywords associated with it in the keywords table, so they really start to stack up after you've got tens of thousands of files constantly being inserted and deleted. Currently the keyword table contains the keyword and the id of the file it's associated with, so if I got rid of the current primary id column, there would be no unique identifier other than the keyword (varchar) and file id (int) fields combined, which would be a terrible primary key.

All thoughts/answers/experiences/solutions very gratefully received.


Answer 1:


If you don't need that column because you already have another unique identifier for each record, such as a supplied measurement_id, or even a combined key like measurement_id + location_id, then there is no need for an auto-increment key at all. If there is any chance you won't have a unique key, then definitely create one.
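As a rough illustration of what a combined key could look like for the keyword table described in the question (a minimal sketch; the table and column names here are assumptions, not taken from the original schema):

CREATE TABLE keywords (
    file_id INT UNSIGNED NOT NULL,
    keyword VARCHAR(255)  NOT NULL,
    -- With (file_id, keyword) as the primary key there is no auto-increment
    -- counter to exhaust, and duplicate keyword rows per file are rejected.
    PRIMARY KEY (file_id, keyword),
    KEY idx_keyword (keyword)          -- for lookups by keyword alone
) ENGINE=InnoDB;

Whether such a composite key is "terrible" depends mostly on the secondary indexes you need: InnoDB appends the primary key columns to every secondary index, so a wide VARCHAR in the key does carry a storage cost.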

What if I need a very very big autoincrement ID?

Are you really sure you have so many inserts and deletes you will get to the limit?
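One way to answer that question is to check how far the counter has already advanced. This query is plain MySQL; the schema and table names below are placeholders:

-- Shows the next AUTO_INCREMENT value MySQL will hand out for the table,
-- which indicates how much of the BIGINT range has been consumed so far.
SELECT AUTO_INCREMENT
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb'
  AND TABLE_NAME   = 'keywords';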




Answer 2:


I know this was already answered a year ago, but just to continue on Luc Franken's answer:

If you inserted 500 million rows per second, it would still take roughly 585 years to reach the limit of a signed BIGINT (and about 1,170 years for an unsigned one). So I really don't think you need to worry about that.




Answer 3:


These figures assume the signed BIGINT maximum of 9,223,372,036,854,775,807:

If we inserted one hundred thousand (100,000) records per second into the table, it would take about 2,924,712 years.

If we inserted one million (1,000,000) records per second into the table, it would take about 292,471 years.

If we inserted ten million (10,000,000) records per second into the table, it would take about 29,247 years.

If we inserted 100 million (100,000,000) records per second into the table, it would take about 2,925 years.

If we inserted 1,000 million (1,000,000,000) records per second into the table, it would take about 292 years.

So don't worry about it.
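For anyone who wants to verify the arithmetic, here is a small sketch (plain MySQL, no real tables involved) that reproduces the figures above:

-- Years until a signed BIGINT counter is exhausted at various insert rates,
-- using 365-day years: 9223372036854775807 / rate / seconds_per_year.
SELECT rate_per_second,
       9223372036854775807 / rate_per_second / (60 * 60 * 24 * 365) AS years_to_exhaust
FROM (
    SELECT 100000 AS rate_per_second
    UNION ALL SELECT 1000000
    UNION ALL SELECT 10000000
    UNION ALL SELECT 100000000
    UNION ALL SELECT 1000000000
) AS rates;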




Answer 4:


Maybe it's a bit too late, but you can add a trigger on DELETE.

Here is sample code in SQL Server:

CREATE TRIGGER resetidentity
    ON dbo.[table_name]
    FOR DELETE
AS
BEGIN
    -- Reseed the identity counter to the highest remaining ID,
    -- so the next insert continues from MAX(ID) + 1.
    DECLARE @MaxID BIGINT;
    SELECT @MaxID = ISNULL(MAX(ID), 0)
    FROM dbo.[table_name];
    DBCC CHECKIDENT('table_name', RESEED, @MaxID);
END
GO

In a nutshell, this will reset your ID (assuming it is an auto-increment primary key). For example: if you have 800 rows and delete the last 400 of them, the next time you insert, the counter will start at 401 instead of 801.

The downside is that it will not close gaps left by deletes in the middle of the table. For example, if you have 800 rows and delete IDs 200-400, the counter will still continue from 801 the next time you insert new rows.
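Since the question is actually about MySQL rather than SQL Server, a rough equivalent (the table name `keywords` is assumed) is to reset the AUTO_INCREMENT counter after large deletes. Note that MySQL does not allow ALTER TABLE inside a trigger, so this has to be run as a separate maintenance statement:

-- Setting AUTO_INCREMENT below the current maximum is clamped to MAX(id) + 1
-- for both InnoDB and MyISAM, so this effectively reclaims the range freed
-- by deleting the highest rows.
ALTER TABLE keywords AUTO_INCREMENT = 1;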



Source: https://stackoverflow.com/questions/11591228/primary-key-id-reaching-limit-of-bigint-data-type
