I have a Postgres database that uses sequences extensively to generate primary keys of tables. After a lot of usage of this database (many insert/update/delete operations), the sequences have accumulated large gaps, and I'm worried about eventually running out of key values. Is there a way to reclaim the gaps or compact the keys?
It is possible to solve this problem, but it is expensive for the database to do (especially the I/O), and the gaps are guaranteed to reoccur. I would not worry about this problem. If you get close to the `INT` limit of about 2.1 billion, upgrade your primary and foreign keys to `BIGSERIAL` and `BIGINT`. If you're getting close to 2^63... well... I'd be interested in hearing more about your application. :~]
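For what it's worth, the upgrade itself is one statement per column, though it rewrites the table under a heavy lock (that's the expensive I/O mentioned above). A rough sketch, assuming a hypothetical `items` table referenced by `orders.item_id`:

```sql
-- Widen the PK column; this rewrites the table and takes an exclusive lock.
ALTER TABLE items ALTER COLUMN id TYPE bigint;

-- Every referencing FK column must be widened the same way (hypothetical example).
ALTER TABLE orders ALTER COLUMN item_id TYPE bigint;

-- On newer Postgres versions the backing sequence may be typed too:
ALTER SEQUENCE items_id_seq AS bigint;
```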
Postgres allows you to update PKs, although a lot of people consider it bad practice. So you could lock the table and `UPDATE`. (You can build an `oldkey, newkey` mapping table all sorts of ways, e.g., with a window function.) All the FK relationships have to be marked `ON UPDATE CASCADE`. Then you can reset the sequence with `setval()`, as sketched below.
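A minimal sketch of that renumbering, assuming a hypothetical `items` table keyed by `id` from `items_id_seq`, with the FKs declared `ON UPDATE CASCADE` and the PK constraint created `DEFERRABLE` (otherwise transient duplicate keys can abort the `UPDATE`):

```sql
BEGIN;
LOCK TABLE items IN EXCLUSIVE MODE;
SET CONSTRAINTS ALL DEFERRED;  -- only effective if the PK was created DEFERRABLE

-- Old-key -> new-key mapping via a window function: ranking compacts the keys.
CREATE TEMP TABLE key_map AS
SELECT id AS oldkey,
       row_number() OVER (ORDER BY id) AS newkey
FROM items;

-- Rewrite the PKs; ON UPDATE CASCADE propagates the change to child tables.
UPDATE items
SET    id = key_map.newkey
FROM   key_map
WHERE  items.id = key_map.oldkey;

-- Point the sequence at the new maximum, so nextval() resumes from max+1.
SELECT setval('items_id_seq', (SELECT max(id) FROM items));

COMMIT;
```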
Personally, I would just use a `BIGSERIAL`. If you have so many updates and deletes that you may run out even so, maybe there is some composite PK based on (say) a timestamp and an id that would help you.
If you feel the need to fill gaps in auto-generated PostgreSQL sequence numbers, you probably need another field in your table: some kind of "number" that you increment programmatically, either in your application code or in a trigger. A sketch of the trigger variant is below.
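Here is one way that trigger could look, assuming a hypothetical `invoices` table with a `number` column and a single-row `invoice_counter` table. The row lock taken by the `UPDATE` serializes concurrent inserts, which is what keeps the numbering gapless (at the cost of insert concurrency):

```sql
-- Single-row counter table; the UPDATE's row lock serializes inserts.
CREATE TABLE invoice_counter (last_number bigint NOT NULL);
INSERT INTO invoice_counter VALUES (0);

CREATE FUNCTION assign_invoice_number() RETURNS trigger AS $$
BEGIN
  -- If the inserting transaction rolls back, this increment rolls back too,
  -- so no number is ever skipped.
  UPDATE invoice_counter
  SET    last_number = last_number + 1
  RETURNING last_number INTO NEW.number;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- PostgreSQL 11+ syntax; on older versions use EXECUTE PROCEDURE.
CREATE TRIGGER set_invoice_number
  BEFORE INSERT ON invoices
  FOR EACH ROW EXECUTE FUNCTION assign_invoice_number();
```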