Question
I am working with a PostgreSQL 8.4.13 database.
Recently I had around 86.5 million records in a table. I deleted almost all of them; only 5000 records are left now. I ran:
vacuum full
after deleting the rows, and that returned the disk space to the OS (thanks to a suggestion from a fellow SO member).
But I see that my id numbers are still stuck in the millions. For example:
    id    |        date_time        | event_id | secs_since_1970 |  value
----------+-------------------------+----------+-----------------+---------
 61216287 | 2013/03/18 16:42:42:041 |        6 |   1363646562.04 | 46.4082
 61216289 | 2013/03/18 16:42:43:041 |        6 |   1363646563.04 | 55.4496
 61216290 | 2013/03/18 16:42:44:041 |        6 |   1363646564.04 | 40.0553
 61216291 | 2013/03/18 16:42:45:041 |        6 |   1363646565.04 | 38.5694
In an attempt to start the id value at 1 again for the remaining rows, I tried:
cluster mytable_pkey on mytable;
where mytable is the name of my table. But that did not help.
So my questions are:
- Is there a way to get the index (id value) to start at 1 again?
- If I add or update the table with a new record, will it start from 1 or pick up the next highest integer value (say 61216292 in the above example)?
My table description is as follows. There is no FK constraint and no sequence on it.
jbossql=> \d mytable;
            Table "public.mytable"
     Column      |          Type          | Modifiers
-----------------+------------------------+-----------
 id              | bigint                 | not null
 date_time       | character varying(255) |
 event_id        | bigint                 |
 secs_since_1970 | double precision       |
 value           | real                   |
Indexes:
    "mydata_pkey" PRIMARY KEY, btree (id) CLUSTER
Answer 1:
Drop the primary key first and create a temporary sequence.
alter table mytable drop constraint mydata_pkey;
create temporary sequence temp_seq;
Use the sequence to update the ids:
update mytable
set id = nextval('temp_seq');
Recreate the primary key and drop the sequence:
alter table mytable add primary key (id);
drop sequence temp_seq;
If there is a foreign key dependency on this table, you will have to deal with it first, and the update will be a more complex procedure.
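Two caveats worth adding here (my notes, not part of the original answer): the UPDATE above hands out sequence values in whatever order the rows happen to be scanned, so the new ids are not guaranteed to follow the old id order (Answer 3 below shows an order-preserving variant), and because temp_seq is dropped, future inserts will still not be numbered automatically. A sketch of one way to make new rows continue from the next free id, using an assumed permanent sequence named mytable_id_seq:
-- assumed sequence name; not part of the original answer
create sequence mytable_id_seq;
-- start the sequence at the current maximum, so the first new row gets max(id) + 1
select setval('mytable_id_seq', (select max(id) from mytable));
-- let inserts pick up the next value automatically
alter table mytable alter column id set default nextval('mytable_id_seq');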
Answer 2:
Is your primary key defined using a serial? If so, that creates an implicit sequence. You can use ALTER SEQUENCE (see http://www.postgresql.org/docs/8.2/static/sql-altersequence.html for syntax) to change the starting number back to 1.
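For example, if id had been declared as serial, the implicit sequence would typically be named mytable_id_seq (a hypothetical name here, since the question's table has no sequence at all), and it could be reset like this:
-- hypothetical sequence name; this sequence only exists if id was declared serial
ALTER SEQUENCE mytable_id_seq RESTART WITH 1;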
Since you still have some records left (the 5000 you mentioned), you do NOT want to reset that number to anything at or below the highest remaining id, because the sequence would then generate non-unique numbers. The point of using a sequence is that it gives you a transactional way to increment a number and guarantees that successive operations get unique, incremented numbers.
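If the goal is just to make sure the sequence resumes safely after the bulk delete, a pattern that avoids guessing a hard-coded value (again using the hypothetical mytable_id_seq) is to set it from the current maximum:
-- sets the sequence so the next nextval() call returns max(id) + 1
SELECT setval('mytable_id_seq', (SELECT max(id) FROM mytable));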
Answer 3:
DROP SCHEMA tmp CASCADE;
CREATE SCHEMA tmp ;
SET search_path=tmp;
--
-- Note: the PK is declared DEFERRABLE INITIALLY DEFERRED so that the
-- UPDATE below can reassign ids without transient duplicate-key errors
--
CREATE TABLE funky
( id SERIAL NOT NULL PRIMARY KEY DEFERRABLE INITIALLY DEFERRED
, tekst varchar
);
-- create some data with gaps in it
INSERT INTO funky(id, tekst)
SELECT gs, 'Number:' || gs::text
FROM generate_series(1,100,10) gs
;
-- set the sequence to the max occurring id
SELECT setval('funky_id_seq' , mx.mx)
FROM (SELECT max(id) AS mx FROM funky) mx
;
SELECT * FROM funky ;
-- compress the keyspace, making the ids consecutive
UPDATE funky xy
SET id = self.newid
FROM (
SELECT id AS id
, row_number() OVER (ORDER BY id) AS newid
FROM funky
) self
WHERE self.id = xy.id
;
-- set the sequence to the new max occurring id
SELECT setval('funky_id_seq' , mx.mx)
FROM (SELECT max(id) AS mx FROM funky) mx
;
SELECT * FROM funky ;
Result:
CREATE TABLE
INSERT 0 10
 setval
--------
     91
(1 row)

 id |   tekst
----+-----------
  1 | Number:1
 11 | Number:11
 21 | Number:21
 31 | Number:31
 41 | Number:41
 51 | Number:51
 61 | Number:61
 71 | Number:71
 81 | Number:81
 91 | Number:91
(10 rows)
UPDATE 10
 setval
--------
     10
(1 row)

 id |   tekst
----+-----------
  1 | Number:1
  2 | Number:11
  3 | Number:21
  4 | Number:31
  5 | Number:41
  6 | Number:51
  7 | Number:61
  8 | Number:71
  9 | Number:81
 10 | Number:91
(10 rows)
WARNING WARNING WARNING WARNING WARNING WARNING ACHTUNG:
Changing key values is generally a terrible idea. Avoid it at all costs.
Source: https://stackoverflow.com/questions/15526813/restart-primary-key-numbers-of-existing-rows-after-deleting-most-of-a-big-table