pg_dump

pg_dump serial datatype issues

Submitted by 孤人 on 2019-12-10 09:33:36
Question: Could someone explain to me why a PostgreSQL table created with the following script: CREATE TABLE users ( "id" serial NOT NULL, "name" character varying(150) NOT NULL, "surname" character varying(250) NOT NULL, "dept_id" integer NOT NULL, CONSTRAINT users_pkey PRIMARY KEY ("id") ) gets dumped by pg_dump in the following format: CREATE TABLE users ( "id" integer NOT NULL, "name" character varying(150) NOT NULL, "surname" character varying(250) NOT NULL, "dept_id" integer NOT NULL ); ALTER
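This is expected: serial is not a real column type but shorthand for an integer column backed by a sequence, so pg_dump decomposes it into its underlying pieces. Roughly, the dump contains statements like the following sketch (the sequence name follows the usual table_column_seq convention; exact options and schema qualification vary by version):

```sql
CREATE TABLE users (
    "id" integer NOT NULL,
    "name" character varying(150) NOT NULL,
    "surname" character varying(250) NOT NULL,
    "dept_id" integer NOT NULL
);

CREATE SEQUENCE users_id_seq
    START WITH 1 INCREMENT BY 1
    NO MINVALUE NO MAXVALUE CACHE 1;

-- Tie the sequence's lifetime to the column it feeds
ALTER SEQUENCE users_id_seq OWNED BY users.id;

-- This DEFAULT is what made the column a "serial" in the first place
ALTER TABLE ONLY users
    ALTER COLUMN id SET DEFAULT nextval('users_id_seq'::regclass);

ALTER TABLE ONLY users
    ADD CONSTRAINT users_pkey PRIMARY KEY (id);
```

Restoring these statements yields a column that behaves exactly like the original serial definition.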

pg_dump without comments on objects?

Submitted by 限于喜欢 on 2019-12-10 07:06:28
Question: Is there a way to perform a pg_dump and exclude the COMMENT ON statements for tables/views and columns? I use the COMMENT ON command extensively to describe all objects, and often include newlines in them for clearer descriptions, e.g.: COMMENT ON TABLE mytable1 IS 'MAIN TABLE... NOTES: 1. ... 2. ... 3. ... '; However, since there are newlines in the dump as well, I cannot simply remove the comments with a grep -v 'COMMENT ON' command. Is there any other way to quickly remove these COMMENT ON statements from the dump?
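On PostgreSQL 11 and later, pg_dump accepts --no-comments, which omits COMMENT commands from the dump entirely. For older versions, a statement-aware filter works where a line-based grep -v cannot. A minimal sketch, assuming the comment bodies are standard single-quoted literals (with '' as the escaped quote; dollar-quoted comments would need a different pattern):

```python
import re

# Matches one whole COMMENT ON statement. re.S lets the quoted literal
# span newlines; (?:[^']|'')* consumes the literal body, treating '' as
# an escaped quote rather than the end of the string.
COMMENT_RE = re.compile(r"^COMMENT ON .*? IS '(?:[^']|'')*';\s*", re.S | re.M)

def strip_comments(dump_sql):
    """Remove COMMENT ON statements (including multi-line ones) from a dump."""
    return COMMENT_RE.sub('', dump_sql)
```

Running a plain-format dump through this function leaves all other statements untouched, even when a comment body contains several newlines.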

Too many command-line arguments when calling pg_dump from Java

Submitted by 亡梦爱人 on 2019-12-07 21:18:31
Question: After running into an issue executing some queries as strings in Java for Postgres, I switched to string arrays, which solved my existing issues. After the switch I am now having an issue with pg_dump, but not with pg_restore. When I supply my method with the following array: [time, ./pg_dump, -U, lehigh, -d, lehigh, -Fc, data/completedDb.dump] I get the following error: pg_dump: too many command-line arguments (first is "data/completedDb.dump"). ProcessBuilder produces the following for my execution: time ./pg_dump -U lehigh -d lehigh -Fc data/completedDb.dump And it works fine when I add
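The likely cause: pg_dump's only positional argument is the database name, so with -d already given, a trailing path is parsed as a second database name, which is exactly the "too many command-line arguments" error. pg_restore is unaffected because its positional argument *is* the input file. The output path must be introduced with -f (or --file). A sketch of the corrected argument list (shown in Python, but the same ordering applies to a Java ProcessBuilder array):

```python
def build_dump_command(user, dbname, out_path):
    # pg_dump's only positional argument is the database name; an output
    # file must be given with -f, otherwise a trailing path is read as a
    # second database name -> "too many command-line arguments".
    return ["pg_dump", "-U", user, "-d", dbname, "-Fc", "-f", out_path]
```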

PostgreSQL DB backup ideal practices

Submitted by 烈酒焚心 on 2019-12-07 21:07:27
Question:
• What are ideal practices for taking a PostgreSQL logical backup using pg_dump?
• Is it ideal to take the backup from a standby/slave node if the replication lag is less than 200 ms?
• Is it ideal to take the backup from a standby/slave node, and is there any specific configuration we need to change?
• Which method is better for a frequently updated DB, a logical backup or a physical backup? Since backups are taken for disaster recovery, which method is the faster and better backup
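For the logical-versus-physical part of the question, the two approaches map onto two different tools. A rough sketch of the trade-off (database and directory names are placeholders):

```python
# Logical backup: a portable archive of SQL-level objects, restorable
# across major versions and selectively by table/schema, but replayed
# row by row on restore (slower for large, busy databases).
logical = ["pg_dump", "-Fc", "-d", "mydb", "-f", "mydb.dump"]

# Physical backup: a byte-level copy of the whole cluster taken over
# the replication protocol (-Xs streams the needed WAL); much faster
# to restore, but tied to the same major version and architecture.
physical = ["pg_basebackup", "-D", "/backups/base", "-Fp", "-Xs", "-P"]
```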

pg_dump & pg_restore password using the Python subprocess module

Submitted by 你离开我真会死。 on 2019-12-07 20:30:10
Question: Use the PostgreSQL pg_dump and pg_restore in a Python script using the subprocess module. Background: I am using the following Python 2.7 script on the localhost (Ubuntu 14.04.5 LTS) to create a backup of a table in a PostgreSQL server (PostgreSQL 9.4.11) and restore it into the remote host (Ubuntu 16.04.2 LTS) running a newer PostgreSQL server (PostgreSQL 9.6.2). #!/usr/bin/python from subprocess import PIPE,Popen def dump_table(host_name,database_name,user_name,database_password,table_name): command = 'pg_dump -h {0} -d {1} -U {2} -p 5432 -t public.{3} -Fc -f /tmp
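libpq reads the password from the PGPASSWORD environment variable (or a ~/.pgpass file), so a script can hand it to the child process through its environment rather than trying to answer the interactive prompt. A sketch along the lines of the snippet above, with host, database, and table names as placeholders:

```python
import os
from subprocess import PIPE, Popen

def dump_table(host, dbname, user, password, table, out_path):
    """Build the pg_dump argument list and an environment that carries
    the password; libpq picks up PGPASSWORD automatically."""
    cmd = ["pg_dump", "-h", host, "-d", dbname, "-U", user,
           "-p", "5432", "-t", "public.%s" % table, "-Fc", "-f", out_path]
    env = dict(os.environ, PGPASSWORD=password)
    return cmd, env

# Usage (only runs when a server is reachable):
# cmd, env = dump_table("localhost", "mydb", "me", "s3cr3t", "users", "/tmp/users.dump")
# proc = Popen(cmd, env=env, stdout=PIPE, stderr=PIPE)
# out, err = proc.communicate()
```

Passing the command as a list (not a shell string) also keeps the password off the command line, where other users could see it in the process table.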

Doing pg_dump while there are still many active transactions

Submitted by 一个人想着一个人 on 2019-12-07 04:56:37
Question: As the subject says, what will happen to the backup file while there are still many active transactions in the database? Does it export a realtime snapshot or just a partial backup? Thanks in advance. Answer 1: pg_dump runs in a serializable transaction, so it sees a consistent snapshot of the database, including the system catalogs. However, it is possible to get a 'cache lookup failed' error if someone performs DDL changes while a dump is starting. The time window for this sort of thing isn't very large, but it can happen

Will pg_restore overwrite the existing tables?

Submitted by ↘锁芯ラ on 2019-12-06 19:01:11
Question: Say I have two host servers, s1 and s2. On both servers I have a schema named n1, and I have made some changes to some of the tables in schema n1 of s1. I want the same changes applied to schema n1 of server s2. What I am planning to do is take a backup of schema n1 of server s1 using pg_dump and restore it on server s2 using pg_restore. My question is: since the same schema n1, with the same set of tables, already exists on server s2, what will the restore process
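By default pg_restore does not overwrite anything: it replays plain CREATE statements, so objects that already exist on s2 make those statements fail (reported as errors) while the existing tables are left as they are. To actually replace them, the dump has to be restored with --clean (and, on 9.4+, --if-exists to silence drops of missing objects). A sketch with placeholder database and file names:

```python
# Default: plain CREATE statements; anything that already exists in
# schema n1 on the target errors out and is left untouched.
default_restore = ["pg_restore", "-d", "targetdb", "-n", "n1", "schema_n1.dump"]

# With --clean --if-exists, each object is dropped before being
# recreated, so the target schema ends up matching the dump.
overwrite_restore = ["pg_restore", "-d", "targetdb", "-n", "n1",
                     "--clean", "--if-exists", "schema_n1.dump"]
```

Note that --clean drops the tables' existing data along with their definitions, so it is not a way to merge changes into live rows.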

Can I selectively create a backup of a Postgres database, with only certain tables?

Submitted by 眉间皱痕 on 2019-12-06 10:18:40
Question: Can I programmatically (or whichever way works fine) create a backup of a database with only the tables I want? I have around 100 tables in my database and I want a backup of only 10 tables (of course they are all interdependent). How can I achieve this? And by the way, I have a PostgreSQL database. Answer 1: Of course. pg_dump lets you pass a list of tables with the parameter -t. To clear some doubts: true, the -t parameter accepts only one pattern. But it's a pattern very similar to a regular expression, so if you
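In current pg_dump the -t switch may also simply be repeated, one per table, and each occurrence is a shell-style pattern; the dump contains the union of all matches. One caveat for interdependent tables: -t does not follow foreign keys, so every referenced table has to be listed explicitly. A sketch with hypothetical table names:

```python
tables = ["users", "orders", "payments"]  # hypothetical table names

# -t may be repeated; pg_dump dumps the union of all matches.
cmd = ["pg_dump", "-d", "mydb", "-Fc", "-f", "subset.dump"]
for t in tables:
    cmd += ["-t", t]

# Patterns are also accepted, e.g. every table whose name starts
# with "audit_":
pattern_cmd = ["pg_dump", "-d", "mydb", "-t", "audit_*"]
```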
