Export specific rows from a PostgreSQL table as INSERT SQL script

野趣味 2020-11-30 17:01

I have a database schema named nyummy and a table named cimory:

    create table nyummy.cimory (
      id numeric(10,0) not null,
      name character varying(60) not null,
      city character varying(50) not null,
      constraint cimory_pkey primary key (id)
    );

I want to export the cimory table's data as an INSERT SQL script, but only the records/rows where the city equals 'tokyo' (assume the city data is all lowercase). How can I do that?

9 Answers
  • 2020-11-30 17:04

    I wrote a procedure that does this, based on @PhilHibbs's code, in a different way. Please have a look and test it.

     CREATE OR REPLACE FUNCTION dump(IN p_schema text, IN p_table text, IN p_where text)
       RETURNS setof text AS
     $BODY$
     DECLARE
         dumpquery_0 text;
         dumpquery_1 text;
         selquery text;
         selvalue text;
         valrec record;
         colrec record;
     BEGIN
    
         -- ------ --
         -- GLOBAL --
         --   build base INSERT
         --   build SELECT array[ ... ]
         dumpquery_0 := 'INSERT INTO ' ||  quote_ident(p_schema) || '.' || quote_ident(p_table) || '(';
         selquery    := 'SELECT array[';
    
         <<label0>>
         FOR colrec IN SELECT table_schema, table_name, column_name, data_type
                       FROM information_schema.columns
                       WHERE table_name = p_table and table_schema = p_schema
                       ORDER BY ordinal_position
         LOOP
             dumpquery_0 := dumpquery_0 || quote_ident(colrec.column_name) || ',';
             selquery    := selquery    || 'CAST(' || quote_ident(colrec.column_name) || ' AS TEXT),';
         END LOOP label0;
    
         dumpquery_0 := substring(dumpquery_0 ,1,length(dumpquery_0)-1) || ')';
         dumpquery_0 := dumpquery_0 || ' VALUES (';
         selquery    := substring(selquery    ,1,length(selquery)-1)    || '] AS MYARRAY';
         selquery    := selquery    || ' FROM ' ||quote_ident(p_schema)||'.'||quote_ident(p_table);
         selquery    := selquery    || ' WHERE '||p_where;
         -- GLOBAL --
         -- ------ --
    
         -- ----------- --
         -- SELECT LOOP --
         --   execute SELECT built and loop on each row
         <<label1>>
         FOR valrec IN  EXECUTE  selquery
         LOOP
             dumpquery_1 := '';
             IF not found THEN
                 EXIT ;
             END IF;
    
             -- ----------- --
             -- LOOP ARRAY (EACH FIELD) --
             <<label2>>
             FOREACH selvalue in ARRAY valrec.MYARRAY
             LOOP
                 IF selvalue IS NULL
                 THEN selvalue := 'NULL';
                 ELSE selvalue := quote_literal(selvalue);
                 END IF;
                 dumpquery_1 := dumpquery_1 || selvalue || ',';
             END LOOP label2;
             dumpquery_1 := substring(dumpquery_1 ,1,length(dumpquery_1)-1) || ');';
             -- LOOP ARRAY (EACH FIELD) --
             -- ----------- --
    
             -- debug: RETURN NEXT dumpquery_0 || dumpquery_1 || ' --' || selquery;
             -- debug: RETURN NEXT selquery;
             RETURN NEXT dumpquery_0 || dumpquery_1;
    
         END LOOP label1 ;
         -- SELECT LOOP --
         -- ----------- --
    
     RETURN ;
     END
     $BODY$
       LANGUAGE plpgsql VOLATILE;
    

    And then :

    -- for a range
    SELECT dump('public', 'my_table','my_id between 123456 and 123459'); 
    -- for the entire table
    SELECT dump('public', 'my_table','true');
    

    Tested on my PostgreSQL 9.1, with a table of mixed field datatypes (text, double, int, timestamp without time zone, etc.).

    That's why the cast to TEXT is needed. My test ran correctly for about 9M rows; it looks like it fails just before 18 minutes of running.

    P.S.: I found an equivalent for MySQL on the web.
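
    For reference, the per-row logic of the function above (cast every value to text, emit NULL unquoted, quote everything else) can be mirrored outside the database. Below is a minimal Python sketch; `quote_literal` and `row_to_insert` are names of my own, not PostgreSQL APIs:

    ```python
    def quote_literal(value):
        """Mimic PostgreSQL's quote_literal(): single-quote the value
        and double any embedded single quotes."""
        return "'" + str(value).replace("'", "''") + "'"

    def row_to_insert(schema, table, columns, row):
        """Build one INSERT statement; like the plpgsql function, every
        non-NULL value is cast to text and quoted."""
        values = ", ".join("NULL" if v is None else quote_literal(v) for v in row)
        return f"INSERT INTO {schema}.{table}({', '.join(columns)}) VALUES ({values});"

    print(row_to_insert("nyummy", "cimory", ["id", "name", "city"], (1, "O'Brien", None)))
    # INSERT INTO nyummy.cimory(id, name, city) VALUES ('1', 'O''Brien', NULL);
    ```

    Note that, exactly as in the plpgsql version, numeric values come out quoted as text literals, which PostgreSQL will coerce back on insert.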

  • 2020-11-30 17:06

    You can create a view of the table containing the specific records and then dump that to an SQL file:

    CREATE VIEW foo AS
    SELECT id, name, city FROM nyummy.cimory WHERE city = 'tokyo';
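
    One caveat: `pg_dump -t` on a view exports only its definition, not its data, so a common workaround is to materialize the filtered rows first. A hedged sketch (the table name `export_cimory` and database name `mydb` are illustrative):

    ```sql
    -- Materialize the filtered rows, dump that table, then clean up.
    CREATE TABLE export_cimory AS
    SELECT id, name, city FROM nyummy.cimory WHERE city = 'tokyo';
    -- shell: pg_dump --data-only --column-inserts -t export_cimory mydb > tokyo.sql
    DROP TABLE export_cimory;
    ```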
    
  • 2020-11-30 17:07

    SQL Workbench has such a feature.

    After running a query, right click on the query results and choose "Copy Data As SQL > SQL Insert"

  • 2020-11-30 17:10

    This is an easy and fast way to export a table to a script with pgAdmin manually, without extra installations:

    1. Right-click the target table and select "Backup".
    2. Select a file path to store the backup. As Format choose "Plain".
    3. Open the "Dump Options #2" tab at the bottom and check "Use Column Inserts".
    4. Click the Backup button.
    5. If you open the resulting file with a text editor (e.g. Notepad++), you get a script to recreate the whole table. From there you can simply copy the generated INSERT statements.

    This method also works with the technique of making an export_table as demonstrated in @Clodoaldo Neto's answer.
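
    pgAdmin's "Plain" backup drives pg_dump under the hood; a roughly equivalent command-line invocation would be the following (the database name `mydb` is illustrative):

    ```shell
    # Plain-format dump of one table, emitting one INSERT per row.
    pg_dump --format=plain --column-inserts --table=nyummy.cimory mydb > cimory.sql
    ```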

  • 2020-11-30 17:14

    For my use case I was able to simply pipe to grep. (Note: this keeps every line containing the pattern, so it can over-select if 'tokyo' appears in any other column.)

    pg_dump -U user_name --data-only --column-inserts -t nyummy.cimory | grep "tokyo" > tokyo.sql
    
  • 2020-11-30 17:15

    For a data-only export, use COPY.
    You get a file with one table row per line as plain text (not INSERT commands); it's smaller and faster:

    COPY (SELECT * FROM nyummy.cimory WHERE city = 'tokio') TO '/path/to/file.csv';
    

    Import the same to another table of the same structure anywhere with:

    COPY other_tbl FROM '/path/to/file.csv';
    

    COPY writes and reads files local to the server, unlike client programs like pg_dump or psql, which read and write files local to the client. If both run on the same machine it doesn't matter much, but it does for remote connections.

    There is also the \copy command of psql that:

    Performs a frontend (client) copy. This is an operation that runs an SQL COPY command, but instead of the server reading or writing the specified file, psql reads or writes the file and routes the data between the server and the local file system. This means that file accessibility and privileges are those of the local user, not the server, and no SQL superuser privileges are required.
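
    So the same export can be run from psql with the file written on the client side, for example (the local path is illustrative):

    ```sql
    \copy (SELECT * FROM nyummy.cimory WHERE city = 'tokio') TO 'local_file.csv'
    ```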
