Import MySQL dump to PostgreSQL database

Backend · Open · 17 answers · 785 views
北海茫月 2020-12-04 10:05

How can I import an "xxxx.sql" dump from MySQL to a PostgreSQL database?

17 Answers
  • 2020-12-04 10:14

    I have this bash script to migrate the data. It doesn't create the tables, because those are created by migration scripts, so I only need to convert the data. I use a list of tables to avoid importing data from the migrations and sessions tables. Here it is, just tested:

    #!/bin/sh
    
    MUSER="root"
    MPASS="mysqlpassword"
    MDB="origdb"
    MTABLES="car dog cat"
    PUSER="postgres"
    PDB="destdb"
    
    mysqldump -h 127.0.0.1 -P 6033 -u $MUSER -p$MPASS --default-character-set=utf8 --compatible=postgresql --skip-disable-keys --skip-set-charset --no-create-info --complete-insert --skip-comments --skip-lock-tables $MDB $MTABLES > outputfile.sql
    
    sed -i 's/UNLOCK TABLES;//g' outputfile.sql
    sed -i 's/WRITE;/RESTART IDENTITY CASCADE;/g' outputfile.sql
    sed -i 's/LOCK TABLES/TRUNCATE/g' outputfile.sql
    sed -i "s/'0000\-00\-00 00\:00\:00'/NULL/g" outputfile.sql
    sed -i "1i SET standard_conforming_strings = 'off';\n" outputfile.sql
    sed -i "1i SET backslash_quote = 'on';\n" outputfile.sql
    sed -i "1i update pg_cast set castcontext='a' where casttarget = 'boolean'::regtype;\n" outputfile.sql
    echo "\nupdate pg_cast set castcontext='e' where casttarget = 'boolean'::regtype;\n" >> outputfile.sql
    
    psql -h localhost -d $PDB -U $PUSER -f outputfile.sql
    

    You will get a lot of warnings that you can safely ignore, like this:

    psql:outputfile.sql:82: WARNING:  nonstandard use of escape in a string literal
    LINE 1: ...,(1714,38,2,0,18,131,0.00,0.00,0.00,0.00,NULL,'{\"prospe...
                                                             ^
    HINT:  Use the escape string syntax for escapes, e.g., E'\r\n'.
    
  • 2020-12-04 10:14

    With pgloader

    Get a recent version of pgloader; the one provided by Debian Jessie (as of 2019-01-27) is 3.1.0 and won't work, since that version errors with

    Can not find file mysql://...
    Can not find file postgres://...
    

    Access to MySQL source

    First, make sure you can establish a connection to mysqld on the server running MySQL using

    telnet theserverwithmysql 3306
    

    If that fails with

    Name or service not known

    log in to theserverwithmysql and edit the configuration file of mysqld. If you don't know where the config file is, use find / -name mysqld.cnf.

    In my case I had to change this line of mysqld.cnf

    # By default we only accept connections from localhost
    bind-address    = 127.0.0.1
    

    to

    bind-address    = *
    

    Mind that allowing access to your MySQL database from all addresses can pose a security risk, meaning you probably want to change that value back after the database migration.

    Make the changes to mysqld.cnf effective by restarting mysqld.

    Preparing the Postgres target

    Assuming you are logged in on the system that runs Postgres, create the database with

    createdb databasename
    

    The user for the Postgres database has to have sufficient privileges to create the schema, otherwise you'll run into

    permission denied for database databasename

    when calling pgloader. I got this error even though the user had the right to create databases according to \du in psql.

    You can make sure of that in psql:

    GRANT ALL PRIVILEGES ON DATABASE databasename TO otherusername;
    

    Again, this might be privilege overkill and thus a security risk if you leave all those privileges with user otherusername.

    Migrate

    Finally, the command

    pgloader mysql://theusername:thepassword@theserverwithmysql/databasename postgresql://otherusername@localhost/databasename
    

    executed on the machine running Postgres should produce output that ends with a line like this:

    Total import time          ✓     877567   158.1 MB       1m11.230s
    
  • 2020-12-04 10:18

    If you are using phpmyadmin you can export your data as CSV and then it will be easier to import in postgres.
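    If you go the CSV route, note that MySQL exports often contain zero dates ('0000-00-00 00:00:00') and \N NULL markers that PostgreSQL's COPY rejects. A minimal pre-processing sketch in Python (the sample data and column layout are hypothetical):

```python
import csv
import io

def clean_mysql_csv(src, dst):
    # Rewrite a MySQL-exported CSV so PostgreSQL's COPY accepts it:
    # zero dates and \N markers become empty fields, which COPY can
    # treat as NULL via: WITH (FORMAT csv, NULL '')
    reader = csv.reader(src)
    writer = csv.writer(dst, lineterminator='\n')
    bad = {'0000-00-00 00:00:00', '0000-00-00', r'\N'}
    for row in reader:
        writer.writerow('' if cell in bad else cell for cell in row)

# usage sketch with in-memory data
raw = io.StringIO('1,0000-00-00 00:00:00,\\N\n2,2020-12-04 10:05:00,ok\n')
out = io.StringIO()
clean_mysql_csv(raw, out)
print(out.getvalue())
```

    The cleaned file can then be loaded with psql's \copy, e.g. \copy mytable FROM 'clean.csv' WITH (FORMAT csv, NULL '') (table and file names are placeholders).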

  • 2020-12-04 10:19

    This question is a little old but a few days ago I was dealing with this situation and found pgloader.io.

    This is by far the easiest way of doing it. You only need to install it and then run a simple command file (script.lisp) with the following three lines:

    /* content of the script.lisp */
    LOAD DATABASE
    FROM mysql://dbuser@localhost/dbname
    INTO postgresql://dbuser@localhost/dbname;
    
    
    /*run this in the terminal*/
    pgloader script.lisp
    

    And after that your PostgreSQL DB will have all of the information that you had in your MySQL DB.

    On a side note, make sure you compile pgloader from source, since at the time of this post the installer has a bug (version 3.2.0).
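    Beyond the three-line script, pgloader's command language also accepts CAST rules, which help with MySQL quirks such as zero dates. A sketch adapted from pgloader's documented cast syntax (connection strings are placeholders):

```
LOAD DATABASE
     FROM mysql://dbuser@localhost/dbname
     INTO postgresql://dbuser@localhost/dbname
CAST type datetime to timestamptz
          drop default drop not null using zero-dates-to-null,
     type date drop not null drop default using zero-dates-to-null;
```

    These casts convert MySQL datetime/date columns to timestamptz/date and turn '0000-00-00' values into NULL during the load.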

  • 2020-12-04 10:20

    Here is a simple program to create and load all tables from a MySQL database (honey) into PostgreSQL. Type conversion from MySQL is coarse-grained but easily refined. You will have to recreate the indexes manually:

    import MySQLdb
    from magic import Connect  # private MySQL connection helper
    import psycopg2
    
    dbx = Connect()
    DB = psycopg2.connect("dbname='honey'")
    DC = DB.cursor()
    
    dbx.execute('''show tables from honey''')
    tables = [row[0] for row in dbx.fetchall()]
    for table in tables:
        dbx.execute('''describe honey.%s''' % table)
        rows = dbx.fetchall()
        DC.execute('drop table if exists %s' % table)
        DB.commit()
    
        psql = 'create table %s (' % table
        for row in rows:
            name, coltype = row[0], row[1]
            if 'int' in coltype: coltype = 'int8'
            if 'blob' in coltype: coltype = 'bytea'
            if 'datetime' in coltype: coltype = 'timestamptz'
            psql += '%s %s,' % (name, coltype)
        psql = psql.rstrip(',') + ')'
        print(psql)
        try:
            DC.execute(psql)
            DB.commit()
        except psycopg2.Error:
            DB.rollback()  # a failed create leaves the transaction aborted
    
        dbx.execute('''select * from honey.%s''' % table)
        rows = dbx.fetchall()
        n = len(rows); print(n); t = n
        if n == 0: continue  # skip if no data
    
        placeholders = ', '.join(['%s'] * len(rows[0]))
        psql = 'insert into %s values(%s)' % (table, placeholders)
        for row in rows:
            DC.execute(psql, row)
            n -= 1
            if n % 1000 == 1: DB.commit(); print(n, t, t - n)
        DB.commit()
    
  • 2020-12-04 10:22

    As with most database migrations, there isn't really a cut and dried solution.

    These are some ideas to keep in mind when doing a migration:

    1. Data types aren't going to match. Some will, some won't. For example, SQL Server bits (boolean) don't have an equivalent in Oracle.
    2. Primary key sequences will be generated differently in each database.
    3. Foreign keys will point to your new sequences.
    4. Indexes will be different and will probably need tweaking.
    5. Any stored procedures will have to be rewritten.
    6. Schemas. MySQL doesn't use them (at least not when I last used it); PostgreSQL does. Don't put everything in the public schema. It is a bad practice, but most apps (Django comes to mind) that support MySQL and PostgreSQL will try to make you use the public schema.
    7. Data migration. You are going to have to insert everything from the old database into the new one. This means disabling primary and foreign keys, inserting the data, then re-enabling them. Also, all of your new sequences will have to be reset to the highest id in each table; otherwise the next record inserted will fail with a primary-key violation.
    8. Rewriting your code to work with the new database. It should work, but probably won't.
    9. Don't forget the triggers. I use create- and update-date triggers on most of my tables. Each DB handles them a little differently.
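    For item 7, the sequence reset can be scripted in PostgreSQL. A sketch, assuming a migrated table car with a serial column id (names are hypothetical):

```sql
-- point the sequence behind car.id at the current maximum id
SELECT setval(pg_get_serial_sequence('car', 'id'),
              COALESCE((SELECT MAX(id) FROM car), 1));
```

    Repeat per table, or generate the statements from information_schema.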

    Keep these in mind. The best way is probably to write a conversion utility. Have a happy conversion!
