Migrate from MySQL to PostgreSQL on Linux (Kubuntu)

Asked by 孤独总比滥情好 on 2020-12-29 07:43 · 4 answers · 1401 views

A long time ago on a system far, far away...

Trying to migrate a database from MySQL to PostgreSQL. All the documentation I have read covers, in great detail, how

4 Answers
  • 2020-12-29 08:06

    What I usually do for such migrations is two-fold:

    • Extract the whole database definition from MySQL and adapt it to PostgreSQL syntax.
    • Go over the database definition and transform it to take advantage of functionality in PostgreSQL that doesn't exist in MySQL.

    Then do the conversion, and write a program in whatever language you are most comfortable with that accomplishes the following:

    • Reads the data from the MySQL database.
    • Performs whatever transformation is necessary on the data to be stored in the PostgreSQL database.
    • Saves the now-transformed data in the PostgreSQL database.

    Redesign the tables for PostgreSQL to take advantage of its features.

    If you just do something like use a sed script to convert the SQL dump from one format to the next, all you are doing is putting a MySQL database in a PostgreSQL server. You can do that, and there will still be some benefit from doing so, but if you're going to migrate, migrate fully.

    It involves a bit more time spent up front, but I have yet to come across a situation where it wasn't worth it.
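    The read/transform/write program described above can be sketched in Python. The conversions shown here (MySQL zero-dates to NULL, `tinyint(1)` to boolean) are just common examples, not a complete rule set, and the column names are hypothetical; the actual read and write halves would use your MySQL and PostgreSQL drivers of choice.

```python
# Sketch of the transformation layer in a read -> transform -> write migration.
# The rules below are common MySQL-to-PostgreSQL fixes; a real migration
# needs its own rules, derived from the redesigned PostgreSQL schema.

def transform_value(value):
    """Map MySQL-isms onto values PostgreSQL will accept."""
    if value in ("0000-00-00", "0000-00-00 00:00:00"):
        return None  # MySQL zero-dates have no PostgreSQL equivalent
    return value

def transform_row(row, boolean_columns=()):
    """Apply per-value fixes, coercing tinyint(1) columns to real booleans."""
    out = {}
    for column, value in row.items():
        value = transform_value(value)
        if column in boolean_columns and value is not None:
            value = bool(value)  # MySQL stores booleans as 0/1 tinyints
        out[column] = value
    return out
```

    The reading and saving steps are then straightforward driver calls (e.g. with mysql-connector-python and psycopg2); the transformation layer is where all the schema-specific work lives.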

  • 2020-12-29 08:08

    Check out etlalchemy. It allows you to migrate from MySQL to PostgreSQL, or between several other databases, in four lines of Python. You can read more about it here.

    To install: pip install etlalchemy

    To run:

    from etlalchemy import ETLAlchemySource, ETLAlchemyTarget
    # Migrate from MySQL to PostgreSQL
    src = ETLAlchemySource("mysql://user:passwd@hostname/dbname")
    tgt = ETLAlchemyTarget("postgresql://user:passwd@hostname/dbname",
                           drop_database=True)
    tgt.addSource(src)
    tgt.migrate()
    
  • 2020-12-29 08:10

    Convert the mysqldump file to a PostgreSQL-friendly format

    Convert the data as follows (do not use mysql2pgsql.perl):

    1. Escape the quotes.

      sed "s/\\\'/\'\'/g" climate-my.sql | sed "s/\\\r/\r/g" | sed "s/\\\n/\n/g" > escaped-my.sql

    2. Replace USE "climate"; with a search path, and comment out the MySQL conditional comments:

      sed "s/USE \"climate\";/SET search_path TO climate;/g" escaped-my.sql | sed "s/^\/\*/--/" > climate-pg.sql

    3. Connect to the database.

      sudo su - postgres
      psql climate

    4. Set the encoding (mysqldump ignores its encoding parameter) and then execute the script.

      \encoding iso-8859-1
      \i climate-pg.sql

    This series of steps will probably not work for complex databases with many mixed types. However, it works for integers, varchars, and floats.
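    For reference, the same three text transformations can be done in one pass with Python instead of chained sed, which makes the escaping rules easier to audit (the climate schema name is carried over from the example above):

```python
import re

def mysql_dump_to_pg(sql, schema="climate"):
    """Rewrite a mysqldump file the way the sed pipeline above does."""
    sql = sql.replace("\\'", "''")                       # step 1: MySQL \' -> SQL-standard ''
    sql = sql.replace("\\r", "\r").replace("\\n", "\n")  # literal \r, \n escapes -> real characters
    sql = sql.replace(f'USE "{schema}";',
                      f"SET search_path TO {schema};")   # step 2: schema search path
    sql = re.sub(r"^/\*", "--", sql, flags=re.MULTILINE) # comment out /*! ... */ headers
    return sql
```

    Like the sed version, this only rewrites the opening `/*` of each conditional comment, which is enough for psql to skip the line.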

    Indexes, primary keys, and sequences

    Since mysqldump includes the primary-key values in the INSERT statements it generates, they bypass the tables' automatic sequences. Upon inspection, every sequence was still at 1 after the import.

    Set the sequence after import

    Using the ALTER SEQUENCE command will set them to whatever value is needed.
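    Assuming the default serial sequence names (`<table>_<column>_seq` — an assumption; check yours with `\d`), the per-table statements can be generated rather than typed out. `setval()` is used here because, unlike `ALTER SEQUENCE ... RESTART WITH`, it can take its value from a query over the imported data:

```python
def setval_statements(tables, id_column="id"):
    """Emit one setval() per table so each sequence resumes after the
    highest imported primary key. Assumes default serial sequence names
    (<table>_<column>_seq); adjust for custom sequence names."""
    return [
        f"SELECT setval('{t}_{id_column}_seq', COALESCE(MAX({id_column}), 1)) FROM {t};"
        for t in tables
    ]
```

    Paste the generated statements into the same psql session after `\i climate-pg.sql` completes.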

    Schema Prefix

    There is no need to prefix tables with the schema name. Use:

    SET search_path TO climate;
    
  • 2020-12-29 08:31

    If you've already converted the schema, migrating the data is the easy part:

    • dump the schema from PostgreSQL (you said you've already converted the schema to Postgres, so dump it now, because we will be dropping and recreating the target database to start clean):

      pg_dump dbname > /tmp/dbname-schema.sql
      
    • split the schema into two parts: /tmp/dbname-schema-1.sql containing the CREATE TABLE statements, and /tmp/dbname-schema-2.sql with the rest. PostgreSQL needs the data imported after the table definitions, but before the foreign keys, triggers, etc. are created.

    • recreate the database with only the first part of the schema:

      drop database dbname;
      create database dbname;
      \c dbname
      \i /tmp/dbname-schema-1.sql
      -- now we have tables without data, triggers, foreign keys etc.
      
    • import data:

      (
         echo 'start transaction';
         mysqldump --skip-quote-names dbname | grep ^INSERT;
         echo 'commit'
      ) | psql dbname
      -- now we have tables with data, but without triggers, foreign keys etc.
      

      The --skip-quote-names option was added in MySQL 5.1.3, so if you have an older version, install a newer mysql temporarily in /tmp/mysql (configure --prefix=/tmp/mysql && make install should do) and use /tmp/mysql/bin/mysqldump.

    • import the rest of schema:

      psql dbname
      start transaction;
      \i /tmp/dbname-schema-2.sql
      commit;
      -- we're done
      
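    The schema split in the steps above is mechanical enough to script. A minimal sketch, assuming one statement per semicolon and no semicolons inside function bodies (which holds for plain table-and-constraint dumps):

```python
def split_schema(schema_sql):
    """Split a pg_dump schema into (CREATE TABLE part, everything else).
    Naive statement splitting on ';' -- fine for plain table/constraint
    dumps, but not for dumps containing function bodies with ';' inside."""
    part1, part2 = [], []
    for stmt in schema_sql.split(";"):
        stmt = stmt.strip()
        if not stmt:
            continue
        target = part1 if stmt.upper().startswith("CREATE TABLE") else part2
        target.append(stmt + ";")
    return "\n".join(part1), "\n".join(part2)
```

    Write the first return value to /tmp/dbname-schema-1.sql and the second to /tmp/dbname-schema-2.sql, then proceed as above.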