improve speed of mysql import

front-end · unresolved · 9 answers · 1539 views
Asked by 無奈伤痛 on 2021-01-31 14:26

I have a large database of 22GB. I used to take a backup with the mysqldump command in gzip format.

When I extract the gz file it produces the

9 answers
  • 2021-01-31 15:07

    A lot of information that is needed to fully understand the cause of the problem is missing, such as:

    1. MySQL version
    2. Disk type and speed
    3. Free memory on the server before you start MySQL server
    4. iostat output before and at the time of the mysqldump.
    5. The parameters you used to create the dump file in the first place.

    and many more.

    So I'll guess that your problem is in the disks, because I manage 150 MySQL instances (with 3TB of data on one of them), and usually the disk is the problem.

    Now to the solution:

    First of all - your MySQL is not configured for best performance.

    You can read about the most important settings to configure at Percona blog post: http://www.percona.com/blog/2014/01/28/10-mysql-settings-to-tune-after-installation/

    Especially check the parameters:

    innodb_buffer_pool_size 
    innodb_flush_log_at_trx_commit
    innodb_flush_method
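
    For example, a my.cnf fragment along these lines (the numbers are illustrative starting points only, not tuned values - size them for your hardware as the Percona post describes):

```ini
[mysqld]
# Roughly 70-80% of RAM on a dedicated database server
innodb_buffer_pool_size = 16G

# 2 = write the log at each commit but fsync it only once per second;
# faster than the default of 1, at the cost of losing up to ~1s of
# transactions on a crash
innodb_flush_log_at_trx_commit = 2

# Bypass the OS page cache for data files to avoid double buffering
innodb_flush_method = O_DIRECT
```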
    

    If your problem is the disk, then reading the dump file from the same drive makes the problem worse.

    And if your MySQL server starts to swap because it does not have enough RAM available, your problem becomes even bigger.

    You need to run diagnostics on your machine before and during the restore procedure to figure that out.
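
    Those diagnostics can be sketched as a small script (the intervals and the `.` path are illustrative; `iostat` comes from the sysstat package and is skipped if absent):

```shell
#!/bin/sh
# Snapshot of disk and memory pressure; run it once before the restore
# and again while the restore is running, then compare.
df -h .                                          # free space on this filesystem
command -v free   >/dev/null && free -m          # RAM and swap usage
command -v vmstat >/dev/null && vmstat 1 3       # non-zero si/so columns mean swapping
command -v iostat >/dev/null && iostat -dx 1 3   # %util near 100 means the disk is the bottleneck
exit 0
```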

    Furthermore, I can suggest another technique for the rebuild task, which works faster than mysqldump.

    It is Percona Xtrabackup - http://www.percona.com/doc/percona-xtrabackup/2.2/

    You will need to create the backup with it and restore from it, or rebuild directly from the running server with the streaming option.
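
    A streaming rebuild might look roughly like this (the host names, credentials and paths in brackets are placeholders, and both machines need percona-xtrabackup installed):

```sh
# Stream the backup from the production server straight to the target host:
innobackupex --user=[username] --password=[password] --stream=tar /tmp \
    | ssh [user]@[desthost] "tar -xif - -C /data/backup"

# On the target host, apply the transaction log before starting MySQL on
# the copy (note the -i flag to tar above: streamed xtrabackup tar
# archives must be extracted with it):
innobackupex --apply-log /data/backup
```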

    Also, as of MySQL 5.5, InnoDB performs faster than MyISAM. Consider converting all your tables to it.
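
    A bulk conversion pass can be sketched with the mysql client (bracketed values are placeholders; review alter.sql before feeding it back to the server):

```sh
# Generate one ALTER TABLE statement per MyISAM table into alter.sql.
mysql -u [username] -p -N -e "
  SELECT CONCAT('ALTER TABLE \`', table_schema, '\`.\`', table_name, '\` ENGINE=InnoDB;')
  FROM information_schema.tables
  WHERE engine = 'MyISAM'
    AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema');
" > alter.sql

# After reviewing it:
# mysql -u [username] -p < alter.sql
```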

  • 2021-01-31 15:13

    I've had to deal with the same issue. I found using mysqldump to output to CSV files (like this):

    mysqldump -u [username] -p -t -T/path/to/db/directory [database] --fields-enclosed-by=\" --fields-terminated-by=,

    (Note that with -T the server process itself writes the per-table files, so /path/to/db/directory must be writable by mysqld and permitted by its secure_file_priv setting.)

    and then importing that data using the LOAD DATA INFILE query from within the mysql client (like this):

    LOAD DATA INFILE '/path/to/db/directory/table.csv' INTO TABLE [table] FIELDS TERMINATED BY ',' ENCLOSED BY '"';

    to be about an order of magnitude faster than just executing the SQL queries containing the data. Of course, this also depends on the tables already being created (and empty).

    You can of course do that as well by exporting and then importing your empty schema first.
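
    That two-step export can be sketched as follows (same bracket placeholders as above; --no-data and --no-create-info are the long forms of -d and -t):

```sh
# 1. Export the schema alone, then the data alone:
mysqldump -u [username] -p --no-data [database] > schema.sql
mysqldump -u [username] -p --no-create-info -T/path/to/db/directory [database] \
    --fields-enclosed-by=\" --fields-terminated-by=,

# 2. On the target, create the empty tables first, then bulk-load each data file:
mysql -u [username] -p [database] < schema.sql
```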

  • 2021-01-31 15:13

    I'm not sure it's an option for you, but the best way to go about this is what Tata and AndySavage already said: take a snapshot of the data files from the production server and then install them on your local box using Percona's innobackupex. It will back up InnoDB tables in a consistent way and take a write lock on MyISAM tables.

    Prepare a full backup on the production machine:

    http://www.percona.com/doc/percona-xtrabackup/2.1/innobackupex/preparing_a_backup_ibk.html

    Copy the backed-up files to your local machine (or pipe them via SSH while making the backup - more info here) and restore them:

    Restore the backup:

    http://www.percona.com/doc/percona-xtrabackup/2.1/innobackupex/restoring_a_backup_ibk.html

    You can find the full documentation of innobackupex here: http://www.percona.com/doc/percona-xtrabackup/2.1/innobackupex/innobackupex_script.html

    The restoration time will be MUCH faster than reading an SQL dump.
