Improve speed of MySQL import

無奈伤痛 2021-01-31 14:26

I have a large database of 22 GB. I used to take backups with the mysqldump command in gzip format.

When I extract the .gz file it produces the

9 Answers
  • 2021-01-31 15:01

    Way 1: Disable the foreign keys as fakedrake suggested.

    SET AUTOCOMMIT = 0; SET FOREIGN_KEY_CHECKS=0

    Way 2: Use BigDump, it will chunk your mysqldump file and then import that. http://www.ozerov.de/bigdump/usage/

    Question: You said that you are uploading? How are you importing your dump? Not directly from the server / command line?
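
    For Way 1, a minimal sketch of how those settings could wrap the whole import; the dump path and the schema name mydb below are placeholders:

    mysql -u root -p mydb <<'SQL'
    -- relax checks for the bulk load, then restore normal behaviour
    SET autocommit = 0;
    SET foreign_key_checks = 0;
    SOURCE /path/to/dump.sql;
    COMMIT;
    SET foreign_key_checks = 1;
    SQL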

  • 2021-01-31 15:01

    Get more RAM, get a faster processor, get an SSD for faster writes. Batch the inserts so they will run faster than a bunch of individual inserts. It's a huge file, and will take time.

  • 2021-01-31 15:02

    One thing you can do is

    SET AUTOCOMMIT = 0; SET FOREIGN_KEY_CHECKS=0
    

    And you can also play with the values

    innodb_buffer_pool_size
    innodb_additional_mem_pool_size
    innodb_flush_method
    

    in my.cnf to get you going, but in general you should have a look at the rest of the InnoDB parameters as well to see what suits you best.
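
    Purely as an illustration, a my.cnf fragment along those lines might look like the following; the sizes are placeholders to tune for your own RAM and disks, and innodb_additional_mem_pool_size only applies to older servers (it was removed in MySQL 5.7):

    [mysqld]
    # placeholder sizes - a common rule of thumb is to give the buffer pool
    # a large share of RAM on a dedicated database server
    innodb_buffer_pool_size = 8G
    innodb_flush_method     = O_DIRECT
    # only meaningful on MySQL < 5.7, where this option still exists
    innodb_additional_mem_pool_size = 16M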

    This is a problem I have had in the past, and I don't feel I ever tackled it completely, but I wish I had pointed myself in this direction from the get-go. It would have saved me quite some time.

  • 2021-01-31 15:02

    Make sure you increase your max_allowed_packet variable to a large enough size. This will really help if you have a lot of text data. Using high-performance hardware will surely improve the speed of importing data.

    mysql --max_allowed_packet=256M -u root -p < "database-file.sql"
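
    If you would rather set it permanently on the server side, a small my.cnf sketch (the value is only an example):

    [mysqld]
    # example value; must be at least as large as the largest single packet in the dump
    max_allowed_packet = 256M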
    
  • 2021-01-31 15:06

    Doing a dump and restore in the manner described will mean MySQL has to completely rebuild indexes as the data is imported. It also has to parse the data each time.

    It would be much more efficient if you could copy data files in a format MySQL already understands. A good way of doing this is to use innobackupex from Percona (open source and distributed as part of XtraBackup, available to download from here).

    This will take a snapshot of MyISAM tables, and for InnoDB tables it will copy the underlying files, then replay the transaction log against them to ensure a consistent state. It can do this from a live server with no downtime (I have no idea if that is a requirement of yours?)

    I suggest you read the documentation, but to take a backup in its simplest form use:

    $ innobackupex --user=DBUSER --password=DBUSERPASS /path/to/BACKUP-DIR/
    $ innobackupex --apply-log /path/to/BACKUP-DIR/
    

    If the data is on the same machine, then innobackupex even has a simple restore command:

    $ innobackupex --copy-back /path/to/BACKUP-DIR
    

    There are many more options and different ways of actually doing the backup, so I would really encourage you to have a good read of the documentation before you begin.

    For reference to speed, our slow test server, which does about 600 IOPS, can restore a 500 GB backup in about 4 hours using this method.

    Lastly: you mentioned what could be done to speed up importing. It is mostly going to depend on what the bottleneck is. Typically, import operations are I/O bound (you can test this by checking for I/O waits), and the way to speed that up is with faster disk throughput, either faster disks themselves or more of them in unison.
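
    For example, assuming the sysstat package is installed, a quick way to check whether the restore is I/O bound is to watch iowait and device utilisation while the import runs:

    $ iostat -x 5
    # consistently high %iowait / %util and long await times while mysql is
    # writing suggest the disks, not the CPU, are the limiting factor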

  • 2021-01-31 15:06

    The method described in [Vinbot's answer above][1] using LOAD DATA INFILE is how I bring in about 1 GB every day for an analysis process on my local desktop (I don't have DBA or CREATE TABLE rights on the server, but I do on my local MySQL).

    A new feature introduced in MySQL 8.0.17, the [MySQL Parallel Table Import Utility][2], takes it to the next level.

    An import of CSV tables that formerly took about 15 minutes (approx. 1 GB) now takes 5:30, on an Intel Core i7-6820HQ with a SATA SSD. When I added an NVMe M.2 1 TB WD Black drive (bought for an old desktop but it proved incompatible) and moved the MySQL installation to that drive, the time dropped to 4 min 15 sec.

    I define most of my indexes in table definitions prior to running the utility. The loads are even faster without indexing, but post-load indexing ends up taking more total time. This makes sense, as the multi-core feature of the Parallel Loader extends to index creation.

    I also run ALTER INSTANCE DISABLE INNODB REDO_LOG (introduced in 8.0.21) in the parallel loader utility script. Heed the warning not to leave this off once you are finished with the bulk load: I did not re-enable it once and ended up with a corrupted instance (not just tables, but the whole instance). I keep double-write buffering off, always.
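
    For reference, the pair of statements looks like this (MySQL 8.0.21+; the session needs the INNODB_REDO_LOG_ENABLE privilege), and the second one is the statement that must not be forgotten:

    ALTER INSTANCE DISABLE INNODB REDO_LOG;
    -- ... run the bulk load here ...
    ALTER INSTANCE ENABLE INNODB REDO_LOG;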

    The CPU monitor shows the utility fully utilizes all 8 cores.

    Once done with the parallel loader, it's back to single-threaded MySQL (for my linear set of analysis tasks, not multi-user). The new NVMe cuts times by 10% or so. The utility saves me several minutes, every day.

    The utility allows you to manage buffer sizes and the number of threads. I match the number of physical cores in my CPU (8), and that seems optimal. (I originally came to this thread looking for optimization tips on configuring the parallel loader.)

    [1]: https://stackoverflow.com/a/29922299/5839677
    [2]: https://dev.mysql.com/doc/mysql-shell/8.0/en/mysql-shell-utilities-parallel-table.html
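
    As a rough sketch of how the utility is invoked from MySQL Shell (the file, schema, table, and option values below are placeholders, and local_infile must be enabled on the server):

    $ mysqlsh --mysql -u root -p --js -e '
        util.importTable("/data/big_table.csv", {
          schema: "mydb",           // placeholder schema
          table: "big_table",       // placeholder table
          dialect: "csv-unix",      // comma-separated, LF line endings
          skipRows: 1,              // skip a header row
          threads: 8,               // match physical cores, as noted above
          bytesPerChunk: "256M"     // chunk size handed to each thread
        })
      '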
