Question:
Possible Duplicate: Speeding up mysql dumps and imports
mysqldump is reasonably fast, but dumps of a medium-sized database (20-30 megs) take several minutes to load using mysql my_database < my_dump_file.sql
Are there some mysql settings I can tune to speed up the load? Is there a better way to load saved data?
I've experimented with the mysqlimport utility and CSV-based dumps. These load slightly faster, but not appreciably so. I'm tempted to just copy the raw database files around, but that seems like a bad idea.
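For reference, the CSV route being described can look like this; a minimal sketch, assuming a scratch directory of /tmp/dump that the MySQL server process can write to (the path and thread count are illustrative):

mysqldump --tab=/tmp/dump my_database
# --tab writes one .sql file (schema) and one .txt file (tab-separated data) per table
mysqlimport --use-threads=4 my_database /tmp/dump/*.txt
# mysqlimport derives each target table name from its data file's name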
Answer 1:
maatkit - parallel dump and maatkit - parallel restore. Very fast.
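The tools behind those links are mk-parallel-dump and mk-parallel-restore from the maatkit suite. A minimal sketch of an invocation; the flag names below are assumptions from memory of maatkit's options, so check mk-parallel-dump --help for your version:

mk-parallel-dump --base-dir /backups --threads 4
# dumps tables in parallel into per-database subdirectories under /backups
mk-parallel-restore --threads 4 /backups
# reloads the tables from that directory, again in parallel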
Answer 2:
Assuming that you're using InnoDB...
I was in the situation of having a pile of existing mysqldump output files that I wanted to import in a reasonable time. The tables (one per file) were about 500MB and contained about 5,000,000 rows of data each. Using the following parameters I was able to reduce the insert time from 32 minutes to under 3 minutes.
innodb_flush_log_at_trx_commit = 2
innodb_log_file_size = 256M
innodb_flush_method = O_DIRECT
You'll also need a reasonably large innodb_buffer_pool_size setting.
Because my inserts were a one-off, I reverted the settings afterwards. If you're going to keep using them long-term, make sure you know what they're doing.
I found the suggestion to use these settings on Cedric Nilly's blog; a detailed explanation of each setting can be found in the MySQL documentation.
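Of those three settings, only the first is dynamic. A minimal sketch of applying and verifying them, assuming the other two go in the [mysqld] section of my.cnf followed by a server restart (note that on older MySQL versions, changing innodb_log_file_size also required removing the old ib_logfile* files after a clean shutdown; check the documentation for your version):

mysql -u root -pxxx -e "SET GLOBAL innodb_flush_log_at_trx_commit = 2;"
# innodb_log_file_size and innodb_flush_method must be set in my.cnf; after restarting, verify:
mysql -u root -pxxx -e "SHOW VARIABLES LIKE 'innodb_flush%';"
mysql -u root -pxxx -e "SHOW VARIABLES LIKE 'innodb_log_file_size';"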
Answer 3:
Make sure you are using the --opt option to mysqldump when dumping (it is enabled by default in modern versions). This uses bulk insert syntax, delays key updates, and so on.
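A minimal sketch of a dump produced this way, reusing the names from the question:

mysqldump --opt my_database > my_dump_file.sql
# --opt bundles --extended-insert (bulk INSERTs), --disable-keys (delayed key updates),
# --quick, --lock-tables, and friends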
If you are ONLY using MyISAM tables, you can safely copy the raw table files: stop the source server, copy the files to a stopped destination server, and start that server up.
If you don't want to stop the origin server, you can follow this (see the sketch below):
- Get a read lock on all tables
- Flush all tables
- Copy the files
- Unlock the tables
But I'm pretty sure the destination server needs to be stopped when you put the files in place.
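A minimal sketch of those four steps, assuming the default /var/lib/mysql data directory (the SELECT SLEEP trick just keeps the locking session alive from a script; interactively you would simply leave the mysql client open until the copy is done):

mysql -pxxx -e "FLUSH TABLES WITH READ LOCK; SELECT SLEEP(600);" &
# FLUSH TABLES WITH READ LOCK covers steps 1 and 2; the lock holds only while this session lives
lock_pid=$!
sleep 5    # crude wait for the lock to be acquired
cp -a /var/lib/mysql/my_database /backup/my_database
# step 3: copy the raw MyISAM files (.frm, .MYD, .MYI) while the lock is held
kill $lock_pid
# step 4: ending the client session releases the read lock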
Answer 4:
Are you sure the data is sane, and there aren't any filesystem or system performance issues? Several minutes for a 20-30 meg database is a long time. I'm on a MacBook with 2GB of RAM, a 320GB HD and the standard 2.1GHz processor. I grabbed one of my databases for a quick benchmark:
gavinlaking$ du -sm 2009-07-12.glis
74 2009-07-12.glis
gavinlaking$ mysql -pxxx -e "drop database glis"
gavinlaking$ mysql -pxxx -e "create database glis"
gavinlaking$ time mysql -pxxx glis < 2009-07-12.glis
real 0m17.009s
user 0m2.021s
sys 0m0.301s
17 seconds for a 74 megabyte file. That seems pretty snappy to me. Even if it were 4 times bigger (making it just shy of 300 megabytes), it would finish in just under 70 seconds.
Answer 5:
Try out https://launchpad.net/mydumper - a multi-threaded MySQL backup/restore tool that is 3x to 10x faster than mysqldump: http://vbtechsupport.com/1695/
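A minimal sketch of a dump-and-restore cycle with it; the flag names below match early mydumper releases, but confirm them against mydumper --help:

mydumper --database my_database --threads 4 --outputdir /backups/my_database
# dumps the database with 4 worker threads, writing one data file per table
myloader --directory /backups/my_database --threads 4 --overwrite-tables
# reloads the dump in parallel, dropping and recreating any existing tables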
Answer 6:
There is a method of using LVM snapshots for backup and restore that might be an interesting option for you.
Instead of doing a mysqldump, consider using LVM to take snapshots of your MySQL data directories. Using LVM snapshots allows you to have near-real-time backup capability, support for all storage engines, and incredibly fast recovery. To quote from the link below,
"Recovery time is as fast as putting data back and standard MySQL crash recovery, and it can be reduced even further."
http://www.mysqlperformanceblog.com/2006/08/21/using-lvm-for-mysql-backup-and-replication-setup/
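A minimal sketch of that workflow, assuming the data directory sits on a logical volume named /dev/vg0/mysql_data (the volume, mount point, and snapshot size are all illustrative):

# In a mysql session, run FLUSH TABLES WITH READ LOCK and keep that session open,
# then take the snapshot (near-instant regardless of data size):
lvcreate --snapshot --size 1G --name mysql_snap /dev/vg0/mysql_data
# Back in the mysql session: UNLOCK TABLES; the server was only locked for seconds.
# Mount the snapshot, copy the data off at leisure, then drop the snapshot:
mount /dev/vg0/mysql_snap /mnt/mysql_snap
rsync -a /mnt/mysql_snap/ /backup/mysql/
umount /mnt/mysql_snap
lvremove -f /dev/vg0/mysql_snap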
Source: https://stackoverflow.com/questions/1112069/is-there-a-faster-way-to-load-mysqldumps