I'm currently working on creating an environment to test performance of an app; I'm testing with MySQL and InnoDB to find out which can serve us best. Within this environment…
I had issues doing a lot of bulk importing and recommend the accepted answer. I found you can also speed things up significantly by making sure

    innodb_log_file_size * innodb_log_files_in_group

is sufficient to avoid writing to disk at sub-second frequency (a quick check is sketched just below). The defaults of 5M * 2 will not be enough on a modern system; for details see `innodb_log_file_size` and `innodb_log_files_in_group`.
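A minimal way to check what you are currently running with (the 512M figure in the comment is only an illustrative value, not a recommendation):

```sql
-- Total redo log capacity is innodb_log_file_size * innodb_log_files_in_group;
-- with the old defaults of 5M * 2 that is only 10 MB.
SHOW VARIABLES LIKE 'innodb_log_file%';

-- Both variables are read-only at runtime. To enlarge the redo log, set e.g.
--   innodb_log_file_size      = 512M
--   innodb_log_files_in_group = 2
-- in my.cnf and restart the server (very old MySQL versions also need the old
-- ib_logfile* files removed after a clean shutdown; check the manual for your
-- version).
```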
I found the hard drive to be the bottleneck: old-fashioned disks are hopeless, and SSDs are okay-ish but still far from perfect. Importing into tmpfs and copying the data out afterwards is much faster; details: https://dba.stackexchange.com/a/89367/56667
Have you tried starting a transaction at the outset and committing it at the end? From the question you linked: "Modify the Insert Data step to start a transaction at the start and to commit it at the end. You will get an improvement, I guarantee it."
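A rough sketch of that pattern (the table and column names are invented for illustration):

```sql
-- Wrap the whole bulk insert in one explicit transaction so the log is
-- flushed to disk once at COMMIT instead of after every single INSERT.
START TRANSACTION;

INSERT INTO my_table (id, payload) VALUES (1, 'a');
INSERT INTO my_table (id, payload) VALUES (2, 'b');
-- ... the rest of the bulk INSERT statements ...

COMMIT;
```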
Remember that InnoDB is transactional while MyISAM is not. Transactional engines treat every statement as an individual transaction if you don't control the transaction explicitly, which can be costly.
Did you try the Bulk Data Loading Tips from the InnoDB Performance Tuning Tips (especially the first one):
> When importing data into `InnoDB`, make sure that MySQL does not have autocommit mode enabled because that requires a log flush to disk for every insert. To disable autocommit during your import operation, surround it with `SET autocommit` and `COMMIT` statements:
>
>     SET autocommit=0;
>     ... SQL import statements ...
>     COMMIT;
>
> If you use the mysqldump option `--opt`, you get dump files that are fast to import into an `InnoDB` table, even without wrapping them with the `SET autocommit` and `COMMIT` statements.
>
> If you have `UNIQUE` constraints on secondary keys, you can speed up table imports by temporarily turning off the uniqueness checks during the import session:
>
>     SET unique_checks=0;
>     ... SQL import statements ...
>     SET unique_checks=1;
>
> For big tables, this saves a lot of disk I/O because `InnoDB` can use its insert buffer to write secondary index records in a batch. Be certain that the data contains no duplicate keys.
>
> If you have `FOREIGN KEY` constraints in your tables, you can speed up table imports by turning the foreign key checks off for the duration of the import session:
>
>     SET foreign_key_checks=0;
>     ... SQL import statements ...
>     SET foreign_key_checks=1;
>
> For big tables, this can save a lot of disk I/O.
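Putting those tips together, a bulk-import session might look roughly like this (the file path is a placeholder, and `source` is the mysql client's command for running a script file):

```sql
-- Disable per-statement flushing and the constraint checks for the duration
-- of the import, then restore them and commit once at the end.
SET autocommit=0;
SET unique_checks=0;
SET foreign_key_checks=0;

-- Run the dump (or paste the INSERT statements directly here):
source /path/to/dump.sql

COMMIT;
SET unique_checks=1;
SET foreign_key_checks=1;
```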
IMO, the whole chapter is worth the read.