InnoDB takes over an hour to import a 600MB file, MyISAM a few minutes

孤城傲影 2020-12-12 09:55

I'm currently working on creating an environment to test the performance of an app; I'm testing with MyISAM and InnoDB to find out which can serve us best. Within this environment …

4 Answers
  • 2020-12-12 10:26

    I ran into the same problems doing a lot of bulk importing and recommend the accepted answer. I found you can also speed things up significantly by:

    1. Dropping all indexes (other than the primary key), loading the data, then re-adding the indexes
    2. Checking that innodb_log_file_size * innodb_log_files_in_group is large enough that InnoDB is not forced to flush to disk at sub-second frequency

    Regarding #2, the defaults of 5M * 2 will not be enough on a modern system. For details see the documentation for innodb_log_file_size and innodb_log_files_in_group
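
    Step 1 can be sketched roughly like this (the table `orders` and index `idx_customer` are hypothetical names for illustration):

    ```sql
    -- Drop secondary indexes before the load; keep the primary key.
    ALTER TABLE orders DROP INDEX idx_customer;

    -- ... run the bulk import (e.g. SOURCE dump.sql or LOAD DATA INFILE) ...

    -- Rebuild the indexes in one pass over the loaded data.
    ALTER TABLE orders ADD INDEX idx_customer (customer_id);

    -- For #2, inspect the current redo log sizing (values in bytes):
    SHOW VARIABLES LIKE 'innodb_log_file%';
    ```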

  • 2020-12-12 10:30

    I found the hard drive to be the bottleneck - old-fashioned disks are hopeless, SSD is okay-ish but still far from perfect. Importing to tmpfs and copying out the data is way faster, details: https://dba.stackexchange.com/a/89367/56667
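
    The tmpfs approach from that link can be sketched like this (paths, datadir location, and tmpfs size are assumptions; requires root and enough RAM for the dataset, with MySQL stopped first):

    ```shell
    # Sketch: import into a RAM-backed datadir, then copy the result out.
    mkdir -p /mnt/ramdisk
    mount -t tmpfs -o size=4G tmpfs /mnt/ramdisk   # RAM-backed filesystem
    cp -a /var/lib/mysql /mnt/ramdisk/mysql        # seed a datadir copy
    mysqld --datadir=/mnt/ramdisk/mysql &          # run MySQL against tmpfs
    mysql mydb < dump.sql                          # fast import, no disk I/O
    mysqladmin shutdown
    cp -a /mnt/ramdisk/mysql /var/lib/mysql        # persist the imported data
    ```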

  • 2020-12-12 10:33

    Have you tried starting a transaction at the outset and committing it at the end? From the question you linked: "Modify the Insert Data step to start a transaction at the start and to commit it at the end. You will get an improvement, I guarantee it."

    Remember that InnoDB is transactional and MyISAM is not. If you don't control transactions explicitly, a transactional engine treats every statement as an individual transaction, and for bulk inserts that is costly.
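
    As a sketch, wrapping the whole import means the redo log is flushed once at COMMIT instead of once per INSERT:

    ```sql
    START TRANSACTION;
    -- ... thousands of INSERT statements from the dump ...
    COMMIT;  -- one durable flush for the whole batch
    ```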

  • 2020-12-12 10:45

    Did you try the Bulk Data Loading Tips from the InnoDB Performance Tuning Tips (especially the first one)?

    • When importing data into InnoDB, make sure that MySQL does not have autocommit mode enabled because that requires a log flush to disk for every insert. To disable autocommit during your import operation, surround it with SET autocommit and COMMIT statements:

      SET autocommit=0;
      ... SQL import statements ...
      COMMIT;
      

      If you use the mysqldump option --opt, you get dump files that are fast to import into an InnoDB table, even without wrapping them with the SET autocommit and COMMIT statements.

    • If you have UNIQUE constraints on secondary keys, you can speed up table imports by temporarily turning off the uniqueness checks during the import session:

      SET unique_checks=0;
      ... SQL import statements ...
      SET unique_checks=1;
      

      For big tables, this saves a lot of disk I/O because InnoDB can use its insert buffer to write secondary index records in a batch. Be certain that the data contains no duplicate keys.

    • If you have FOREIGN KEY constraints in your tables, you can speed up table imports by turning the foreign key checks off for the duration of the import session:

      SET foreign_key_checks=0;
      ... SQL import statements ...
      SET foreign_key_checks=1;
      

      For big tables, this can save a lot of disk I/O.

    IMO, the whole chapter is worth the read.
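
    Putting the three tips together, a typical import preamble/postamble looks like this sketch (the dump filename is an assumption):

    ```sql
    SET autocommit=0;          -- avoid a log flush per INSERT
    SET unique_checks=0;       -- let InnoDB batch secondary-index writes
    SET foreign_key_checks=0;  -- skip FK validation during the load

    SOURCE dump.sql;           -- the bulk import itself

    SET foreign_key_checks=1;
    SET unique_checks=1;
    COMMIT;                    -- commit the batched work
    SET autocommit=1;
    ```

    Only disable the checks if you trust the dump: duplicates or dangling foreign keys will be loaded silently.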
