I've been working with importing large CSV files of data; usually fewer than 100,000 records. I'm working with PHP and MySQL (InnoDB tables). I needed to use PHP to transform s…
Please check this link:
https://dev.mysql.com/doc/refman/5.5/en/optimizing-innodb-transaction-management.html
InnoDB must flush the log to disk at each transaction commit if that transaction made modifications to the database. When each change is followed by a commit (as with the default autocommit setting), the I/O throughput of the storage device puts a cap on the number of potential operations per second.
Large transactions can affect performance at commit time, since that is when the log flush happens (see the quote above).
Only in the case of a rollback; even that can be mitigated with certain settings (see the link).
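In practice, for a CSV import like yours, that means opening one explicit transaction, running all the INSERTs, and committing once at the end, instead of letting autocommit flush the log after every row. Here is a minimal PHP/PDO sketch; the DSN, table, and column names are made-up placeholders, and PDO itself is an assumption, since the question doesn't say which MySQL API is in use:

```php
<?php
// Hypothetical connection details and schema; replace with your own.
$pdo = new PDO('mysql:host=localhost;dbname=testdb;charset=utf8mb4', 'user', 'secret');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Prepared once, executed per row (4 columns, matching 4 CSV fields).
$stmt = $pdo->prepare(
    'INSERT INTO import_table (col_a, col_b, col_c, col_d) VALUES (?, ?, ?, ?)'
);

$fh = fopen('import.csv', 'r');

$pdo->beginTransaction();            // one transaction for the whole file
try {
    while (($row = fgetcsv($fh)) !== false) {
        $stmt->execute($row);        // rows accumulate in the open transaction
    }
    $pdo->commit();                  // InnoDB flushes the log once, here
} catch (Exception $e) {
    $pdo->rollBack();                // abandon the whole import on any error
    throw $e;
}

fclose($fh);
```

Reusing one prepared statement also means only the row values are sent per iteration, which helps when the server is far away (as in my test below).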
My own little test in .NET (4 fields per record):
INSERT 1 record, no transaction: 60 ms
INSERT 1 record, using a transaction: 158 ms
INSERT 200 records using transactions, commit after each record: 17,778 ms
INSERT 200 records using no transactions: 4,940 ms
INSERT 200 records using transactions, only commit after last record: 4,552 ms
INSERT 1,000 records using transactions, only commit after last record: 21,795 ms
Client in Denmark, server in Belgium (a Google Cloud f1-micro instance).
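The numbers suggest a middle ground if one giant transaction over 100,000 rows feels risky: commit in chunks, so each log flush is amortized over many rows without building one enormous transaction. This is a general pattern, not something I measured above, and the chunk size here is arbitrary; it assumes $pdo, $stmt, and $fh are set up as in the earlier sketch:

```php
<?php
// Assumes $pdo, $stmt, and $fh from the previous sketch.
$chunkSize = 1000;                   // arbitrary; tune against your own numbers
$count = 0;

$pdo->beginTransaction();
while (($row = fgetcsv($fh)) !== false) {
    $stmt->execute($row);
    if (++$count % $chunkSize === 0) {
        $pdo->commit();              // one log flush per chunk...
        $pdo->beginTransaction();    // ...then open the next chunk
    }
}
$pdo->commit();                      // flush whatever remains in the last chunk
```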
I meant to put this in a comment, but the formatting there isn't good enough... so here is my apology in advance ;-)