I have written a C program that parses large XML files and creates files full of INSERT statements; a separate process ingests those files into a MySQL database.
I'd do at least these things according to this link:
Use transactions. They eliminate the per-row INSERT, SYNC-TO-DISK cycle; instead, all of the disk I/O is performed when you COMMIT the transaction.
Use the compressed client/server protocol: gzip-compressing the raw text stream can save as much as 90% of the bandwidth in some cases.
INSERT INTO TableName(Col1,Col2) VALUES (1,1),(1,2),(1,3)
(Less text to send, and fewer statements for the server to parse.)
MySQL with the standard table formats is wonderfully fast as long as it's a write-only table, so the first question is whether you are going to be updating or deleting rows. If not, don't go with InnoDB; there's no need for locking if you are just appending. You can truncate or rename the output file periodically to deal with table size.
Really depends on the engine. If you're using InnoDB, do use transactions (you can't avoid them; with autocommit, each batch is implicitly its own transaction), but make sure they're neither too big nor too small.
If you're using MyISAM, transactions are meaningless. You may achieve better insert speed by disabling and re-enabling indexes, but that is only worthwhile on an empty table.
If you start with an empty table, that's generally best.
LOAD DATA is a winner either way.
If you can't use LOAD DATA INFILE like others have suggested, use prepared queries for inserts.