Question
I have a fairly large MySQL table (11.5 million rows). In terms of data size, the table is ~2GB.
My max_allowed_packet is 64MB. I'm backing up the table using mysqldump by creating batches of inserts (500,000 rows each), because the SQL file produced using the mysqldump option --skip-extended-insert just takes too long to re-insert.
This is what I'm running (from a Perl script):
# Dump the table definition first (schema only, no rows)
`mysqldump -u root -pmypassword --no-data mydb mytable > mybackup.sql`;

# Append the data in 500,000-row batches
my $offset = 0;
while ($offset < $row_count) {
    `mysqldump -u root -p[mypassword] --opt --no-create-info --skip-add-drop-table --where="1 LIMIT $offset, 500000" mydb mytable >> mybackup.sql`;
    $offset += 500000;
}
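For completeness, $row_count has to be populated before the loop; the original script doesn't show how, so the line below is only a sketch using the mysql client (-N suppresses the column header, -e runs a single statement):

# Hypothetical: fetch the row count the loop needs (not shown in the original script)
chomp(my $row_count = `mysql -u root -pmypassword -N -e "SELECT COUNT(*) FROM mytable" mydb`);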
The resulting SQL file is 900MB. Check out the following output of grep -n '\-\- WHERE\: 1 LIMIT' mybackup.sql:
80:-- WHERE: 1 LIMIT 0, 500000
158:-- WHERE: 1 LIMIT 500000, 500000
236:-- WHERE: 1 LIMIT 1000000, 500000
314:-- WHERE: 1 LIMIT 1500000, 500000
392:-- WHERE: 1 LIMIT 2000000, 500000
469:-- WHERE: 1 LIMIT 2500000, 500000
546:-- WHERE: 1 LIMIT 3000000, 500000
623:-- WHERE: 1 LIMIT 3500000, 500000
699:-- WHERE: 1 LIMIT 4000000, 500000
772:-- WHERE: 1 LIMIT 4500000, 500000
846:-- WHERE: 1 LIMIT 5000000, 500000
921:-- WHERE: 1 LIMIT 5500000, 500000
996:-- WHERE: 1 LIMIT 6000000, 500000
1072:-- WHERE: 1 LIMIT 6500000, 500000
1150:-- WHERE: 1 LIMIT 7000000, 500000
1229:-- WHERE: 1 LIMIT 7500000, 500000
1308:-- WHERE: 1 LIMIT 8000000, 500000
1386:-- WHERE: 1 LIMIT 8500000, 500000
1464:-- WHERE: 1 LIMIT 9000000, 500000
1542:-- WHERE: 1 LIMIT 9500000, 500000
1620:-- WHERE: 1 LIMIT 10000000, 500000
1697:-- WHERE: 1 LIMIT 10500000, 500000
1774:-- WHERE: 1 LIMIT 11000000, 500000
1851:-- WHERE: 1 LIMIT 11500000, 500000
...and the result of grep -c 'INSERT INTO ' mybackup.sql is 923.
Each of those 923 INSERT statements is almost exactly 1MB. Why is mysqldump producing so many insert statements for each command? I would have expected to see only 24 insert statements (one per batch), but it seems to be producing about 38 inserts for each batch.
Is there something I can put in my.cnf, or pass to mysqldump, to stop it breaking the dump into 1MB inserts?
mysql Ver 14.14 Distrib 5.5.44
mysqldump Ver 10.13 Distrib 5.5.44
I re-ran the job with the additional net_buffer_length=64M option in the mysqldump commands, but got Warning: option 'net_buffer_length': unsigned value 67108864 adjusted to 16777216. I took a look in my.cnf to see if there was anything set to 16M, and key_buffer and query_cache_size were. I set them both to 64M too and re-ran, but got the same warning.
The resulting dump file seems fine, and the INSERT statements are now ~16MB each. Is it possible to increase that even further? Is there an option capping the allowed buffer length?
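For reference, the amended per-batch command looked like this (a reconstruction; only the --net_buffer_length flag is new relative to the script above):

mysqldump -u root -p[mypassword] --opt --no-create-info --skip-add-drop-table --net_buffer_length=64M --where="1 LIMIT $offset, 500000" mydb mytable >> mybackup.sql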
I set the MySQL net_buffer_length variable in my.cnf to 64M but, like the documentation says, it was capped at its maximum value, which is 1048576 (1MB). The net_buffer_length option to mysqldump, however, let me bring the maximum insert size up to 16MB (even though it was reduced from the requested 64MB).
I'm happy enough to go along with 16MB inserts, but I'd be interested in increasing that if I can.
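The two settings live in different option groups, which is easy to trip over. A minimal my.cnf sketch of where each one goes (the group names are standard; the values are the ones discussed above):

[mysqld]
# server-side variable; the server clamps this at its 1MB (1048576) maximum
net_buffer_length = 1M

[mysqldump]
# read by the mysqldump client; this is what sizes the extended INSERTs (16MB ceiling observed above)
net_buffer_length = 16M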
Just one last thought: it seems I've been completely wasting my time trying to do any kind of batching myself, because mysqldump will do exactly what I want by default. So if I just run:
mysqldump -u root -p[mypassword] --net_buffer_length=16M mydb mytable > mybackup.sql
...for any table, no matter how large, I never have to worry about the inserts being too big because mysqldump will never create one larger than 16MB.
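One quick way to sanity-check that on a finished dump (a sketch using standard awk; with extended inserts each statement is a single line, so this prints the length of the largest INSERT):

awk 'length > max { max = length } END { print max }' mybackup.sql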
I don't know what else --skip-extended-insert could be needed for, but I can't imagine I'll have to use it again.
Answer 1:
mysqldump limits its line length according to your my.ini settings; possibly on your client they are smaller than on your server. The option is net_buffer_length.
Often you have the problem the other way round: on the big server this option has a big value, and when the dump contains 512MB lines you cannot insert them into the local database or the test database.
Stolen from another answer:
To check the default value of this variable, use this: mysqldump --help | grep net_buffer_length
For me it was almost 1MB (i.e. 1046528), and it produced enormous lines in the dump file. According to the 5.1 documentation, the variable can be set between 1024 and 1048576. However, for any value below 4096 it told me this: Warning: option 'net_buffer_length': unsigned value 4095 adjusted to 4096. So presumably the minimum on my system was 4096.
Dumping with this resulted in a much saner SQL file:
mysqldump --net_buffer_length=4096 --create-options --default-character-set="utf8" --host="localhost" --hex-blob --lock-tables --password --quote-names --user="myuser" "mydatabase" "mytable" > mytable.sql
Source: https://stackoverflow.com/questions/32634017/how-to-prevent-mysqldump-from-splitting-dumps-into-1mb-increments