Question
I have a database of about 6GB in size and it has a table with 12.6 million rows. I tried to export the database into a SQL dump by:
mysqldump -u root -p db_name > db_name.sql
When the command finishes, the exported SQL dump file is only about 2GB, and only about 1 million rows of the primary table were exported.
What could possibly be wrong?
Answer 1:
There is a 2GB file-size limit for some reason; the easiest way to get around it is to pipe the dump through split:
mysqldump ... | split -b 250m - filename.sql-
You can also compress the files like this:
mysqldump ... | gzip -9c | split -b 250m - filename.sql.gz-
To restore from a non-compressed file, do this:
cat filename.sql-* | mysql ...
For a compressed file:
cat filename.sql-* | zcat | mysql ...
Of course, if you want a single file, you can then tar the result.
Obviously you can replace the 250m with a different size if you wish.
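The split-and-reassemble round trip can be checked without a MySQL server at all; here is a minimal sketch using a generated dummy file in place of the real mysqldump stream (the filenames and the 250k chunk size are illustrative, chosen small so the example runs quickly):

```shell
# Stand-in for a real mysqldump stream (an actual dump would be far larger).
seq 1 100000 > dump.sql

# Split into 250 KB chunks; use -b 250m for a real multi-GB dump.
split -b 250k dump.sql dump.sql-

# Restore by concatenating the chunks in glob order, then verify the round trip.
cat dump.sql-* > restored.sql
cmp -s dump.sql restored.sql && echo "round trip OK"
```

The chunk suffixes that split appends (-aa, -ab, ...) sort in creation order, which is why a plain `cat filename.sql-*` reassembles the dump correctly.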
Answer 2:
Your filesystem is probably limited to 2GB files.
Answer 3:
This happens because the SQL dump can hit a size limit; you cannot dump the database if it exceeds that limit. If you really want to do this, compress the data with zip, gzip, etc., while dumping it.
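As a sketch of the compress-while-dumping idea, again with a generated dummy file standing in for the real mysqldump output (filenames are illustrative):

```shell
# Stand-in for a real mysqldump stream.
seq 1 100000 > dump.sql

# Compress while writing (-9 = maximum compression), so the large
# uncompressed dump never has to exist on disk as a single file.
gzip -9c dump.sql > dump.sql.gz

# Verify the compressed copy decompresses back to the original bytes.
gzip -dc dump.sql.gz | cmp -s - dump.sql && echo "compressed copy is intact"
```

Plain-text SQL dumps compress very well, so this alone can keep the output comfortably under a 2GB limit.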
Answer 4:
I had a similar issue, though all the tables were exported up to a certain point.
I had removed a column on which an old, redundant view depended, and mysqldump quietly choked trying to 'export' the view.
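If a broken view is the suspect, two things can help locate it (a sketch only; db_name and view_name are illustrative placeholders, and this needs a running MySQL server):

```shell
# --force makes mysqldump log SQL errors and keep going instead of
# stopping partway through, so the rest of the data still gets exported.
mysqldump -u root -p --force db_name > db_name.sql

# List the views in the schema, then CHECK TABLE each one;
# an invalid view is reported in the check output.
mysql -u root -p -e "SELECT table_name FROM information_schema.views WHERE table_schema = 'db_name';"
mysql -u root -p -e "CHECK TABLE db_name.view_name;"
```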
Source: https://stackoverflow.com/questions/5658175/mysqldump-doing-a-partial-backup-incomplete-table-dump