How do I backup a MySQL database?

日久生厌 2021-01-03 05:42

What do I have to consider when backing up a database with millions of entries? Are there any tools (maybe bundled with the MySQL server) that I could use?

5 Answers
  • 2021-01-03 05:44

    mysqlhotcopy is badly named: it only works if you use MyISAM, and it's not hot.

    The problem with mysqldump is the time it takes to restore the backup (but the dump itself can be hot if all your tables are InnoDB; see --single-transaction).

    I recommend using a hot backup tool, like what is available in XtraBackup: http://www.percona.com/docs/wiki/percona-xtrabackup:start
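    As a concrete sketch (the database name mydb and the /backup path are placeholders; credentials are assumed to come from ~/.my.cnf):

```shell
# Hot logical dump: --single-transaction gives a consistent snapshot
# without locking, provided all tables are InnoDB; --quick streams
# rows instead of buffering whole tables in memory.
mysqldump --single-transaction --quick --routines mydb > mydb.sql

# Hot physical backup with XtraBackup (flag names are from recent
# Percona XtraBackup releases and may vary by version).
xtrabackup --backup --target-dir=/backup/mysql
```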

  • 2021-01-03 05:47

    Use the Export tab in phpMyAdmin, the free, easy-to-use web interface for MySQL administration.

  • 2021-01-03 05:55

    Watch out when using mysqldump on large tables with the MyISAM storage engine: the dump takes a read lock on each table while it runs, which blocks writes and can take down busy sites for 5-10 minutes in some cases.

    With InnoDB, by comparison, you get non-blocking backups, because it can serve a consistent snapshot without locking the tables (see --single-transaction), so this is not such an issue.

    If you need to use MyISAM, a common strategy is to replicate to a second MySQL instance and do the mysqldump against the replicated copy instead.
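    A sketch of that strategy (replica.example.com is a placeholder host; pausing the SQL thread freezes the replica at one consistent position for the duration of the dump):

```shell
# Pause replication apply so the data doesn't move under the dump.
mysql -h replica.example.com -e "STOP SLAVE SQL_THREAD;"

# Dump from the replica; the primary stays untouched.
mysqldump -h replica.example.com --all-databases > backup.sql

# Resume replication; the replica catches up from the relay log.
mysql -h replica.example.com -e "START SLAVE SQL_THREAD;"
```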

  • 2021-01-03 06:04

    Depending on your requirements, there are several options that I have used myself:

    • if you don't need hot backups, take down the db server and back up at the filesystem level, e.g. using tar, rsync or similar.
    • if you do need the database server to keep running, you can start with the mysqlhotcopy tool (a Perl script), which locks the tables being backed up and lets you select individual tables and databases.
    • if you want the backup to be portable, use mysqldump, which creates SQL scripts to recreate the data, but is slower than mysqlhotcopy.
    • if you have a copy of the db at a certain point in time, you can also just keep the binlogs (starting at that point) somewhere safe. This is very easy to do and doesn't interfere with the server's operation, but it may not be the fastest to restore, and you have to make sure you don't miss any part of the logs.
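    The binlog option in the last bullet might look like this (paths and file names are the common defaults and may differ on your install; assumes log_bin is enabled in my.cnf):

```shell
# Rotate to a fresh binlog so every older file is closed and safe
# to copy off.
mysql -e "FLUSH BINARY LOGS;"

# Archive all closed binlogs, i.e. everything except the newest,
# still-active file (GNU head syntax).
ls -1 /var/lib/mysql/mysql-bin.* | grep -v '\.index$' | head -n -1 | \
    xargs -I{} cp {} /backup/binlogs/
```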

    Methods I haven't tried, but that make sense to me:

    • if you have a filesystem like ZFS or are running on LVM, it can be a good idea to back up the database via a filesystem snapshot, because snapshots are very, very quick. Just remember to ensure a consistent state of your db for the whole operation, e.g. by doing FLUSH TABLES WITH READ LOCK (and, of course, don't forget UNLOCK TABLES afterwards).
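    A sketch of the LVM variant (volume, mount and backup names are made up; the key point is that the read lock is held by the same client session while the snapshot is created):

```shell
# Hold the global read lock, create the snapshot, then release the
# lock, all in one session. 'system' runs a shell command from the
# mysql client without closing the connection (Unix clients only).
mysql -e "FLUSH TABLES WITH READ LOCK; system lvcreate -s -L 5G -n mysql-snap /dev/vg0/mysql; UNLOCK TABLES;"

# The snapshot can now be mounted and copied without time pressure.
mount /dev/vg0/mysql-snap /mnt/mysql-snap
tar -czf /backup/mysql-snapshot.tar.gz -C /mnt/mysql-snap .
umount /mnt/mysql-snap
lvremove -f /dev/vg0/mysql-snap
```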

    Additionally:

    • you can use a master-slave setup to replicate your production server to either a different machine or a second instance on the same machine, and apply any of the methods above to the replicated copy, leaving your production machine alone. Instead of running the slave continuously, you can also fire it up at regular intervals, let it catch up on the binlog, and switch it off again.
    • I think MySQL Cluster and the enterprise-licensed version ship with more tools, but I have never tried them.
  • 2021-01-03 06:05

    I think mysqldump is the proper way of doing it.
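    For completeness, a minimal dump-and-restore round trip (mydb and mydb_copy are placeholder names; credentials assumed in ~/.my.cnf):

```shell
mysqldump mydb > mydb.sql     # plain logical dump
mysqladmin create mydb_copy   # restore target must exist first
mysql mydb_copy < mydb.sql    # replay the SQL script
```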
