How to re-sync the MySQL DB if master and slave have different databases in case of MySQL replication?

小蘑菇 2020-11-29 14:20

MySQL Server1 is running as MASTER.
MySQL Server2 is running as SLAVE.

Now DB replication is happening from Server1 to Server2, but the two servers have drifted out of sync. How can the slave be re-synced with the master's data?

14 answers
  • 2020-11-29 14:47

    Unless you are writing directly to the slave (Server2), the only problem should be that Server2 is missing any updates that have happened since it was disconnected. Simply restarting the slave with "START SLAVE;" should get everything back up to speed.
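    A minimal sketch of that quick path (assuming client credentials for the slave are stored in ~/.my.cnf, so no -u/-p flags are needed):

```shell
# On the slave: resume replication and watch it catch up.
mysql -e "START SLAVE;"
# Seconds_Behind_Master should shrink toward 0 as the slave replays the backlog.
mysql -e "SHOW SLAVE STATUS\G"
```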

  • 2020-11-29 14:48

    I am very late to this question, however I did encounter this problem and, after much searching, I found this information from Bryan Kennedy: http://plusbryan.com/mysql-replication-without-downtime

    On Master take a backup like this:
    mysqldump --skip-lock-tables --single-transaction --flush-logs --hex-blob --master-data=2 -A > ~/dump.sql

    Now, examine the head of the file and jot down the values for MASTER_LOG_FILE and MASTER_LOG_POS. You will need them later: head -n 80 ~/dump.sql | grep "MASTER_LOG"

    Copy the "dump.sql" file over to Slave and restore it: mysql -u mysql-user -p < ~/dump.sql

    Connect to Slave mysql and run a command like this: CHANGE MASTER TO MASTER_HOST='master-server-ip', MASTER_USER='replication-user', MASTER_PASSWORD='replication-user-password', MASTER_LOG_FILE='value from above', MASTER_LOG_POS=value from above; START SLAVE;

    To check the progress of Slave: SHOW SLAVE STATUS;

    If all is well, Last_Error will be blank, and Slave_IO_State will report “Waiting for master to send event”. Look for Seconds_Behind_Master which indicates how far behind it is. YMMV. :)
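    The "jot down the values" step above can also be scripted. A sketch, assuming the dump was taken with --master-data=2, which writes a commented-out CHANGE MASTER line near the top of the file (a sample of that line is shown inline so the extraction is self-contained):

```shell
# The commented header line written by --master-data=2 looks roughly like this:
line="-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=45678;"
# In practice you would grab it from the dump instead:
#   line=$(head -n 80 ~/dump.sql | grep -m1 "CHANGE MASTER TO")

# Pull out the two coordinates the slave needs.
log_file=$(echo "$line" | sed -n "s/.*MASTER_LOG_FILE='\([^']*\)'.*/\1/p")
log_pos=$(echo "$line" | sed -n "s/.*MASTER_LOG_POS=\([0-9]*\).*/\1/p")
echo "file=$log_file pos=$log_pos"
```

    The extracted values are exactly what goes into MASTER_LOG_FILE and MASTER_LOG_POS in the CHANGE MASTER command on the slave.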

  • 2020-11-29 14:55

    The documentation for this at the MySQL site is woefully out of date and riddled with foot-guns (such as interactive_timeout). Issuing FLUSH TABLES WITH READ LOCK as part of your export of the master generally only makes sense when coordinated with a storage/filesystem snapshot such as LVM or zfs.

    If you are going to use mysqldump, you should rely instead on the --master-data option to guard against human error and release the locks on the master as quickly as possible.

    Assume the master is 192.168.100.50 and the slave is 192.168.100.51. Each server has a distinct server-id configured, the master has binary logging enabled, and the slave has read-only=1 in my.cnf.

    To stage the slave to be able to start replication just after importing the dump, issue a CHANGE MASTER command but omit the log file name and position:

    slaveserver> CHANGE MASTER TO MASTER_HOST='192.168.100.50', MASTER_USER='replica', MASTER_PASSWORD='asdmk3qwdq1';
    

    Issue the GRANT on the master for the slave to use:

    masterserver> GRANT REPLICATION SLAVE ON *.* TO 'replica'@'192.168.100.51' IDENTIFIED BY 'asdmk3qwdq1';
    

    Export the master (in screen) using compression and automatically capturing the correct binary log coordinates:

    mysqldump --master-data --all-databases --flush-privileges | gzip -1 > replication.sql.gz
    

    Copy the replication.sql.gz file to the slave and then import it with zcat to the instance of MySQL running on the slave:

    zcat replication.sql.gz | mysql
    

    Start replication by issuing the command to the slave:

    slaveserver> START SLAVE;
    

    Optionally update the /root/.my.cnf on the slave to store the same root password as the master.

    If you are on 5.1+, it is best to first set the master's binlog_format to MIXED or ROW. Beware that row logged events are slow for tables which lack a primary key. This is usually better than the alternative (and default) configuration of binlog_format=statement (on master), since it is less likely to produce the wrong data on the slave.

    If you must (but probably shouldn't) filter replication, do so with the slave options replicate-wild-do-table=dbname.% or replicate-wild-ignore-table=badDB.%, and use only binlog_format=row.

    This process will hold a global lock on the master for the duration of the mysqldump command but will not otherwise impact the master.

    If you are tempted to use mysqldump --master-data --all-databases --single-transaction (because you are only using InnoDB tables), you are perhaps better served using MySQL Enterprise Backup or the open source implementation called xtrabackup (courtesy of Percona).
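    The export, copy, and import steps above can also be collapsed into a single pipeline over ssh, so the compressed dump never lands on disk on either side. A sketch using the example hosts; the ssh user is a placeholder:

```shell
# On the master (inside screen): dump with binlog coordinates embedded,
# compress, and stream straight into the mysqld running on the slave.
mysqldump --master-data --all-databases --flush-privileges \
  | gzip -1 \
  | ssh root@192.168.100.51 "zcat | mysql"
```

    After this finishes, issuing START SLAVE on 192.168.100.51 picks up replication from the coordinates embedded in the dump.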

  • 2020-11-29 14:55

    We use MySQL's master-master replication technique. If one MySQL server (say server 1) is removed from the network, it reconnects itself once the connection is restored, and all the records that were committed on server 2 (which stayed on the network) are transferred over to server 1. The slave thread in MySQL retries connecting to its master every 60 seconds by default. This can be changed, as MySQL has a setting "master_connect_retry=5", where 5 is in seconds. This means we want a retry every 5 seconds.

    But you need to make sure that the server which lost the connection does not make any commits to the database while disconnected, or you will get a duplicate-key error (error code 1062).
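    A sketch of shortening that retry interval. On older servers this was the master-connect-retry option in my.cnf; on later versions it is set per replication channel via CHANGE MASTER, which is what is shown here:

```shell
# On the reconnecting side: retry the master every 5 seconds instead of 60.
mysql -e "STOP SLAVE;
          CHANGE MASTER TO MASTER_CONNECT_RETRY=5;
          START SLAVE;"
```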

  • 2020-11-29 14:56

    I think the Maatkit utilities can help you. You can use mk-table-sync. Please see this link: http://www.maatkit.org/doc/mk-table-sync.html
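    An example invocation (the database and table names are placeholders; note that Maatkit was later merged into Percona Toolkit, where the same tool lives on as pt-table-sync with an equivalent interface):

```shell
# Print the statements that would repair the slave's copy of mydb.mytab,
# comparing it against its own master. Review them, then rerun with
# --execute instead of --print to actually apply the fixes.
mk-table-sync --print --sync-to-master h=slave-host,D=mydb,t=mytab
```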

  • 2020-11-29 14:56

    Following up on David's answer...

    Using SHOW SLAVE STATUS\G will give human-readable output.
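    Because the \G form prints one "Field: value" pair per line, the interesting fields are easy to pull out in a monitoring script. A sketch, with a sample of the status output inlined so the parsing is self-contained:

```shell
# A trimmed sample of what `mysql -e "SHOW SLAVE STATUS\G"` prints:
status="Slave_IO_State: Waiting for master to send event
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0"

# Extract replication lag and count how many of the two threads are running.
behind=$(echo "$status" | sed -n "s/.*Seconds_Behind_Master: //p")
running=$(echo "$status" | grep -c ": Yes")
echo "lag=${behind}s threads_running=${running}/2"
```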
