Backup MySQL Amazon RDS

Backend · unresolved · 6 answers · 1987 views
遇见更好的自我 asked 2021-02-01 06:16

I am trying to set up a replica outside of AWS while the master is running on AWS RDS, and I do not want any downtime on my master. So I set up my slave node, and now I want to back up my c

6 Answers
  • 2021-02-01 06:26

    For the RDS binlog position you can use mydumper with --lock-all-tables; it will use LOCK TABLES ... READ just to get the binlog coordinates and then release it, instead of FLUSH TABLES WITH READ LOCK.

  • 2021-02-01 06:27

    Michael's answer is extremely helpful and focuses on the main sticking point: you simply cannot GRANT the required SUPER privilege on RDS, and therefore you can't use the --master-data flag that would make things so much easier.

    I read that it may be possible to work around this by creating or modifying a Database Parameter Group via the API, but I think using the RDS procedures is a better option.

    The multi-tiered replication approach works well, though, and can include tiers outside RDS/VPC so it's possible to replicate from "Classic" EC2 to VPC using this method.

    A lot of the necessary functionality is only in later releases of MySQL 5.5 and 5.6, and I strongly recommend you run the same version on all the DBs involved in the replication stack. You may therefore have to upgrade your old DB before all of this, which means yet more tedium, replication, and so on.

  • 2021-02-01 06:27

    I faced a similar problem; a quick workaround is:

    1. Create an EBS volume for extra space, or extend the current EBS volume on the EC2 instance (or, if you already have spare space, use that).

    2. Use the mysqldump command without the --master-data or --flush-logs options to generate a complete (full) backup of the DB.

      mysqldump -h hostname --routines -uadmin -p12344 test_db > filename.sql

    Here admin is the user name, 12344 is the password, and test_db is the database name.

    The above backs up one single DB; to back up all databases, specify --all-databases instead of listing DB names.

    3. Create a cron job for this command to run once a day; it will automatically generate the dump.

    Please note that this will incur extra cost if your DB size is huge, as it creates a complete DB dump each time.
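A sketch of what the cron job in step 3 might look like, reusing the hypothetical host, user, and database from the example above (storing credentials in ~/.my.cnf is safer than putting the password in the crontab):

```shell
# Hypothetical crontab entry: dump test_db daily at 02:30 into a
# date-stamped file (note the % escaping that cron requires).
30 2 * * * mysqldump -h hostname -u admin --routines test_db > /mnt/backup/test_db-$(date +\%F).sql
```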

    Hope this helps.

  • 2021-02-01 06:28

    Either things have changed since @Michael - sqlbot's response, or there is a misunderstanding going on here (it could be on my part).

    You can use COPY to import a CSV file into RDS, at least on the PostgreSQL version; you just need to use FROM STDIN instead of naming the file directly, which means you end up piping things like:

    cat data.csv | psql postgresql://server:5432/mydb -U user -c "COPY \"mytable\" FROM STDIN DELIMITER ',' "
    
  • 2021-02-01 06:49

    Thanks Michael. I think the most correct solution, and the one recommended by AWS, is to do the replication using a read replica as the source, as explained here.

    Having an RDS master, an RDS read replica, and an instance with MySQL ready, the steps to get an external slave are:

    1. On the master, increase the binlog retention period.

    mysql> CALL mysql.rds_set_configuration('binlog retention hours', 12);

    2. On the read replica, stop replication to avoid changes during the backup.

    mysql> CALL mysql.rds_stop_replication;

    3. On the read replica, note the binlog status (Master_Log_File and Read_Master_Log_Pos).

    mysql> SHOW SLAVE STATUS;

    4. On the server instance, take a backup and import it (using mydumper, as suggested by Max, can speed up the process).

    mysqldump -h RDS_READ_REPLICA_IP -u root -p YOUR_DATABASE > backup.sql

    mysql -u root -p YOUR_DATABASE < backup.sql

    5. On the server instance, set it up as a slave of the RDS master.

    mysql> CHANGE MASTER TO MASTER_HOST='RDS_MASTER_IP',MASTER_USER='myrepladmin', MASTER_PASSWORD='pass', MASTER_LOG_FILE='mysql-bin-changelog.313534', MASTER_LOG_POS=1097;

    Replace MASTER_LOG_FILE and MASTER_LOG_POS with the Master_Log_File and Read_Master_Log_Pos values you saved before. You also need a user on the RDS master to be used for slave replication.

    mysql> START SLAVE;

    6. On the server instance, check whether replication succeeded.

    mysql> SHOW SLAVE STATUS;

    7. On the RDS read replica, resume replication.

    mysql> CALL mysql.rds_start_replication;
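The glue between steps 3 and 5 can be sketched in a few lines: pull Master_Log_File and Read_Master_Log_Pos out of the SHOW SLAVE STATUS output saved in step 3, and build the CHANGE MASTER TO statement for step 5. The field names are real MySQL output fields; the helper functions themselves are hypothetical:

```python
import re

def replication_coords(slave_status_text):
    """Extract (Master_Log_File, Read_Master_Log_Pos) from SHOW SLAVE STATUS output."""
    log_file = re.search(r"Master_Log_File:\s*(\S+)", slave_status_text).group(1)
    log_pos = int(re.search(r"Read_Master_Log_Pos:\s*(\d+)", slave_status_text).group(1))
    return log_file, log_pos

def change_master_sql(host, user, password, coords):
    """Build the CHANGE MASTER TO statement for step 5 from the saved coordinates."""
    log_file, log_pos = coords
    return (
        "CHANGE MASTER TO MASTER_HOST='{}', MASTER_USER='{}', "
        "MASTER_PASSWORD='{}', MASTER_LOG_FILE='{}', MASTER_LOG_POS={};"
    ).format(host, user, password, log_file, log_pos)

# Abbreviated sample of SHOW SLAVE STATUS\G output from the read replica:
status = """
             Slave_IO_State: Waiting for master to send event
            Master_Log_File: mysql-bin-changelog.313534
        Read_Master_Log_Pos: 1097
"""
coords = replication_coords(status)
print(change_master_sql("RDS_MASTER_IP", "myrepladmin", "pass", coords))
```

Running this prints the same CHANGE MASTER TO statement shown in step 5 above.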
  • 2021-02-01 06:53

    RDS does not allow even the master user the SUPER privilege, which is required in order to execute FLUSH TABLES WITH READ LOCK. (This is an unfortunate limitation of RDS.)

    The failing statement is being generated by the --master-data option, which is, of course, necessary if you want to be able to learn the precise binlog coordinates where the backup begins. FLUSH TABLES WITH READ LOCK acquires a global read lock on all tables, which allows mysqldump to START TRANSACTION WITH CONSISTENT SNAPSHOT (as it does with --single-transaction) and then SHOW MASTER STATUS to obtain the binary log coordinates, after which it releases the global read lock because it has a transaction that will keep the visible data in a state consistent with that log position.
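In outline, the statement sequence mysqldump runs with --single-transaction and --master-data looks like this (a sketch of the mechanism, not a script to run by hand):

```sql
FLUSH TABLES WITH READ LOCK;      -- the step that fails on RDS: needs a global lock
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION WITH CONSISTENT SNAPSHOT;  -- freeze a consistent view of the data
SHOW MASTER STATUS;               -- record File and Position (the binlog coordinates)
UNLOCK TABLES;                    -- writes resume; the open snapshot keeps the dump consistent
-- ... the dump then proceeds inside the open transaction ...
```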

    RDS breaks this mechanism by denying the SUPER privilege and providing no obvious workaround.

    There are some hacky options available to work around this, none of which may be particularly attractive:

    • do the backup during a period of low traffic. If the binlog coordinates are the same when you start the backup and after the backup has begun writing data to the output file or destination server (assuming you used --single-transaction), then this works, because you know the coordinates did not change while the process was running.

    • observe the binlog position on the master right before starting the backup, and use those coordinates with CHANGE MASTER TO. If your master's binlog_format is set to ROW, this should work, though you will likely have to skip past a few initial errors; you should not see any after that. This works because row-based replication is very deterministic and stops if it tries to insert something that is already there or delete something that is already gone. Once past the errors, you are at the true binlog coordinates where the consistent snapshot actually started.

    • as in the previous item, but, after restoring the backup try to determine the correct position by using mysqlbinlog --base64-output=decode-rows --verbose to read the master's binlog at the coordinates you obtained, checking your new slave to see which of the events must have already been executed before the snapshot actually started, and using the coordinates determined this way to CHANGE MASTER TO.

    • use an external process to obtain a read lock on each and every table on the server, which will stop all writes; observe that the binlog position from SHOW MASTER STATUS has stopped incrementing, start the backup, and release those locks.
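For the first and last options above, the underlying check is the same: read the (File, Position) pair from SHOW MASTER STATUS repeatedly and proceed only once it is identical across several readings. A minimal sketch of that polling loop; the function and its stubbed fetcher are made up, and a real fetcher would run SHOW MASTER STATUS over a live connection:

```python
import time

def wait_for_stable_coords(fetch_status, checks=3, interval=1.0):
    """Poll a SHOW MASTER STATUS fetcher until the (binlog file, position)
    pair is identical for `checks` consecutive readings, meaning writes have
    stopped and the coordinates are safe to record.

    fetch_status: callable returning a (binlog_file, position) tuple."""
    last = fetch_status()
    seen = 1
    while seen < checks:
        time.sleep(interval)
        current = fetch_status()
        if current == last:
            seen += 1
        else:
            last, seen = current, 1
    return last

# Usage with a stubbed fetcher standing in for real SHOW MASTER STATUS calls:
readings = iter([("binlog.000314", 120), ("binlog.000314", 154),
                 ("binlog.000314", 154), ("binlog.000314", 154)])
print(wait_for_stable_coords(lambda: next(readings), checks=3, interval=0.0))
# prints ('binlog.000314', 154)
```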

    If you use any of these approaches other than perhaps the last one, it's especially critical that you do table comparisons to be certain your slave is identical to the master once it is running. If you hit subsequent replication errors... then it wasn't.

    Probably the safest option -- but also maybe the most annoying, since it seems like it should not be necessary -- is to begin by creating an RDS read replica of your RDS master. Once it is up and synchronized with the master, you can stop replication on the read replica by executing an RDS-provided stored procedure, CALL mysql.rds_stop_replication, introduced in RDS 5.6.13 and 5.5.33, which does not require the SUPER privilege.

    With replication on the RDS read replica stopped, take your mysqldump from the read replica, which now holds an unchanging data set as of a specific set of master coordinates. Restore this backup to your off-site slave, and then use the read replica's Exec_Master_Log_Pos and Relay_Master_Log_File from SHOW SLAVE STATUS as your CHANGE MASTER TO coordinates.

    The value shown in Exec_Master_Log_Pos on a slave is the start of the next transaction or event to be processed, and that's exactly where your new slave needs to start reading on the master.

    Then you can decommission the RDS read replica once your external slave is up and running.
