Analyzing Data Loss When Restoring MySQL Master from Replica Backup Using GTID
This article explains a MySQL replication experiment where a master is restored from an outdated backup, leading to missing GTID transactions due to server_id filtering, and provides analysis and solutions such as changing server_id or enabling replicate-same-server-id to prevent data loss.
In daily MySQL operations, backups are routinely used to build new slaves or to recover inconsistent instances. However, when a failed master is restored from an outdated backup and re-added to the cluster, some data can be silently lost.
Experiment: set up a master-slave environment, create databases test1 through test3, and take a logical backup with mysqldump -uroot -p123456 --single-transaction --master-data=2 --all-databases > dump.sql. Then write new data on the master, simulate a disk failure, promote the slave to be the new master, and finally restore the old master from the (now outdated) backup and rejoin it as a slave.
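The failover and restore steps above can be sketched as follows; the hostname, credentials, and replication user are illustrative assumptions, not values from the original experiment:

```shell
# 1. Back up the original master (the dump header records the GTID set
#    as of backup time, via --master-data=2).
mysqldump -uroot -p123456 --single-transaction --master-data=2 \
  --all-databases > dump.sql

# 2. After the simulated disk failure, promote the slave to master:
mysql -uroot -p123456 -e "STOP SLAVE; RESET SLAVE ALL;"

# 3. Restore the old master from the stale dump, then point it at the
#    new master using GTID auto-positioning:
mysql -uroot -p123456 < dump.sql
mysql -uroot -p123456 -e "CHANGE MASTER TO MASTER_HOST='new-master', \
  MASTER_USER='repl', MASTER_PASSWORD='repl', MASTER_AUTO_POSITION=1; \
  START SLAVE;"
```

Because the dump predates the last writes on the old master, the restored instance must fetch those missing transactions back from the new master's binlog, which is where the server_id filter bites.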
After the restoration, the old master was missing the transaction with GTID eefac7d8-2370-11e9-bfeb-000c29d74445:3. The new master's binlog contains this transaction, but it never reached the old master's relay log: the IO thread filtered it out because the server_id recorded in the binlog event matched the old master's own server_id.
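One way to confirm where the transaction was lost (a sketch; the binlog file name and paths are assumptions, the GTID is the one from this experiment):

```shell
# On the new master: the transaction should be present in its binlog.
mysqlbinlog --include-gtids='eefac7d8-2370-11e9-bfeb-000c29d74445:3' \
  mysql-bin.000002

# On the restored old master: the GTID is absent from both the executed
# set and the retrieved set, showing the IO thread never wrote it to the
# relay log at all (rather than the SQL thread skipping it).
mysql -uroot -p123456 -e "SELECT @@GLOBAL.gtid_executed"
mysql -uroot -p123456 -e "SHOW SLAVE STATUS\G" | grep -E 'Retrieved_Gtid_Set|Executed_Gtid_Set'
```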
Analysis: the backup did not contain the latest GTID set, so the restored instance had to catch up from the new master's binlog. By default, a replica's IO thread discards any event whose server_id equals its own, a guard against circular replication. The missing transaction was originally executed on the old master, so it carried the old master's server_id and was silently filtered out, producing the data loss.
Conclusion: after restoring a failed instance from backup, change its server_id so it does not clash with the server_id recorded in events it still needs to fetch, or enable --replicate-same-server-id to disable the filter (this option cannot be combined with log_slave_updates, as that combination risks infinite replication loops). Either measure prevents the data loss.
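A minimal configuration sketch of the two fixes; the server_id value 1003 is an example and assumes it is unused elsewhere in the topology:

```ini
# Option 1 (preferred): give the restored instance a fresh server_id in
# my.cnf, then restart it (or change it dynamically with
# SET GLOBAL server_id = 1003; before starting replication).
[mysqld]
server_id = 1003

# Option 2: keep the old server_id but disable the IO-thread filter.
# Note: cannot be used together with log_slave_updates.
# replicate-same-server-id = 1
```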
Aikesheng Open Source Community
The Aikesheng Open Source Community provides stable, enterprise-grade open-source MySQL tools and services, releases a premium open-source component each year (on 1024), and maintains them on an ongoing basis.