How to Quickly Recover a Single MySQL Table with XtraBackup
This guide explains how to use Percona XtraBackup for fast single‑table recovery in both local full‑backup and remote streaming scenarios, detailing preparation steps, command examples, and essential configuration tweaks to minimize downtime and data loss.
Background
Database administrators often face accidental table deletions that require immediate recovery. Traditional full backups of large instances can take hours to copy, synchronize, and restore, causing significant downtime.
This article demonstrates how to leverage Percona XtraBackup to perform rapid single‑table recovery, reducing loss and restoration time.
Local Full‑Backup Recovery
Perform a regular XtraBackup to a local directory and run --apply-log. Then package each table's files individually (e.g., dbname.tablename.tar).
<code>## Back up the data
innobackupex --user=users --password=xxxxxxx --socket=/xxx/mysql.sock \
--defaults-file=/xxxxx/my.cnf /data/backup_dir/mysql 2>> /var/log/mysqlbackup.log
## Apply the log
innobackupex --apply-log /data/backup_dir/mysql
## Package each table (add compression/encryption as needed);
## $tables holds the list of table names in the database
for tablename in $tables; do tar -cvf dbname.$tablename.tar dbname/$tablename.*; done</code>Remember to run --apply-log before packaging.
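The packaging loop assumes $tables is already populated. A self-contained sketch of one way to derive the table list from the prepared backup and package each table (a throwaway demo directory and a stand-in table `t1` replace the real `/data/backup_dir/mysql` contents here):

```shell
# Sketch only: demo directory and empty .frm/.ibd files stand in for a
# real backup that --apply-log has already been run against.
BACKUP_DIR=$(mktemp -d)
DB=dbname
mkdir -p "$BACKUP_DIR/$DB"
touch "$BACKUP_DIR/$DB/t1.frm" "$BACKUP_DIR/$DB/t1.ibd"

cd "$BACKUP_DIR"
# Derive the table list from the per-table .ibd files present
tables=$(basename -a -s .ibd "$DB"/*.ibd)
for tablename in $tables; do
  tar -cf "$DB.$tablename.tar" "$DB/$tablename".*
done
ls "$DB".*.tar    # one tarball per table
```

Deriving the list from the `.ibd` files on disk keeps the tarballs in sync with what the backup actually contains, rather than relying on a separately maintained table list.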
During recovery, copy only the necessary base files and the packaged table files, extract them, and add replicate-wild-do-table=dbname.tablename to the MySQL configuration. This avoids transferring and decompressing unnecessary data.
Copy MySQL data directory files
Copy ibdata* files
Copy ib_logfile* files
Copy my.cnf
Copy XtraBackup log files
Copy the target table files and directory structure
Add replicate-wild-do-table=dbname.tablename to the config
Add skip-slave-start to prevent premature replication
Decrypt/decompress if needed, then start MySQL
Change to the desired recovery point or GTID
Ensure skip-slave-start is set to avoid unwanted replication during recovery.
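Assuming a standard my.cnf layout, the two replication settings above would look like this (dbname.tablename is a placeholder for the recovered table):

```ini
[mysqld]
# Replay binlog events only for the recovered table
replicate-wild-do-table=dbname.tablename
# Do not start the replication threads automatically at server startup
skip-slave-start
```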
Remote Streaming Backup Recovery
When local storage or bandwidth is limited, use XtraBackup’s --stream option (tar or xbstream). This example uses xbstream for efficient remote backup.
<code>## Stream backup directly to remote storage
innobackupex --user=users --password=xxxxxxx --socket=/xxx/mysql.sock \
--defaults-file=/xxxxx/my.cnf --stream=xbstream --compress \
/data/backup_dir/mysql 2>> /var/log/mysqlbackup.log |
ssh [email protected] "xbstream -x -C /data/backup_dir1/mysql/"</code>Recovery steps mirror the local method: copy required base files and the streamed table files, then apply the same configuration changes.
Copy MySQL data directory files
Copy ibdata* files
Copy ib_logfile* files
Copy my.cnf
Copy XtraBackup log files
Copy the target table files and directory structure
Add replicate-wild-do-table=dbname.tablename
Add skip-slave-start
Decompress if needed:
<code>innobackupex --decompress --parallel=8 /data/backup_dir1/mysql</code>Run --apply-log:
<code>innobackupex --apply-log /data/backup_dir1/mysql</code>Change to the desired recovery point or GTID.
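One way to stop replay at the desired recovery point is START SLAVE UNTIL SQL_AFTER_GTIDS. A sketch that builds the statement (the GTID set is a placeholder for the transaction range just before the accidental drop):

```shell
# Placeholder GTID set: substitute the source server's UUID and the
# transaction range ending just before the accidental DROP/DELETE.
target_gtid='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx:1-100'
until_sql="START SLAVE UNTIL SQL_AFTER_GTIDS = '${target_gtid}';"
echo "$until_sql"
# Once the instance is up (skip-slave-start prevented auto-start), run:
#   mysql -uroot -p -e "$until_sql"
```

Because skip-slave-start is in the config, replication only begins when you issue this statement, and it halts automatically once the target GTID set has been applied.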
Errors from --apply-log about missing tablespaces can be ignored: only the needed table's files were restored, so the other tables exist solely in InnoDB's internal data dictionary.
Conclusion
Single‑table fast recovery dramatically reduces downtime by restoring only the essential data. Proper preparation—knowing backup locations, binlog positions, and GTIDs—is crucial for a smooth recovery process.
360 Zhihui Cloud Developer
360 Zhihui Cloud is an enterprise open service platform that aims to "aggregate data value and empower an intelligent future," leveraging 360's extensive product and technology resources to deliver platform services to customers.