
Two Approaches to Synchronize MySQL Data to Redis Cache: UDF Trigger and Binlog Parsing (Canal)

This article explains two methods for keeping MySQL data in sync with a Redis cache: MySQL triggers combined with a user-defined function (UDF), and parsing the MySQL binlog with Alibaba's Canal. It details their principles, implementation steps, advantages, limitations, and practical deployment considerations.

IT Xianyu

The article introduces two practical solutions for synchronizing data from a MySQL database to a Redis cache, aiming to improve read performance by serving frequently accessed data directly from Redis.

Solution 1 (UDF): A MySQL trigger monitors data-modifying operations. When a client (e.g., a Node.js server) writes to MySQL, the trigger fires and calls a custom User-Defined Function (UDF) that writes the same data into Redis, achieving real-time synchronization. This approach suits read-heavy, low-write-concurrency scenarios, but it adds overhead on write-intensive tables because the trigger fires on every modifying statement and can degrade write performance.

The article provides a demonstration case with screenshots of the MySQL table schema, the UDF source code, and the trigger definition, illustrating how the components interact.
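Since the screenshots are not reproduced here, the effect of the trigger-plus-UDF mechanism can be sketched in Python. A production UDF is written in C (commonly linked against hiredis) and invoked from the trigger body; the sketch below is only a stand-in showing what such a `redis_set` call does on the wire, including the RESP protocol frame it sends to Redis. The function names are illustrative, not the article's actual code.

```python
# Stand-in for a redis_set-style MySQL UDF: open a socket to Redis and send
# a SET command encoded in the RESP protocol. The real UDF is C code loaded
# into mysqld; the wire format shown here is the same one it would speak.
import socket


def encode_resp_command(*parts: str) -> bytes:
    """Encode a Redis command as a RESP array of bulk strings."""
    out = [f"*{len(parts)}\r\n".encode()]
    for part in parts:
        data = part.encode("utf-8")
        out.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(out)


def redis_set(key: str, value: str,
              host: str = "127.0.0.1", port: int = 6379) -> None:
    """Send SET key value to Redis; this is what the trigger-invoked UDF does."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(encode_resp_command("SET", key, value))
        sock.recv(64)  # expect b"+OK\r\n" from Redis
```

A trigger would call the UDF once per affected row, so the cost of one such round-trip is paid inside every write transaction, which is the performance caveat noted above.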

Solution 2 (Binlog Parsing): This approach leverages MySQL's binary log (binlog) replication mechanism. The master writes all changes to the binlog; a replica (here, a custom process) reads the binlog, parses the events, and writes the extracted data into Redis, effectively treating Redis as a downstream replica. Implementing the parser by hand requires a deep understanding of the binlog formats (Statement, Row, and Mixed) and is complex in practice.
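To make the replica idea concrete, here is a minimal Python sketch of the translation step. The event shape (a dict with `schema`, `table`, `type`, and `row`) is an assumption for illustration; real events must first be decoded from the ROW-format binlog stream by a parser such as Canal or a binlog client library.

```python
# Translate one already-parsed, row-level binlog event into the Redis
# command the cache writer should issue. The event dict shape is a
# hypothetical stand-in for the output of a binlog parser.
def event_to_redis_command(event: dict) -> tuple:
    """Map a row change to a (command, key, ...) tuple for Redis."""
    key = f"{event['schema']}:{event['table']}:{event['row']['id']}"
    if event["type"] in ("INSERT", "UPDATE"):
        return ("HSET", key, event["row"])   # cache the latest row image
    if event["type"] == "DELETE":
        return ("DEL", key)                  # drop the stale cache entry
    raise ValueError(f"unsupported event type: {event['type']}")
```

Because the events come from the binlog rather than from triggers, this translation runs in a separate process and adds no latency to the write path in MySQL itself.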

Alibaba’s open-source project Canal is presented as a ready-made framework for binlog parsing. Canal mimics a MySQL slave, receives the binlog stream, parses it into structured events, and provides a pipeline consisting of eventParser, eventSink, eventStore, and metaManager. Developers can plug in custom logic to consume the parsed data and write it into Redis.

The article also outlines Canal’s architecture with diagrams, explains the data flow (parse → sink → store → custom Redis writer), and notes that most of the heavy lifting (parse/sink) is handled by the framework.
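The parse → sink → store → consumer flow can be illustrated with plain Python stand-ins. Canal itself is a Java framework; the stage names below match its components, but the bodies are simplified stand-ins, and the table filter and event shape are assumptions for the example.

```python
# Toy model of Canal's pipeline: eventParser -> eventSink -> eventStore ->
# custom consumer. Only the last stage (the Redis writer) is user code in
# a real Canal deployment.
from collections import deque

store = deque()  # eventStore stand-in: a buffer the client consumes from


def event_parser(raw_binlog_entries):
    """eventParser: turn raw binlog entries into structured change events."""
    return ({"table": t, "op": op, "row": row} for t, op, row in raw_binlog_entries)


def event_sink(events):
    """eventSink: filter/route events, e.g. keep only the tables we cache."""
    return (e for e in events if e["table"] in {"users", "orders"})


def run_pipeline(raw_binlog_entries, redis_writer):
    """Drive parse -> sink -> store, handing each event to the Redis writer."""
    for event in event_sink(event_parser(raw_binlog_entries)):
        store.append(event)            # buffered until consumed
        redis_writer(store.popleft())  # custom logic: write into Redis
```

As the article notes, in real use the first three stages come for free from the framework; only `redis_writer` needs to be written.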

Finally, an alternative “push-first” approach is mentioned, in which data is written to Redis before being persisted to MySQL. The article warns that this is unsafe, because a Redis outage could lead to data loss.
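The ordering risk can be shown with in-memory stand-ins for Redis and MySQL. If the process dies between the two writes, the two orders fail differently: with MySQL first, the cache is merely missing an entry that a later read can refill; with Redis first, the cache can serve a value that was never durably stored. The dict-based stubs and function names are illustrative only.

```python
# Contrast the two write orders using plain dicts as stand-ins for the
# cache (Redis) and the database (MySQL). crash_between simulates the
# process dying between the two writes.
def push_first(cache: dict, db: dict, key, value, crash_between=False):
    cache[key] = value  # Redis gets the value first...
    if crash_between:
        return          # ...crash: cache now holds data MySQL never stored
    db[key] = value


def persist_first(cache: dict, db: dict, key, value, crash_between=False):
    db[key] = value     # MySQL, the source of truth, is written first
    if crash_between:
        return          # crash: cache is only stale, not wrong
    cache[key] = value
```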

Tags: Redis, MySQL, Binlog, Canal, UDF, Database Replication, Cache Synchronization
Written by

IT Xianyu

We share common IT technologies (Java, Web, SQL, etc.) and practical applications of emerging software development techniques. New articles are posted daily. Follow IT Xianyu to stay ahead in tech. The IT Xianyu series is being regularly updated.
