Design and Performance Evaluation of a Scalable Like System
This article analyzes common like‑system business scenarios, derives the functional requirements, proposes three architectural solutions (a Redis‑based priority queue, a relational database, and a hybrid cache‑MQ‑DB design), presents sample implementation code, and evaluates each approach with performance tests to guide practical backend design choices.
We recently received a requirement for a like system. Recalling similar past projects, the core needs were largely the same even though the emphasis differed, which prompted this systematic review of typical like‑system characteristics and a generic design.
Business Scenarios
Count‑centric displays where users are highly sensitive to count changes (e.g., Zhihu, Bilibili, Quora).
Timeline‑driven scenarios requiring both count and user‑change visibility (e.g., WeChat Moments, Weibo).
Highly extensible scenarios with low concurrency demands.
Common features across these scenarios include high‑frequency count access, strong availability and consistency requirements, user‑centric queries (history, common likers), and time‑sensitive ordering.
Derived Functional Requirements
Query total like count.
Check whether a specific user has liked.
Like / unlike operations.
Query like history.
Technical needs translate to implementing a counter, maintaining user‑item relationships, and recording timestamps.
Solution 1 – Priority Queue (Redis ZSet)
Use a unique key for each liked object to create a ZSet.
Store user IDs as ZSet members, with the like timestamp as the score.
For user‑centric queries, build a separate ZSet keyed by user.
Operations and complexities:
Get like count: zcard – O(1).
Check like status: zscore – O(1).
Like / unlike: zadd / zrem – O(log N).
Query history: zrangebyscore – O(log N + M).
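To make the data model concrete, here is a minimal in‑memory sketch of the per‑object ZSet (member = user ID, score = like timestamp). The class and method names are illustrative assumptions; a real deployment would issue ZADD/ZREM/ZCARD/ZSCORE/ZRANGEBYSCORE against Redis instead of keeping a Java map.

```java
import java.util.*;
import java.util.stream.*;

// In-memory sketch of one liked object's ZSet: member = user ID, score = timestamp.
class LikeZSet {
    private final Map<String, Long> scoreByUser = new HashMap<>();

    long likeCount() { return scoreByUser.size(); }                              // ZCARD
    boolean hasLiked(String userId) { return scoreByUser.containsKey(userId); }  // ZSCORE != null
    void like(String userId, long ts) { scoreByUser.put(userId, ts); }           // ZADD
    void unlike(String userId) { scoreByUser.remove(userId); }                   // ZREM

    // ZRANGEBYSCORE from..to: users who liked within the window, oldest first.
    List<String> history(long from, long to) {
        return scoreByUser.entrySet().stream()
                .filter(e -> e.getValue() >= from && e.getValue() <= to)
                .sorted(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }
}
```

Each method maps one‑to‑one onto the Redis command in the comments, so the complexities above carry over directly (Redis keeps a skip list alongside the hash table, which is what makes zadd/zrem O(log N)).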
Advantages: high availability, strong consistency, fast operations, and easy horizontal scaling with Redis Cluster. Drawbacks: possible data loss on failure (usually acceptable for likes) and memory consumption that grows linearly with data volume.
Solution 2 – Relational Database (MySQL)
Store subject ID, user ID, timestamp, and optional metadata in a table.
Create a unique index on (subject_id, user_id) and additional indexes for user‑centric and time‑based queries.
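An illustrative DDL for such a table (the table name, column names, and types are assumptions, not taken from a concrete implementation):

```sql
-- Illustrative schema sketch for Solution 2
CREATE TABLE upvote (
    id         BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    subject_id BIGINT UNSIGNED NOT NULL,
    user_id    BIGINT UNSIGNED NOT NULL,
    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    UNIQUE KEY uk_subject_user (subject_id, user_id),   -- one like per user per subject
    KEY idx_user_time (user_id, created_at),            -- user-centric history queries
    KEY idx_subject_time (subject_id, created_at)       -- time-ordered queries per subject
) ENGINE=InnoDB;
```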
Operation complexities:
Like count: aggregate on subject index – O(log N + M).
Check like status: indexed lookup – O(log N).
Like / unlike: insert / delete – O(log N).
Query history: indexed range query – O(log N + M) for M matching rows.
Advantages: strong consistency, no data loss, ample storage for metadata. Drawbacks: lower concurrency, higher implementation cost for scaling.
Solution 3 – Cache + Message Queue + Relational Database (Redis + Kafka + MySQL)
Store total count in a Redis string using INCR for atomic increments.
Use a Redis bitmap as a Bloom‑style filter to record whether a user has liked; since a plain Bloom filter cannot delete members, unlike operations require a counting filter (or a definitive database fallback for positive hits).
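One way to compute a user's bit offsets for the filter is double hashing; the class name, filter size M, and hash count K below are illustrative assumptions:

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

// Sketch: derive K bit offsets in an M-bit Bloom filter via double hashing.
class BloomOffsets {
    static final int M = 1 << 27;   // filter size in bits (~16 MB) — an assumption
    static final int K = 3;         // number of hash functions — an assumption

    static long[] offsets(String userId) {
        CRC32 crc = new CRC32();
        crc.update(userId.getBytes(StandardCharsets.UTF_8));
        long h1 = crc.getValue();                         // first hash: CRC32
        // Second hash: scrambled hashCode, forced odd so the stride never collapses.
        long h2 = Math.floorMod(userId.hashCode() * 0x9E3779B97F4A7C15L, (long) M) | 1;
        long[] out = new long[K];
        for (int i = 0; i < K; i++) {
            out[i] = Math.floorMod(h1 + i * h2, (long) M); // i-th offset in [0, M)
        }
        return out;
    }
}
```

Each like then issues K SETBIT calls on these offsets, and a status check reads the same K bits with GETBIT.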
Publish each like/unlike event to Kafka for asynchronous persistence.
Consume Kafka messages to write detailed records into MySQL for low‑frequency queries (history, analytics).
Operation complexities:
Like count: GET – O(1).
Check like status: GETBIT – O(1) (a positive hit may be a false positive, so it is confirmed against the DB).
Like / unlike: INCR/DECR + SETBIT + Kafka publish – O(1).
Query history: MySQL indexed query – O(log N + M) for M matching rows.
Advantages: combines high availability of Redis with strong consistency of MySQL, lower memory cost than pure Redis, good horizontal scalability. Drawbacks: increased implementation complexity and potential latency for recent likes due to asynchronous persistence.
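The write path and the false‑positive‑aware status check of this hybrid design can be sketched in memory as follows (a BitSet stands in for the Redis bitmap, a HashSet for the MySQL table, and the toy hash functions are assumptions):

```java
import java.util.*;

// In-memory sketch of the Solution 3 status check: the filter answers
// "definitely not liked" instantly; any positive is confirmed against
// the authoritative store (MySQL in the real design).
class HybridStatusCheck {
    private final BitSet filter = new BitSet(1 << 16); // stand-in for the Redis bitmap
    private final Set<String> db = new HashSet<>();    // stand-in for MySQL records

    private int[] offsets(String userId) {             // two toy hash functions
        int h = userId.hashCode();
        return new int[]{Math.floorMod(h, 1 << 16), Math.floorMod(h * 31 + 17, 1 << 16)};
    }

    void like(String userId) {
        for (int o : offsets(userId)) filter.set(o);   // SETBIT in the real design
        db.add(userId);                                // persisted via Kafka consumer in the real design
    }

    boolean hasLiked(String userId) {
        for (int o : offsets(userId))
            if (!filter.get(o)) return false;          // GETBIT miss: definitely not liked
        return db.contains(userId);                    // positive may be false: confirm in DB
    }
}
```

A negative from the filter is authoritative and needs no DB round trip, which keeps the common "not liked" case O(1); only positive hits pay the database lookup.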
Performance Testing
Hardware: Intel i5 2.7 GHz 4‑core, 16 GB RAM (≈7 GB used by system + threads). Tests covered sequential writes, concurrent writes, concurrent count queries, and concurrent status checks for each solution.
Redis (Solution 1) Results
Sequential write (1 × 10⁷ items): 7,000 QPS, 1.16 GB memory.
Concurrent write (100 threads): 27,700 QPS.
Count query (100 threads): 31,600 QPS.
Status query (100 threads): 28,500 QPS.
MySQL (Solution 2) Results
Sequential write: ~667 QPS.
Concurrent write (100 threads): ~2,100 QPS.
Count query: ~1,500 QPS.
Status query: ~1,400 QPS.
Hybrid (Solution 3) Results
Sequential write: ~2,200 QPS.
Concurrent write: ~5,700 QPS.
Count query: ~33,800 QPS.
Status query: ~13,400 QPS.
Notes: Redis write throughput saturates around 10 concurrent workers; MySQL benefits from higher concurrency; Kafka is not a bottleneck.
Sample Implementation Code
public boolean upvote(Upvote upvote) {
    // Atomically increment the total like count for this topic
    redisTemplate.opsForValue().increment("demo.community:upvote:count" + upvote.getTopicId());
    // Set the user's Bloom filter bits so later status checks can short-circuit
    int[] offsets = userHash(upvote.getUserId());
    for (long offset : offsets) {
        redisTemplate.opsForValue().setBit("demo.community:upvote:user:filter" + upvote.getTopicId(), offset, true);
    }
    // Publish the event to Kafka for asynchronous persistence into MySQL
    kafkaTemplate.send("demo-community-vote", gson.toJson(upvote));
    return true;
}

public boolean upVoted(Upvote upvote) {
    int[] offsets = userHash(upvote.getUserId());
    for (long offset : offsets) {
        if (!Boolean.TRUE.equals(redisTemplate.opsForValue().getBit(UPVOTE_USER_FILTER_PREFIX + upvote.getTopicId(), offset))) {
            return false; // Any unset bit means the user definitely has not liked
        }
    }
    // All bits set may still be a false positive: confirm against MySQL
    return upvoteMysqlDAO.findOne(Example.of(upvote)).isPresent();
}

Conclusion & Recommendations
For most scenarios with up to tens of millions of likes, Solution 1 (Redis ZSet) offers the simplest implementation with excellent performance.
When data volume and growth rate are high, Solution 3 provides better scalability and consistency while keeping costs reasonable.
For low‑traffic applications (≤ 100 k users), Solution 2 (MySQL) is sufficient and cost‑effective.
Hybrid approaches can further reduce memory pressure by archiving cold data from Redis to MySQL and using Bloom filters to route queries.
Practical experience shows that real‑world testing often reveals gaps between design assumptions and actual behavior (e.g., Redis memory overhead, Bloom filter write amplification, connection‑pool tuning). Iterative testing and tuning remain essential.
YunZhu Net Technology Team