Performance Challenge Championship (PCC) – High‑Concurrency Like Service Competition Overview and Solutions
The PCC (Performance Challenge Championship) was an offline competition in which engineers built a high-throughput "like" service. Teams explored architectures such as OpenResty with Lua, Go micro-services, and various caching strategies; submissions were stress-tested with Tsung, winners were showcased, and the code was released on GitHub.
PCC (Performance Challenge Championship) is an event organized by the High Availability Architecture community to deepen engineers' understanding of high‑concurrency programming through an offline competition.
Participating engineers gain experience by completing a concrete technical goal, learning advanced architectural ideas, and receiving feedback from experienced judges on real‑world high‑throughput system requirements.
Competition Method
Implement a Facebook-style "like" feature that prevents duplicate likes: a user's second attempt to like the same object returns an error.
Provide an isLike API to check whether the current user has liked a given object.
Expose the total like count for each object.
Offer a list of users who liked the object, optionally listing the current user's friends first.
Data volume: 10 million new like objects per day and 300,000 queries per second against the like counter.
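The four requirements above can be sketched as a minimal in-memory service. This is an illustrative stand-alone sketch, not any contestant's code; names such as LikeService and friends_of are assumptions, and a real submission would back this with Redis, Pika, or another store rather than Python dicts:

```python
class LikeService:
    """In-memory sketch of the contest API: like (no duplicates),
    isLike, like count, and a friend-prioritized liker list."""

    def __init__(self, friends_of):
        # friends_of: uid -> set of friend uids (assumed to be given)
        self.friends_of = friends_of
        self.likes = {}  # object_id -> dict of uids (preserves like order)

    def like(self, uid, object_id):
        users = self.likes.setdefault(object_id, {})
        if uid in users:
            raise ValueError("duplicate like")  # second attempt must fail
        users[uid] = True

    def is_like(self, uid, object_id):
        return uid in self.likes.get(object_id, {})

    def count(self, object_id):
        return len(self.likes.get(object_id, {}))

    def liked_users(self, uid, object_id):
        users = list(self.likes.get(object_id, {}))
        friends = self.friends_of.get(uid, set())
        # caller's friends first, then everyone else, both in like order
        return ([u for u in users if u in friends] +
                [u for u in users if u not in friends])
```

The hard part of the contest is, of course, making these operations hold up at 300,000 queries per second, which is where the caching and storage choices below come in.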
Before the competition, the problem statement was released; many participants had a working prototype by the first night, using the second day mainly for optimization.
The venue was unusually quiet as contestants focused on code tuning and debugging. By evening, most code was ready and participants awaited the stress test.
Simulated data for the contest can be found at https://github.com/archnotes/PCC/tree/master/data .
Outstanding Submissions
Reference Implementation – Fang Yuan
Project: https://github.com/archnotes/PCC
Runner‑up – Qin Guanri
Architecture: OpenResty + Pika. OpenResty handles high‑concurrency requests with Lua scripts; shared dictionaries cache data. Pika stores user data using ordered sets and hashes.
Project: https://github.com/qinguanri/demo_lua
Runner‑up – Xia Haifeng
Adopted a micro‑service architecture split into article, user, and action services. Communication via gRPC; asynchronous writes through NSQ; data stored in SQL plus a NoSQL cache (SSDB), with Protobuf for serialization.
Project: https://github.com/chideat/pcc
Runner‑up – Chen Gang
Cache design details:
Feed like counter: key like_count:feed_id, value stored as a string; INCR / DECR on like/unlike.
Like list: key like_list:feed_id, value stored as a set; SADD on like, SREM on unlike (set semantics are needed for the intersection and difference operations below).
Friends list: key friends:uid, value a set of friend IDs.
Friend‑like list: key like_friends:feed_id, obtained by intersecting (SINTER) like_list and friends.
Other‑users like list: key like_others:feed_id, obtained by set difference (SDIFF) of like_list and friends.
Performance optimizations include storing only the first 100 likes in cache and pre‑computing separate friend and non‑friend lists per feed.
Implementation stack: MySQL 5.7, Redis, Spring‑Boot.
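The key schema above can be sketched with plain Python sets and counters standing in for Redis. This is an assumption-laden illustration, not Chen Gang's actual code: the dicts below stand in for Redis keys so the example runs without a server, and the comments note the corresponding Redis commands:

```python
# In-memory stand-ins for the Redis keys; in production these would be
# redis-py calls (r.incr, r.sadd, r.srem, r.sinter, r.sdiff).
counts = {}    # like_count:<feed_id> -> int (string counter in Redis)
like_set = {}  # like_list:<feed_id>  -> set of uids
friends = {}   # friends:<uid>        -> set of friend uids

def like(uid, feed_id):
    s = like_set.setdefault(feed_id, set())
    if uid in s:
        return False                                  # duplicate rejected
    s.add(uid)                                        # SADD like_list:<feed_id>
    counts[feed_id] = counts.get(feed_id, 0) + 1      # INCR like_count:<feed_id>
    return True

def unlike(uid, feed_id):
    s = like_set.get(feed_id, set())
    if uid in s:
        s.remove(uid)                                 # SREM like_list:<feed_id>
        counts[feed_id] -= 1                          # DECR like_count:<feed_id>

def friend_likes(uid, feed_id):
    # like_friends:<feed_id> = SINTER like_list:<feed_id> friends:<uid>
    return like_set.get(feed_id, set()) & friends.get(uid, set())

def other_likes(uid, feed_id):
    # like_others:<feed_id> = SDIFF like_list:<feed_id> friends:<uid>
    return like_set.get(feed_id, set()) - friends.get(uid, set())
```

In the real design the friend and non-friend lists are pre-computed and only the first 100 likes are cached, so the set operations are paid once per write rather than on every read.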
Project: https://github.com/iqinghe/pcc-like
Second Prize – Huang Dongxu
Utilized a local cache (Chronicle‑Map) for high‑frequency access, wrapping it with a custom service to handle varying value sizes.
Project: https://github.com/c4pt0r/pcc
Second Prize – Tang Fulin
Employed Chronicle‑Map for off‑heap storage, avoiding GC pressure, with mmap persistence. Added a ListMapService to select appropriate map sizes based on data distribution.
Project: https://github.com/tangfl/chestnut
Stress Test Program
The competition used Tsung, an open‑source load testing tool capable of distributed testing across HTTP, WebDAV, SOAP, PostgreSQL, MySQL, LDAP, and XMPP. It can simulate hundreds of thousands of virtual users.
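A Tsung scenario is described in an XML file. The fragment below is a minimal sketch of what a test against a like service might look like; the hosts, port, rates, and the /isLike URL are placeholders, not the contest's actual configuration:

```xml
<?xml version="1.0"?>
<!DOCTYPE tsung SYSTEM "/usr/share/tsung/tsung-1.0.dtd">
<tsung loglevel="notice" version="1.0">
  <clients>
    <client host="localhost" use_controller_vm="true"/>
  </clients>
  <servers>
    <server host="127.0.0.1" port="8080" type="tcp"/>
  </servers>
  <load>
    <!-- ramp users in at a fixed arrival rate for one minute -->
    <arrivalphase phase="1" duration="60" unit="second">
      <users arrival_rate="1000" unit="second"/>
    </arrivalphase>
  </load>
  <sessions>
    <session name="like" probability="100" type="ts_http">
      <request>
        <http url="/isLike?uid=1&amp;oid=1" method="GET"/>
      </request>
    </session>
  </sessions>
</tsung>
```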
First Prize
The first prize remains open; due to limited time and large data sets, a definitive winner was not declared, but top teams are encouraged to continue optimizing their solutions.
Thanks
The high‑performance cloud platform for this challenge was provided by QingCloud, along with support from judges Liang Yupeng, Liu Qi, and Wang Yuanming.
For all source code, visit the PCC repository: https://github.com/archnotes/PCC
High Availability Architecture
Official account for High Availability Architecture.