Design and Implementation of a High‑Performance In‑Memory Cache in Go (MemoryCache)
This article analyzes the shortcomings of existing Go caching libraries, introduces the MemoryCache project, explains its hash‑based bucket design, 4‑ary heap LRU implementation, unified timer strategy, and provides practical usage examples with code snippets for SetWithCallback and GetOrCreateWithCallback.
The author, a veteran with 17 years in IT, outlines the need for a custom in‑memory cache that supports expiration, capacity limits, per‑item TTL, change callbacks, simple API, high performance, and unit testing.
After evaluating several Go cache projects (ttlcache/v2, bigcache, freecache, go‑cache) and finding them lacking in one or more required features, the author discovered the open‑source MemoryCache library and decided to adapt it.
MemoryCache Overview
MemoryCache stores entries in a hash‑based structure where the number of buckets is a power of two (2ⁿ). Each bucket contains a map, a lock, and a heap: the map provides O(1) insertion, deletion, and lookup, while the heap keeps the earliest‑expiring element at the top.
Key Hashing Example
a := 13
fmt.Println(a & 3) // bitwise AND: 1
fmt.Println(a % 4) // modulo: 1, the same result because 4 is a power of two
The author explains that when the bucket count n is a power of two, a & (n-1) equals a % n, and the bitwise AND is cheaper than the modulo instruction, so it is used to hash keys into buckets.
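To make the equivalence concrete, here is a small self-contained sketch (the helper name bucketIndex is illustrative, not taken from the library) showing that the AND trick matches modulo for a power-of-two bucket count:

```go
package main

import "fmt"

// bucketIndex selects a bucket via bitwise AND. This is only valid
// when numBuckets is a power of two (hypothetical helper, not the
// library's actual function).
func bucketIndex(hash uint64, numBuckets uint64) uint64 {
	return hash & (numBuckets - 1)
}

func main() {
	const n = 128 // a power of two, as in WithBucketNum(128)
	for _, h := range []uint64{13, 129, 1000, 1 << 40} {
		// bucketIndex(h, n) and h%n agree for every hash value
		fmt.Println(bucketIndex(h, n) == h%n) // prints true
	}
}
```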
Bucket and Heap Design
Buckets are implemented as map + lock + heap. A min‑heap ordered by expiration time keeps the earliest‑expiring element at index 0, so the eviction candidate can be found in O(1) and removed in O(log n) without scanning the map. A 4‑ary heap is chosen over a binary heap because it roughly halves the tree depth, meaning fewer levels to traverse per sift operation and better cache locality.
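As a sketch of this layout (the type and field names below are illustrative, not the library's actual types), a bucket pairs a map for keyed access with a heap slice for expiration ordering, under one lock:

```go
package main

import (
	"fmt"
	"sync"
)

// element is an illustrative entry type, not the library's Element.
type element struct {
	key      string
	value    any
	expireAt int64 // unix milliseconds; smaller means expires sooner
}

// bucket pairs a map (O(1) lookup by key) with a heap of the same
// elements ordered by expireAt, guarded by a single lock.
type bucket struct {
	mu   sync.RWMutex
	data map[string]*element
	heap []*element // heap-ordered by expireAt; index 0 expires first
}

func main() {
	b := &bucket{data: make(map[string]*element)}
	e := &element{key: "xxx", value: 1, expireAt: 123}
	b.mu.Lock()
	b.data[e.key] = e
	b.heap = append(b.heap, e) // the real code would sift-up here
	b.mu.Unlock()
	fmt.Println(len(b.data), len(b.heap)) // prints 1 1
}
```

Keeping both structures in sync under the same lock is what lets lookups stay O(1) while expiration still has an ordered view of the same entries.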
The article raises three design questions:
Why use a heap for LRU?
What makes a heap implementation efficient?
How to provide a unified timer for elements with different TTLs?
Answers highlight that a min‑heap places the smallest (earliest‑expiring) element at the top, so checking whether anything has expired is a constant‑time peek, and that a single shared, reusable timer can serve elements with arbitrary TTLs, avoiding one timer per entry and the GC pressure that would cause.
Quick Start
Installation
go get github.com/lxzan/memorycache

Usage Example
package main

import (
	"fmt"
	"time"

	"github.com/lxzan/memorycache"
)

func main() {
	mc := memorycache.New(
		memorycache.WithBucketNum(128),
		memorycache.WithBucketSize(1000, 10000),
		memorycache.WithInterval(5*time.Second, 30*time.Second),
	)
	defer mc.Stop()
	mc.Set("xxx", 1, 10*time.Second)
	val, exist := mc.Get("xxx")
	fmt.Printf("val=%v, exist=%v\n", val, exist)
	time.Sleep(32 * time.Second)
	val, exist = mc.Get("xxx")
	fmt.Printf("val=%v, exist=%v\n", val, exist)
}

Main Methods
SetWithCallback
Sets a value with an optional callback that is triggered on expiration, overflow, or deletion.
const (
	ReasonExpired  = Reason(0) // object timed out
	ReasonOverflow = Reason(1) // capacity exceeded
	ReasonDeleted  = Reason(2) // object deleted
)

mc.SetWithCallback("xxx", 1, 10*time.Second, func(ele *memorycache.Element, reason memorycache.Reason) {
	if reason == memorycache.ReasonExpired {
		fmt.Printf("key=%v, value=%v, reason=%v\n", ele.Key, ele.Value, reason)
	}
	// handle other reasons similarly
})

GetOrCreateWithCallback
Retrieves an element or creates it if missing, also supporting a callback.
mc.GetOrCreateWithCallback("xxx", 1, 10*time.Second, func(ele *memorycache.Element, reason memorycache.Reason) {
	// same handling as above
})

Code Analysis – Minimal Quad Heap
The author extracts the Down method from the library, which recursively pushes an element down a 4‑ary heap to maintain heap order.
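Before reading Down, it helps to see the index arithmetic it relies on: in a 4‑ary heap stored in a slice, the children of node i live at i<<2+1 through i<<2+4 (i.e. 4i+1 … 4i+4), and the parent of node i is (i-1)/4. A small self-contained check (helper names are illustrative):

```go
package main

import "fmt"

// children returns the slice indices of node i's children in a 4-ary
// heap laid out in an array, matching the i<<2+1 ... i<<2+4 arithmetic
// used by Down below.
func children(i int) [4]int {
	return [4]int{i<<2 + 1, i<<2 + 2, i<<2 + 3, i<<2 + 4}
}

// parent inverts the mapping: the parent of node i is (i-1)/4.
func parent(i int) int { return (i - 1) >> 2 }

func main() {
	fmt.Println(children(0)) // [1 2 3 4]
	fmt.Println(children(2)) // [9 10 11 12]
	fmt.Println(parent(9))   // 2
}
```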
func (c *heap) Down(i, n int) {
	var index1 = i<<2 + 1 // first child
	if index1 >= n {
		return
	}
	var index2 = i<<2 + 2
	var index3 = i<<2 + 3
	var index4 = i<<2 + 4
	var j = -1
	if index4 < n {
		j = c.min(c.min(index1, index2), c.min(index3, index4))
	} else if index3 < n {
		j = c.min(c.min(index1, index2), index3)
	} else if index2 < n {
		j = c.min(index1, index2)
	} else {
		j = index1
	}
	if j >= 0 && c.Less(j, i) {
		c.Swap(i, j)
		c.Down(j, n)
	}
}

Conclusion
MemoryCache provides a compact, high‑performance in‑memory caching solution with rich features such as per‑item TTL, capacity limits, callbacks, and a unified timer, making it suitable for backend services and easy to integrate or extend.
Rare Earth Juejin Tech Community