
Understanding Go's sync.Pool: Implementation, Usage Scenarios, and Evolution from Go 1.12 to 1.13

This article explains what sync.Pool is and when to use it, walks through the Go 1.12 implementation with its internal structures and algorithms, describes the enhancements introduced in Go 1.13, and analyzes the resulting performance improvements for high-concurrency backend services.

Xueersi Online School Tech Team

In Go, a sync.Pool is a thread‑safe pool that caches temporary objects to reduce allocation overhead and GC pressure, making it valuable for high‑concurrency backend services.

Typical use cases include managing large numbers of short‑lived objects that can be reused across goroutines, such as buffers for fmt.Sprintf or byte slices for network I/O.
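The buffer case above can be sketched as follows. This is a minimal illustrative program, not from the article; the `bufPool` and `format` names are our own:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool hands out *bytes.Buffer values, a common sync.Pool use case:
// New is called only when the pool has nothing to reuse.
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

// format borrows a buffer, uses it, and returns it to the pool.
func format(name string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset() // always reset: a reused buffer may hold stale data
	fmt.Fprintf(buf, "hello, %s", name)
	s := buf.String()
	bufPool.Put(buf) // make the buffer available for reuse
	return s
}

func main() {
	fmt.Println(format("gopher")) // prints "hello, gopher"
}
```

Note the `Reset` before use: the pool makes no guarantee about the state of a reused object, so callers must reinitialize it themselves.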

The Go 1.12 implementation keeps a per-P array of local pools: the top-level Pool struct holds a pointer to that array (local), its size (localSize), and the New constructor, while each per-P entry has a private slot plus a shared slice protected by a mutex. Get first checks the current P's private slot, then pops from the tail of its own shared slice, then tries to steal from other Ps' shared slices, and finally falls back to p.New. Put stores the object in the private slot if it is empty, otherwise appends it to the shared slice.

type Pool struct {
    noCopy    noCopy
    local     unsafe.Pointer // per-P poolLocal array
    localSize uintptr        // size of the local array
    New       func() interface{}
}

// Per-P pool in Go 1.12 (simplified; cache-line padding omitted).
type poolLocalInternal struct {
    private interface{}   // usable only by the owning P
    shared  []interface{} // usable by any P
    Mutex                 // protects shared
}

func (p *Pool) Get() interface{}  { /* private -> shared tail -> steal -> New */ }
func (p *Pool) Put(x interface{}) { /* private slot, else append to shared */ }

The pool is emptied on every GC via poolCleanup, which clears all private slots and shared slices so that no objects are retained across a collection. A side effect is that a fully drained pool must reallocate everything right after GC, which can cause allocation spikes under load.

Go 1.13 adds a victim cache ( victim unsafe.Pointer; victimSize uintptr ) and replaces the mutex-guarded shared slice with poolChain, a lock-free, dynamically sized queue built as a doubly linked list of ring buffers. The owning P pushes and pops at the head (pushHead/popHead), while other Ps steal from the tail (popTail), so the common path needs no lock. During poolCleanup the live pool is moved into the victim cache instead of being discarded, extending the object-reuse window across two GC cycles and smoothing post-GC allocation spikes.

type Pool struct {
    noCopy     noCopy
    local      unsafe.Pointer // per-P poolLocal array
    localSize  uintptr
    victim     unsafe.Pointer // local array from the previous GC cycle
    victimSize uintptr
    New        func() interface{}
}

// Per-P pool in Go 1.13: the shared slice becomes a lock-free poolChain.
type poolLocalInternal struct {
    private interface{} // usable only by the owning P
    shared  poolChain   // owner pushes/pops at head; others steal from tail
}

func (p *Pool) Get() interface{}  { /* private -> shared.popHead -> steal via popTail -> victim -> New */ }
func (p *Pool) Put(x interface{}) { /* private slot, else shared.pushHead */ }

Performance analysis shows that moving from tail-based, mutex-guarded operations (Go 1.12) to head-based lock-free structures (Go 1.13) removes lock contention from the common Get/Put path, significantly reducing latency and improving scalability as the number of Ps grows.
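The contended path can be exercised with a rough timing harness like the one below (our own sketch, standing in for a proper `go test -bench` run; the `roughGetPutRate` name is illustrative):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// A pool of 1 KiB scratch buffers. Real code often pools *[]byte or a
// wrapper struct instead, to avoid the allocation of a slice header
// when the value is boxed into interface{}.
var bytePool = sync.Pool{
	New: func() interface{} { return make([]byte, 1024) },
}

// roughGetPutRate runs n Get/Put pairs on each of `workers` goroutines
// and returns the wall-clock time taken: the workload Go 1.13's
// lock-free poolChain speeds up relative to the 1.12 mutex.
func roughGetPutRate(workers, n int) time.Duration {
	start := time.Now()
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < n; i++ {
				buf := bytePool.Get().([]byte)
				bytePool.Put(buf)
			}
		}()
	}
	wg.Wait()
	return time.Since(start)
}

func main() {
	fmt.Println("elapsed:", roughGetPutRate(4, 100000))
}
```

For rigorous numbers, prefer a testing.B benchmark with b.RunParallel and compare runs of the same code under Go 1.12 and Go 1.13 toolchains.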

Tags: performance, concurrency, Go, runtime, sync.Pool, object pool
Written by the Xueersi Online School Tech Team, dedicated to innovating and promoting internet education technology.