Implementing a Goroutine Pool in Go for High-Concurrency Task Management
This article explains why Go's lightweight goroutines enable high concurrency, describes the need for a goroutine pool to control resource usage, outlines the pool's architecture with entry and job channels, and provides a complete Go implementation with example usage.
Modern system development involves multitasking, so handling concurrency is essential. Many languages support concurrency, but Go stands out: it can efficiently run hundreds of thousands of goroutines, whereas Java struggles with far fewer threads.
Go implements concurrency through goroutines; the main function runs as the primary goroutine, and without spawning additional goroutines the program executes serially.
Serial execution is inefficient for high‑traffic websites; true parallelism is limited by CPU core count, so concurrency—where multiple tasks interleave execution—is the practical approach, with each goroutine executing a function.
Creating a separate goroutine for every task can exhaust system resources, so a goroutine pool is introduced to cap the number of active goroutines and improve overall efficiency.
The pool design consists of an entryChannel for incoming tasks, a jobChannel for ready‑to‑execute tasks, and a fixed number of worker goroutines that pull tasks from the job channel; a diagram of this architecture is shown in the original article.
Below is the complete Go implementation of the task and pool structures, including task creation, worker logic, and pool execution:
```go
package main

import (
	"fmt"
	"time"
)

// Task wraps a no-argument function returning an error.
type Task struct {
	fu func() error
}

func NewTask(f func() error) *Task {
	return &Task{fu: f}
}

func (t *Task) Execute() {
	_ = t.fu()
}

// Pool is a fixed-size goroutine pool.
type Pool struct {
	EntryChannel chan *Task // external task entry
	worker_num   int        // max number of workers
	JobsChannel  chan *Task // internal ready-task queue
}

func NewPool(cap int) *Pool {
	return &Pool{
		EntryChannel: make(chan *Task),
		worker_num:   cap,
		JobsChannel:  make(chan *Task),
	}
}

// worker drains JobsChannel until it is closed.
func (p *Pool) worker(work_ID int) {
	for task := range p.JobsChannel {
		task.Execute()
		fmt.Println("worker ID", work_ID, "executed task")
	}
}

func (p *Pool) Run() {
	// 1. Start a fixed number of workers.
	for i := 0; i < p.worker_num; i++ {
		go p.worker(i)
	}
	// 2. Forward tasks from the entry channel to the job channel.
	//    This loop ends only when the producer closes EntryChannel.
	for task := range p.EntryChannel {
		p.JobsChannel <- task
	}
	// 3. EntryChannel has already been closed by the producer at this
	//    point (closing it again here would panic); closing JobsChannel
	//    lets the workers exit their range loops.
	close(p.JobsChannel)
}

func main() {
	// Create a sample task.
	t := NewTask(func() error {
		fmt.Println(time.Now())
		return nil
	})
	// Create a pool with up to 3 workers.
	p := NewPool(3)
	// Continuously feed the task into the pool; since EntryChannel is
	// never closed here, Run blocks and the demo prints forever.
	go func() {
		for {
			p.EntryChannel <- t
		}
	}()
	// Start the pool.
	p.Run()
}
```

The test output (illustrated in the article with screenshots) shows timestamps being printed continuously, confirming that the pool processes tasks concurrently while respecting the worker limit.
In conclusion, although Go can spawn a massive number of goroutines, proper management via channels, context, and a custom goroutine pool is crucial to avoid deadlocks and uncontrolled resource consumption, ensuring graceful shutdown and efficient execution.
360 Quality & Efficiency
360 Quality & Efficiency focuses on seamlessly integrating quality and efficiency in R&D, sharing 360’s internal best practices with industry peers to foster collaboration among Chinese enterprises and drive greater efficiency value.