In-depth Source Analysis of Go Channels (Go 1.14)
This article examines Go's channel mechanism at the source level, based on the Go 1.14 runtime implementation: a usage example, the internal hchan data structure, channel creation, read and write operations, closing behavior, and common pitfalls. Channels are the primary communication primitive between goroutines in Go and are used throughout the language; reading the runtime source explains how they work and how to avoid their sharp edges.
1. Usage example
// Two ways to create a channel
channelUnbuffered := make(chan int)
channelBuffered := make(chan int, 4)

// Asynchronously write data to the channels
go func() {
	for i := 0; i <= 4; i++ {
		channelUnbuffered <- i + 10
		channelBuffered <- i
	}
	// Close both channels
	close(channelUnbuffered)
	close(channelBuffered)
}()

// Read data from the channels
for {
	time.Sleep(2 * time.Second)
	select {
	case v := <-channelUnbuffered:
		fmt.Println("read from channelUnbuffered", v)
	case v := <-channelBuffered:
		fmt.Println("read from channelBuffered", v)
	default:
		fmt.Println("no data")
	}
}

The code writes to both an unbuffered and a buffered channel from a separate goroutine and reads from them in the main goroutine, demonstrating basic send, receive, and close operations. Note that once both channels are closed and drained, every receive case succeeds immediately with the zero value, so the loop prints zeros forever rather than falling through to the default branch.
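Because a receive from a closed, drained channel yields the element type's zero value immediately, the loop above never terminates on its own. The comma-ok form of receive (or a range loop) distinguishes real data from a closed channel; a minimal sketch:

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 2)
	ch <- 1
	ch <- 2
	close(ch)

	// The second return value reports whether the receive got a real
	// element (true) or the zero value of a closed, drained channel (false).
	for {
		v, ok := <-ch
		if !ok {
			fmt.Println("channel closed")
			break
		}
		fmt.Println("read", v)
	}
}
```

A `for v := range ch` loop is equivalent: it terminates automatically when the channel is closed and drained.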
2. Channel data structure
type hchan struct {
	qcount   uint           // number of elements currently in the queue
	dataqsiz uint           // capacity of the buffer (circular queue)
	buf      unsafe.Pointer // pointer to the buffer storing elements
	elemsize uint16         // size of each element, determined by the element type
	closed   uint32         // non-zero if the channel is closed
	elemtype *_type         // runtime type of the elements
	sendx    uint           // send index: next slot to write in buf
	recvx    uint           // receive index: next slot to read from buf
	recvq    waitq          // list of goroutines waiting to receive
	sendq    waitq          // list of goroutines waiting to send
	lock     mutex          // protects all fields above
}

The hchan struct holds the buffer, counters, and two wait queues (recvq and sendq) that store descriptors of goroutines blocked on receive or send.
3. Channel creation
// Compute the total buffer size: element size times capacity
mem, overflow := math.MulUintptr(elem.size, uintptr(size))
switch {
case mem == 0:
	// Unbuffered channel, or elements of size zero: no buffer needed
	c = (*hchan)(mallocgc(hchanSize, nil, true))
	c.buf = c.raceaddr() // dummy address, used only by the race detector
case elem.ptrdata == 0:
	// Elements contain no pointers: allocate hchan and buffer in one block
	c = (*hchan)(mallocgc(hchanSize+mem, nil, true))
	c.buf = add(unsafe.Pointer(c), hchanSize)
default:
	// Elements contain pointers: allocate the buffer separately so the GC can scan it
	c = new(hchan)
	c.buf = mallocgc(mem, elem, true)
}
c.elemsize = uint16(elem.size)
c.elemtype = elem

The runtime decides whether to allocate a single block (for pointer-free or zero-size elements) or two separate allocations (when elements contain pointers, so the garbage collector can track the buffer), then records the element size and type.
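The first case above is more common than it looks: struct{} has size zero, so even a buffered channel of struct{} has mem == 0 and allocates no buffer memory, while qcount still counts the (empty) elements. This makes chan struct{} a cheap semaphore; a small sketch:

```go
package main

import "fmt"

func main() {
	// struct{} has size 0, so mem = 0*8 = 0 and the runtime takes the
	// first allocation case: no buffer memory, yet the element count
	// and capacity behave like any buffered channel.
	sem := make(chan struct{}, 8)
	sem <- struct{}{}
	sem <- struct{}{}
	fmt.Println(len(sem), cap(sem)) // prints "2 8"
}
```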
4. Reading from a channel
func chanrecv(c *hchan, ep unsafe.Pointer, block bool) (selected, received bool) {
	// Fast path: a non-blocking receive on an open channel with no data fails immediately
	if !block && (c.dataqsiz == 0 && c.sendq.first == nil ||
		c.dataqsiz > 0 && atomic.Loaduint(&c.qcount) == 0) &&
		atomic.Load(&c.closed) == 0 {
		return
	}
	lock(&c.lock)
	// Closed channel with no buffered data: return the zero value
	if c.closed != 0 && c.qcount == 0 {
		unlock(&c.lock)
		if ep != nil {
			typedmemclr(c.elemtype, ep)
		}
		return true, false
	}
	// If a sender is waiting, receive from it directly (or from the buffer head,
	// moving the sender's value into the freed slot)
	if sg := c.sendq.dequeue(); sg != nil {
		recv(c, sg, ep, func() { unlock(&c.lock) }, 3)
		return true, true
	}
	// Buffered case: read from the buffer and advance the receive index
	if c.qcount > 0 {
		qp := chanbuf(c, c.recvx)
		if ep != nil {
			typedmemmove(c.elemtype, ep, qp)
		}
		typedmemclr(c.elemtype, qp)
		c.recvx++
		if c.recvx == c.dataqsiz {
			c.recvx = 0 // wrap around the circular buffer
		}
		c.qcount--
		unlock(&c.lock)
		return true, true
	}
	// Non-blocking receive with no data
	if !block {
		unlock(&c.lock)
		return false, false
	}
	// Blocking receive: enqueue a sudog (mysg, setup elided) and park the goroutine
	c.recvq.enqueue(mysg)
	gopark(chanparkcommit, unsafe.Pointer(&c.lock), waitReasonChanReceive, traceEvGoBlockRecv, 2)
	// After being woken: closed reports whether the wakeup came from closechan
	return true, !closed
}

The function handles the fast-path failure, receives on a closed channel, direct hand-off from a waiting sender, buffered reads, and finally the blocking case, where the goroutine is enqueued on recvq and parked until a sender or closechan wakes it.
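The block parameter is how select maps onto chanrecv: a select with a default clause compiles to a call with block = false, so an empty channel takes the fast path and returns instead of parking the goroutine. A small illustration:

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 1)

	// select with a default clause compiles to chanrecv with block=false:
	// the fast path returns immediately instead of parking the goroutine.
	select {
	case v := <-ch:
		fmt.Println("got", v)
	default:
		fmt.Println("no data") // taken: buffer is empty
	}

	ch <- 42
	select {
	case v := <-ch:
		fmt.Println("got", v) // taken: buffer holds one element
	default:
		fmt.Println("no data")
	}
}
```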
5. Writing to a channel
func chansend(c *hchan, ep unsafe.Pointer, block bool, callerpc uintptr) bool {
	// Fast path: a non-blocking send on an open channel with no room fails immediately
	if !block && c.closed == 0 && ((c.dataqsiz == 0 && c.recvq.first == nil) ||
		(c.dataqsiz > 0 && c.qcount == c.dataqsiz)) {
		return false
	}
	lock(&c.lock)
	// Sending on a closed channel panics
	if c.closed != 0 {
		unlock(&c.lock)
		panic(plainError("send on closed channel"))
	}
	// If a receiver is waiting, hand the value to it directly
	if sg := c.recvq.dequeue(); sg != nil {
		send(c, sg, ep, func() { unlock(&c.lock) }, 3)
		return true
	}
	// Buffered case: write into the buffer and advance the send index
	if c.qcount < c.dataqsiz {
		qp := chanbuf(c, c.sendx)
		typedmemmove(c.elemtype, qp, ep)
		c.sendx++
		if c.sendx == c.dataqsiz {
			c.sendx = 0 // wrap around the circular buffer
		}
		c.qcount++
		unlock(&c.lock)
		return true
	}
	// Non-blocking send with no space
	if !block {
		unlock(&c.lock)
		return false
	}
	// Blocking send: enqueue a sudog (mysg, setup elided) and park the goroutine
	c.sendq.enqueue(mysg)
	gopark(chanparkcommit, unsafe.Pointer(&c.lock), waitReasonChanSend, traceEvGoBlockSend, 2)
	releaseSudog(mysg)
	return true
}

The send routine mirrors the receive logic: a fast-path failure check, a panic on closed channels, direct hand-off to a waiting receiver, a buffered write, and the blocking case, where the goroutine is enqueued on sendq and parked until a receiver frees a slot.
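Two behaviors of chansend are easy to observe from user code: a select/default send compiles to a call with block = false and fails cleanly when the buffer is full, and a send on a closed channel always panics. A sketch of both:

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 1)
	ch <- 1 // buffer now full

	// Non-blocking send: select/default compiles to chansend with
	// block=false, so a full buffer yields false instead of parking.
	select {
	case ch <- 2:
		fmt.Println("sent")
	default:
		fmt.Println("buffer full")
	}

	// Sending on a closed channel always panics, as chansend shows.
	close(ch)
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("recovered:", r)
		}
	}()
	ch <- 3
}
```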
6. Closing a channel
func closechan(c *hchan) {
	if c == nil {
		panic(plainError("close of nil channel"))
	}
	lock(&c.lock)
	if c.closed != 0 {
		unlock(&c.lock)
		panic(plainError("close of closed channel"))
	}
	c.closed = 1
	var glist gList
	// Collect all waiting receivers
	for {
		sg := c.recvq.dequeue()
		if sg == nil {
			break
		}
		if sg.elem != nil {
			typedmemclr(c.elemtype, sg.elem) // receivers get the zero value
			sg.elem = nil
		}
		gp := sg.g
		glist.push(gp)
	}
	// Collect all waiting senders (they will panic when they resume)
	for {
		sg := c.sendq.dequeue()
		if sg == nil {
			break
		}
		gp := sg.g
		glist.push(gp)
	}
	unlock(&c.lock)
	// Ready all collected goroutines outside the lock
	for !glist.empty() {
		gp := glist.pop()
		goready(gp, 3)
	}
}

Closing sets the closed flag, then wakes every goroutine blocked on the channel: receivers resume with the zero value, senders resume and panic, and no goroutine remains parked on a closed channel.
7. Discussion of common issues
After a channel is closed, blocked receivers are woken with the zero value, while blocked senders are woken and panic.
The send and receive queues never both contain waiters at the same time; on an unbuffered channel a sender is paired with a receiver directly.
For buffered channels, qcount > 0 implies there are elements to read and no waiting receivers; only when the buffer is full (qcount == dataqsiz) can the send queue hold waiting goroutines.
A channel must be initialized with make before use: sending to or receiving from a nil channel blocks forever (a common source of goroutine leaks), and closing a nil channel panics.
FIFO order is guaranteed by always reading from the head of the buffer and by the lock-protected hand-off between waiting senders and receivers.
The underlying buffer is a circular queue: sendx and recvx wrap back to 0 when they reach dataqsiz.
Together, these points show how Go's runtime ensures safe, ordered communication between goroutines while handling edge cases such as closing and uninitialized channels.
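The circular-buffer FIFO behavior is easy to verify from user code: fill the buffer, drain one slot, refill, and the receive order is still insertion order even though sendx has wrapped around. A short check:

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 3)

	// Fill, drain one, refill: sendx wraps back to slot 0 internally,
	// but the receive order stays strictly FIFO.
	ch <- 1
	ch <- 2
	ch <- 3
	fmt.Println(<-ch) // 1
	ch <- 4           // written into the slot freed by the read above
	fmt.Println(<-ch, <-ch, <-ch) // 2 3 4
}
```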
Beike Product & Technology
As Beike's official product and technology account, we are committed to building a platform for sharing Beike's product and technology insights, targeting internet/O2O developers and product professionals. We share high-quality original articles, tech salon events, and recruitment information weekly. Welcome to follow us.