
Master Go Concurrency: Goroutines, Scheduler, Race Detection, and Channels Explained

This article walks through Go's concurrency model: how goroutines are scheduled across logical processors, how to control parallelism with GOMAXPROCS, and how to detect and resolve race conditions using atomic operations, mutexes, and channels, with practical code examples for each concept.

Raymond Ops

1. Running programs with goroutines

1.1 Go concurrency and parallelism

Go's concurrency support lets a function run independently of others. When a goroutine is created, the runtime scheduler assigns it to an available logical processor (P), which is bound to an OS thread (M).

The scheduler has two main responsibilities:

Manage all created goroutines and allocate execution time.

Bind OS threads to logical processors.

The scheduler works with three roles: M (OS thread), P (logical processor), and G (goroutine). Each P maintains a local run queue of ready goroutines. When go func is called, the new goroutine is appended to the run queue and later picked up for execution.

Go scheduler diagram

If an OS thread M blocks (e.g., on a system call), the scheduler can bind the P to another M, so other goroutines continue to run. The default limit is 10,000 OS threads, adjustable via runtime/debug.SetMaxThreads.

Go can achieve concurrency on a single logical processor; true parallelism requires multiple logical processors.

Example setting a single logical processor:

<code>package main

import (
    "fmt"
    "runtime"
    "sync"
)

var wg sync.WaitGroup

func main() {
    runtime.GOMAXPROCS(1)
    wg.Add(2)
    fmt.Printf("Begin Coroutines\n")
    go func() {
        defer wg.Done()
        for count := 0; count < 3; count++ {
            for char := 'a'; char < 'a'+26; char++ {
                fmt.Printf("%c ", char)
            }
        }
    }()
    go func() {
        defer wg.Done()
        for count := 0; count < 3; count++ {
            for char := 'A'; char < 'A'+26; char++ {
                fmt.Printf("%c ", char)
            }
        }
    }()
    fmt.Printf("Waiting To Finish\n")
    wg.Wait()
}
</code>

With a single logical processor, the output shows the first goroutine running to completion before the second starts. To run them in parallel, set two logical processors:

<code>runtime.GOMAXPROCS(2)</code>

With only one logical processor, you can force alternating execution using runtime.Gosched(), which yields the processor to other goroutines:

<code>package main

import (
    "fmt"
    "runtime"
    "sync"
)

var wg sync.WaitGroup

func main() {
    runtime.GOMAXPROCS(1)
    wg.Add(2)
    fmt.Printf("Begin Coroutines\n")
    go func() {
        defer wg.Done()
        for count := 0; count < 3; count++ {
            for char := 'a'; char < 'a'+26; char++ {
                if char == 'k' {
                    runtime.Gosched()
                }
                fmt.Printf("%c ", char)
            }
        }
    }()
    go func() {
        defer wg.Done()
        for count := 0; count < 3; count++ {
            for char := 'A'; char < 'A'+26; char++ {
                if char == 'K' {
                    runtime.Gosched()
                }
                fmt.Printf("%c ", char)
            }
        }
    }()
    fmt.Printf("Waiting To Finish\n")
    wg.Wait()
}
</code>

2. Handling race conditions

Concurrent programs often encounter unsynchronized access to shared resources, leading to race conditions when multiple goroutines read and write the same variable.

<code>package main

import (
    "fmt"
    "runtime"
    "sync"
)

var (
    counter int64
    wg      sync.WaitGroup
)

func addCount() {
    defer wg.Done()
    for count := 0; count < 2; count++ {
        value := counter  // read the shared counter
        runtime.Gosched() // yield, widening the race window
        value++
        counter = value // write back: may overwrite another goroutine's update
    }
}

func main() {
    wg.Add(2)
    go addCount()
    go addCount()
    wg.Wait()
    fmt.Printf("counter: %d\n", counter)
}
</code>

Solutions:

Use atomic functions.

Use a mutex to protect the critical section.

Use channels for communication.

2.1 Detecting race conditions

Go provides the -race flag to detect data races:

<code>go build -race example4.go
./example4
</code>
Race detection output

The tool reports the lines where the race occurs.

2.2 Using atomic functions

Atomic operations provide lock‑free synchronization for primitive types:

<code>package main

import (
    "fmt"
    "runtime"
    "sync"
    "sync/atomic"
)

var (
    counter int64
    wg      sync.WaitGroup
)

func addCount() {
    defer wg.Done()
    for count := 0; count < 2; count++ {
        atomic.AddInt64(&counter, 1)
        runtime.Gosched()
    }
}

func main() {
    wg.Add(2)
    go addCount()
    go addCount()
    wg.Wait()
    fmt.Printf("counter: %d\n", counter)
}
</code>

Other useful atomic functions include atomic.StoreInt64 and atomic.LoadInt64, which provide safe writes and reads of shared values.

2.3 Using a mutex

Mutexes lock a critical section so only one goroutine can modify the shared variable at a time:

<code>package main

import (
    "fmt"
    "runtime"
    "sync"
)

var (
    counter int
    wg      sync.WaitGroup
    mutex   sync.Mutex
)

func addCount() {
    defer wg.Done()
    for count := 0; count < 2; count++ {
        mutex.Lock()
        value := counter
        runtime.Gosched()
        value++
        counter = value
        mutex.Unlock()
    }
}

func main() {
    wg.Add(2)
    go addCount()
    go addCount()
    wg.Wait()
    fmt.Printf("counter: %d\n", counter)
}
</code>

3. Sharing data with channels

Go follows the CSP model; channels enable goroutine communication without explicit locks.

<code>unbuffered := make(chan int)      // unbuffered channel of int
buffered := make(chan string, 10) // buffered channel of string, capacity 10
buffered <- "hello world"         // send
value := <-buffered               // receive
</code>

Unbuffered channels synchronize sender and receiver; buffered channels store values up to their capacity.

3.1 Unbuffered channels

Example simulating a tennis match where two players exchange a ball via an unbuffered channel:

<code>package main

import (
    "fmt"
    "math/rand"
    "sync"
    "time"
)

var wg sync.WaitGroup

func player(name string, court chan int) {
    defer wg.Done()
    for {
        ball, ok := <-court
        if !ok {
            fmt.Printf("Player %s Won\n", name)
            return
        }
        if rand.Intn(100)%13 == 0 {
            fmt.Printf("Player %s Missed\n", name)
            close(court)
            return
        }
        fmt.Printf("Player %s Hit %d\n", name, ball)
        ball++
        court <- ball
    }
}

func main() {
    rand.Seed(time.Now().Unix())
    court := make(chan int)
    wg.Add(2)
    go player("candy", court)
    go player("luffic", court)
    court <- 1
    wg.Wait()
}
</code>
Tennis channel example

3.2 Buffered channels

Buffered channels can hold multiple values before a receiver reads them, reducing the need for strict synchronization.

Conclusion

A goroutine runs on a logical processor (P), which has its own OS thread (M) and run queue.

Multiple goroutines can execute concurrently on a single P; true parallelism requires multiple logical processors.

Use the go keyword to launch a goroutine.

Race conditions appear when goroutines concurrently access the same resource.

Mutexes or atomic functions can prevent races.

Channels provide a safer, idiomatic way to share data between goroutines.

Unbuffered channels are synchronous; buffered channels are asynchronous.

Tags: concurrency, Go, Mutex, goroutine, Race Condition, Atomic, Channels
Written by Raymond Ops

Linux ops automation, cloud-native, Kubernetes, SRE, DevOps, Python, Golang and related tech discussions.