
Why Switch from PHP to Go? Boosting Concurrency for Live Streaming

This article explains why backend developers are moving from PHP to Go, demonstrates how Go concurrency tools such as sync.WaitGroup and the errgroup package simplify high‑traffic live‑streaming services, and warns about common closure pitfalls when launching goroutines in loops.

Raymond Ops

1. Reasons to Choose Go

As a backend developer, I mainly use PHP and Go. PHP feels comfortable and fast for simple tasks, but since 2021 our new projects have increasingly been written in Go. The key reasons for switching are:

1) PHP cannot meet our high‑concurrency live‑streaming workload. In the standard php‑fpm model, each request occupies a worker process for its entire duration, so concurrency is capped by the size of the worker pool; that is unsuitable for the massive numbers of simultaneous connections a live‑streaming service must hold.

2) Go is popular among large tech companies. Companies such as Tencent, Baidu, Didi and others have migrated from PHP to Go, indicating a strong industry trend.

3) Go’s simplicity. Compared with Java, Go’s syntax is easy to pick up; I was able to start writing projects after only a couple of weeks of learning.

2. How Go Solves Concurrency Issues

In traditional PHP, handling a user entering a live‑room requires sequentially fetching version info, basic live info, user info, equity info, and statistics. The total latency equals the sum of all individual calls, which degrades user experience.

[Figure: serial PHP request flow]

When rewritten in Go, the request latency becomes the duration of the longest individual operation, because all calls can run concurrently.

[Figure: concurrent Go request flow]

Method 1: sync.WaitGroup

<code>package main

import (
    "context"
    "fmt"
    "sync"
    "time"
)

func main() {
    var (
        VersionDetail, LiveDetail, UserDetail, EquityDetail, StatisticsDetail int
    )
    ctx := context.Background()
    GoNoErr(ctx,
        func() { VersionDetail = 1; time.Sleep(1 * time.Second); fmt.Println("running task 1") },
        func() { LiveDetail = 2; time.Sleep(2 * time.Second); fmt.Println("running task 2") },
        func() { UserDetail = 3; time.Sleep(3 * time.Second); fmt.Println("running task 3") },
        func() { EquityDetail = 4; time.Sleep(4 * time.Second); fmt.Println("running task 4") },
        func() { StatisticsDetail = 5; time.Sleep(5 * time.Second); fmt.Println("running task 5") },
    )
    fmt.Println(VersionDetail, LiveDetail, UserDetail, EquityDetail, StatisticsDetail)
}

// GoNoErr runs each function in its own goroutine and waits for all to finish.
func GoNoErr(ctx context.Context, functions ...func()) {
    var wg sync.WaitGroup
    for _, f := range functions {
        f := f // per-iteration copy (needed before Go 1.22)
        wg.Add(1)
        go func() {
            defer wg.Done()
            f()
        }()
    }
    wg.Wait()
}
</code>

Method 2: errgroup.Group

<code>package main

import (
    "context"
    "fmt"
    "time"

    "golang.org/x/sync/errgroup"
)

func main() {
    var (
        VersionDetail, LiveDetail, UserDetail, EquityDetail, StatisticsDetail int
        err error
    )
    ctx := context.Background()
    err = GoErr(ctx,
        func() error { VersionDetail = 1; time.Sleep(1 * time.Second); fmt.Println("running task 1"); return nil },
        func() error { LiveDetail = 2; time.Sleep(2 * time.Second); fmt.Println("running task 2"); return nil },
        func() error { UserDetail = 3; time.Sleep(3 * time.Second); fmt.Println("running task 3"); return nil },
        func() error { EquityDetail = 4; time.Sleep(4 * time.Second); fmt.Println("running task 4"); return nil },
        func() error { StatisticsDetail = 5; time.Sleep(5 * time.Second); fmt.Println("running task 5"); return nil },
    )
    if err != nil { fmt.Println(err); return }
    fmt.Println(VersionDetail, LiveDetail, UserDetail, EquityDetail, StatisticsDetail)
}

// GoErr runs each function in its own goroutine and returns the first error.
func GoErr(ctx context.Context, functions ...func() error) error {
    var eg errgroup.Group
    for _, f := range functions {
        f := f // capture loop variable (needed before Go 1.22)
        eg.Go(func() error { return f() })
    }
    return eg.Wait()
}
</code>

Both approaches let you split a parent task into multiple child goroutines, dramatically reducing overall latency.

Common Closure Pitfall

When launching goroutines inside a loop, capturing the loop variable directly causes all goroutines to reference the same variable, leading to unexpected results. (Note: since Go 1.22, each loop iteration gets a fresh copy of the variable, so this pitfall only affects code built with earlier Go versions.)

Incorrect pattern (variant 3):

<code>for _, f := range functions {
    eg.Go(func() error { return f() }) // before Go 1.22, f is the same variable for every iteration
}
</code>

Correct patterns:

<code>// Variant 1: index into the slice and copy the function each iteration
for i := range functions {
    f := functions[i]
    eg.Go(func() error { return f() })
}

// Variant 2: shadow the loop variable with a fresh copy inside the loop
for _, f := range functions {
    fs := f
    eg.Go(func() error { return fs() })
}
</code>

These variations ensure each goroutine captures a distinct function reference.

Images below illustrate the erroneous and expected outcomes.

[Figure: incorrect closure result]
[Figure: correct closure result]

Understanding this capture behavior is essential for writing reliable concurrent Go code.

In summary, Go’s native concurrency primitives—sync.WaitGroup and errgroup—provide simple, high‑performance ways to parallelize tasks in backend services, and careful handling of closures prevents subtle bugs when spawning goroutines in loops.

Tags: backend, live streaming, concurrency, Go, closure, errgroup, sync.WaitGroup
Written by Raymond Ops

Linux ops automation, cloud-native, Kubernetes, SRE, DevOps, Python, Golang and related tech discussions.