
C++20 Coroutines and Execution: Building an Asynchronous Game Server Framework

The article explains how C++20 coroutines and the emerging execution framework are used to redesign a single‑threaded game‑server architecture, detailing custom async wrappers, integration with asio and libunifex, job‑type scheduling, example coroutine code, current limitations, and a roadmap toward a simpler, sender/receiver‑based async library.

Tencent Cloud Developer

The article introduces the new C++20 coroutine feature and the still‑in‑proposal execution framework, explaining how they provide fresh approaches to asynchronous programming in C++. It outlines a series of technical talks that will cover basic principles, wrapper implementations, analysis of third‑party libraries, and practical applications within the author’s own framework.

The article begins with the motivation: the author’s previous C++ game‑server framework was built around C++20 coroutines because the server runs mostly on a single main thread with limited multithreading needs. When the framework later needed to serve both front‑end and back‑end code, the original coroutine‑based scheduler proved insufficient, prompting a redesign that leverages newer C++ async concepts.

The series will focus on three main components:

Custom asynchronous implementation of the author’s framework using C++20 coroutines.

asio – a widely used open‑source networking library that provides post and strand functionality.

libunifex – the closest practical implementation of the sender/receiver execution proposal, compatible with C++17/20.

These libraries form the foundation for exploring C++ asynchronous techniques, ultimately aiming for a simple‑to‑use yet efficient async library for business logic.

Framework Async Overview

The rstudio framework’s async system consists of two relatively independent parts:

Portions derived from older asio post/strand implementations, with added business‑level objects such as Fence.

A main‑thread coroutine scheduler, originally a stackless C++17 version and later rebuilt with C++20 coroutine support after GCC 11.1.

Job System Types

The framework defines several job types to categorize work:

enum class JobSystemType : int {
  kLogicJob = 0,       // logic thread (main thread)
  kWorkJob,            // work thread pool
  kSlowJob,            // IO‑dedicated thread pool
  kNetworkJob,         // dedicated network thread
  kNetworkConnectJob,  // extra network connect thread
  kLogJob,             // logging thread
  kNotifyExternalJob,  // external notification thread
  kTotalJobTypes,
};

Each type is described in detail, e.g., kLogicJob runs tasks on the main logic thread, kWorkJob handles small, controllable tasks in a thread pool, and kSlowJob is reserved for IO‑heavy operations.

Example Usage

Posting a task to the job system:

GJobSystem->Post([]() {
    // some calculation here
    GJobSystem->Post([]() {
        // completion-notification code here
    }, rstudio::JobSystemType::kLogicJob);
}, rstudio::JobSystemType::kWorkJob);

Creating a timer job:

uint64_t JobSystemModule::AddAlwaysRunJob(JobSystemType jobType,
            threads::ThreadJobFunction&& periodJob,
            unsigned long periodTimeMs);

Requesting a strand for ordered execution:

auto strand = GJobSystem->RequestStrand(rstudio::JobSystemType::kWorkJob);
strand.Post([](){ /* part1 */ });
strand.Post([](){ /* part2 */ });
// ... further parts

Comparison with Halo Infinite’s Job System

The article compares the author’s framework with the job system presented in Halo Infinite’s GDC talk. Both use a graph‑based dependency model, but Halo’s system relies on explicit job graphs and sync points, while the author’s framework achieves similar behavior using asio’s strand and timer mechanisms.

Halo’s job creation example:

JobSystem& jobSystem = JobSystem::Get();
JobGraphHandle graphHandle = jobSystem.CreateJobGraph();
JobHandle jobA = jobSystem.AddJob(graphHandle, "JobA", [](){...});
JobHandle jobB = jobSystem.AddJob(graphHandle, "JobB", [](){...});
jobSystem.AddJobToJobDependency(jobA, jobB);
jobSystem.SubmitJobGraph(graphHandle);

Sync point example:

SyncPointHandle syncX = jobSystem.CreateSyncPoint(graphHandle, "SyncX");
jobSystem.AddJobToSyncPointDependency(jobA, syncX);
jobSystem.AddSyncPointToJobDependency(syncX, jobB);

Coroutine Example

A full C++20 coroutine example demonstrates task creation, sleeping, looping, RPC calls, and waiting for child coroutines:

// C++20 coroutine
auto clientProxy = mRpcClient->CreateServiceProxy("mmo.HeartBeat");
mScheduler.CreateTask20([clientProxy]() -> rstudio::logic::CoResumingTaskCpp20 {
    auto* task = rco_self_task();
    printf("step1: task is %llu\n", task->GetId());
    co_await rstudio::logic::cotasks::NextFrame{};
    printf("step2 after yield!\n");
    int c = 0;
    while (c < 5) {
        printf("in while loop c=%d\n", c);
        co_await rstudio::logic::cotasks::Sleep(1000);
        c++;
    }
    for (c = 0; c < 5; c++) {
        printf("in for loop c=%d\n", c);
        co_await rstudio::logic::cotasks::NextFrame{};
    }
    printf("step3 %d\n", c);
    auto newTaskId = co_await rstudio::logic::cotasks::CreateTask(false, []()->logic::CoResumingTaskCpp20 {
        printf("from child coroutine!\n");
        co_await rstudio::logic::cotasks::Sleep(2000);
        printf("after child coroutine sleep\n");
    });
    printf("new task create in coroutine: %llu\n", newTaskId);
    printf("Begin wait for task!\n");
    co_await rstudio::logic::cotasks::WaitTaskFinish{ newTaskId, 10000 };
    printf("After wait for task!\n");
    rstudio::logic::cotasks::RpcRequest rpcReq{clientProxy, "DoHeartBeat", rstudio::reflection::Args{3}, 5000};
    auto* rpcret = co_await rpcReq;
    if (rpcret->rpcResultType == rstudio::network::RpcResponseResultType::RequestSuc) {
        assert(rpcret->totalRet == 1);
        auto retval = rpcret->retValue.to<int>();
        assert(retval == 4);
        printf("rpc coroutine run suc, val = %d!\n", retval);
    } else {
        printf("rpc coroutine run failed! result = %d \n", (int)rpcret->rpcResultType);
    }
    co_await rstudio::logic::cotasks::Sleep(5000);
    printf("step4, after 5s sleep\n");
    co_return rstudio::logic::CoNil;
});

The execution result shows the coroutine progressing through steps, loops, child coroutine creation, RPC handling, and final sleep.

Limitations and Future Work

Asio’s scheduler and coroutine parts are separate; coroutine support is currently limited to the main thread.

libunifex’s execution implementation relies heavily on C++20 ranges and custom CPOs, making it hard to understand and maintain.

The C++17 compatibility layer introduces massive macro usage and SFINAE complexity.

Coroutine integration in libunifex is questionable; coroutines are better suited as glue rather than core async nodes.

To address these issues, the author proposes:

Using asio’s scheduler as the underlying scheduler for execution, removing libunifex’s custom scheduler.

Adopting execution’s sender/receiver model for clearer dependency graphs while keeping the familiar asio timer.

Creating custom sender adapters to simplify business‑level usage.

Integrating execution with other concurrency libraries (e.g., ISPC) for mixed‑environment workloads.

The article concludes with a roadmap: first introduce execution fundamentals and libunifex, then dive into asio’s scheduler and coroutine implementation, and finally apply these concepts back to the author’s framework.

Author Bio

Shen Fang, a backend engineer at Tencent, focuses on cross‑engine server development and game‑play technologies.

Tags: Game development, asynchronous, Coroutine, C++20, asio, libunifex, execution
Written by Tencent Cloud Developer

Official Tencent Cloud community account that brings together developers, shares practical tech insights, and fosters an influential tech exchange community.
