How to Bind Tokio Tasks to Specific CPU Cores with core_affinity_rs
This article demonstrates how to bind Tokio async tasks to specific CPU cores on Linux using the core_affinity_rs crate, showing code examples for single‑core and multi‑core affinity, performance monitoring on Ubuntu, and step‑by‑step modifications to the Tokio runtime builder.
Tokio is a popular asynchronous runtime in the Rust ecosystem. In production, you may want to bind a Tokio application to specific CPU cores to control load distribution.
First, a simple multi‑task program is presented:
<code>use tokio::runtime;

pub fn main() {
    let rt = runtime::Builder::new_multi_thread()
        .enable_all()
        .build()
        .unwrap();
    rt.block_on(async {
        for i in 0..8 {
            println!("num {}", i);
            tokio::spawn(async move {
                loop {
                    let mut sum: i32 = 0;
                    for i in 0..100000000 {
                        sum = sum.overflowing_add(i).0;
                    }
                    println!("sum {}", sum);
                }
            });
        }
    });
}
</code>The program builds a Tokio runtime and spawns several asynchronous tasks, each running an infinite loop that repeatedly adds numbers with overflowing_add and prints the result.
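A quick aside: overflowing_add returns a tuple of the wrapped sum and an overflow flag, which is why the loop takes the .0 field. A minimal illustration:

```rust
fn main() {
    // Normal addition: the flag is false and the value is the plain sum.
    let (v, overflowed) = 1i32.overflowing_add(2);
    assert_eq!(v, 3);
    assert!(!overflowed);

    // At the type's upper bound the value wraps around and the flag is set.
    let (v, overflowed) = i32::MAX.overflowing_add(1);
    assert_eq!(v, i32::MIN);
    assert!(overflowed);

    println!("ok");
}
```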
Running this on Ubuntu 20 with a 4‑core CPU shows load on every core (monitoring screenshot below):
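For readers without a graphical monitor, the core count and per-core activity can also be checked from the terminal; a quick sketch using standard Linux tooling:

```shell
# How many logical cores the runtime (and core_affinity) will see:
nproc

# One snapshot of the kernel's per-core counters; the columns after the
# name are cumulative user/nice/system/idle/... jiffies for that core:
grep '^cpu[0-9]' /proc/stat
```

Tools such as htop (press 1 for the per-core view) show the same data interactively.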
To bind the load to a particular core, the core_affinity_rs crate (https://github.com/Elzair/core_affinity_rs) can be used. It provides cross-platform CPU affinity management for Linux, macOS, and Windows.
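To follow along, the crate (published on crates.io as core_affinity) and Tokio need to be declared in Cargo.toml; the versions below are illustrative, so check crates.io for the current releases:

```toml
[dependencies]
# Versions are illustrative; check crates.io for the latest releases.
core_affinity = "0.8"
tokio = { version = "1", features = ["full"] }
```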
Modifying the code to set affinity on thread start:
<code>use tokio::runtime;

pub fn main() {
    let core_ids = core_affinity::get_core_ids().unwrap();
    println!("core num {}", core_ids.len());
    let core_id = core_ids[1];
    let rt = runtime::Builder::new_multi_thread()
        .on_thread_start(move || {
            // Pin every worker thread to the chosen core (CoreId is Copy,
            // so it can be used directly inside the Fn closure).
            core_affinity::set_for_current(core_id);
        })
        .enable_all()
        .build()
        .unwrap();
    rt.block_on(async {
        for i in 0..8 {
            println!("num {}", i);
            tokio::spawn(async move {
                loop {
                    let mut sum: i32 = 0;
                    for i in 0..100000000 {
                        sum = sum.overflowing_add(i).0;
                    }
                    println!("sum {}", sum);
                }
            });
        }
    });
}
</code>When the multi-threaded runtime is built, on_thread_start sets the CPU affinity for each worker thread as it starts. Monitoring shows the load confined to the chosen core:
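Note that this callback pins every worker thread to the same core. If instead each worker should land on its own core, a common variant hands out core indices from an atomic counter inside on_thread_start. The index-assignment logic alone can be sketched with plain std threads (no tokio or core_affinity needed here; the core count and thread count are illustrative):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    let num_cores: usize = 4; // stand-in for core_affinity::get_core_ids().len()
    let counter = Arc::new(AtomicUsize::new(0));

    let handles: Vec<_> = (0..8)
        .map(|_| {
            let counter = Arc::clone(&counter);
            // In real code this closure body would live in on_thread_start,
            // ending with core_affinity::set_for_current(core_ids[idx]).
            thread::spawn(move || counter.fetch_add(1, Ordering::SeqCst) % num_cores)
        })
        .collect();

    let assigned: Vec<usize> = handles.into_iter().map(|h| h.join().unwrap()).collect();
    // Eight threads over four cores: each core index is handed out exactly twice.
    for core in 0..num_cores {
        assert_eq!(assigned.iter().filter(|&&c| c == core).count(), 2);
    }
    println!("assignments ok");
}
```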
To bind tasks across multiple cores, the following pattern can be used:
<code>use tokio::runtime;

pub fn main() {
    let core_ids = core_affinity::get_core_ids().unwrap();
    println!("core num {}", core_ids.len());
    let rt = runtime::Builder::new_multi_thread()
        .enable_all()
        .build()
        .unwrap();
    let mut idx = 2;
    rt.block_on(async {
        for i in 0..8 {
            println!("num {}", i);
            // Hand out cores round-robin, starting from the third core
            // and wrapping back once the last core has been used.
            let core_id = core_ids[idx];
            if idx == core_ids.len() - 1 {
                idx = 2;
            } else {
                idx += 1;
            }
            tokio::spawn(async move {
                // Pin the worker thread this task starts on.
                let res = core_affinity::set_for_current(core_id);
                println!("{}", res);
                loop {
                    let mut sum: i32 = 0;
                    for i in 0..100000000 {
                        sum = sum.overflowing_add(i).0;
                    }
                    println!("sum {}", sum);
                }
            });
        }
    });
}
</code>This code cycles through a subset of the CPU cores (here, the third and fourth cores) and assigns each spawned task to the next core in turn, spreading the load evenly across them. Note that set_for_current pins the worker thread a task happens to start on, not the task itself; because these tasks never yield, each stays on its thread, but tasks that await could later migrate to a differently pinned worker. The monitoring screenshot below confirms that the workload is bound to the intended cores:
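As a side note, the manual if/else wrap-around above can also be expressed with modular arithmetic. A small sketch of the equivalent index sequence, where nth_core is a hypothetical helper and the start offset of 2 and the 4-core count come from this example:

```rust
// Round-robin over core indices in [start, num_cores), equivalent to
// the if/else wrap-around in the example above.
fn nth_core(start: usize, num_cores: usize, n: usize) -> usize {
    start + n % (num_cores - start)
}

fn main() {
    // With 4 cores and a start index of 2, tasks cycle over cores 2, 3, 2, 3, ...
    let seq: Vec<usize> = (0..6).map(|n| nth_core(2, 4, n)).collect();
    assert_eq!(seq, vec![2, 3, 2, 3, 2, 3]);
    println!("cycle ok");
}
```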
This concludes the discussion on CPU affinity for Tokio applications; stay tuned for the next topic.
JD Cloud Developers
JD Cloud Developers is a JD Technology Group platform for technical sharing and communication among AI, cloud computing, IoT, and related developers. It publishes JD product technical information, industry content, and tech event news. Embrace technology and partner with developers to envision the future.