I have a Rust async server based on the Tokio runtime. It has to process a mix of latency-sensitive, I/O-bound requests and heavy CPU-bound requests.
I don't want the heavy CPU-bound requests to block the latency-sensitive ones.
Tokio's error message was misleading. The problem was due to the Runtime object being dropped in an async context.

The workaround is to use a Handle, not the Runtime directly, for spawning tasks on the other runtime:
fn main() {
    // Runtime that runs the latency-sensitive async code.
    let mut main_runtime = tokio::runtime::Runtime::new().unwrap();

    // Second runtime that serves as the CPU-bound thread pool (Tokio 0.2 API).
    let cpu_pool = tokio::runtime::Builder::new().threaded_scheduler().build().unwrap();

    // This is the fix/workaround: move a Handle into the async block instead of
    // the Runtime itself, so the Runtime is never dropped in an async context.
    let cpu_pool = cpu_pool.handle().clone();

    main_runtime
        .block_on(main_runtime.spawn(async move {
            cpu_pool.spawn(async { /* CPU-heavy work */ }).await
        }))
        .unwrap()
        .unwrap();
}
While Tokio already has a thread pool, the Tokio documentation advises:
If your code is CPU-bound and you wish to limit the number of threads used to run it, you should run it on another thread pool such as rayon. You can use an oneshot channel to send the result back to Tokio when the rayon task finishes.
So, if you want a separate thread pool for CPU-heavy work, a good way is to use a crate like Rayon and send the result back to the Tokio task.
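For instance, a minimal sketch of that pattern could look like this (the heavy_compute function and the summation workload are only illustrative):

use tokio::sync::oneshot;

async fn heavy_compute(input: Vec<u64>) -> u64 {
    let (tx, rx) = oneshot::channel();
    rayon::spawn(move || {
        // The CPU-heavy work runs on Rayon's thread pool, off the Tokio workers.
        let sum: u64 = input.iter().sum();
        // Ignore the error: the receiver may be gone if the caller gave up.
        let _ = tx.send(sum);
    });
    // Awaiting the oneshot receiver does not block a Tokio worker thread.
    rx.await.expect("the rayon task panicked or was dropped")
}

#[tokio::main]
async fn main() {
    let result = heavy_compute((1..=1_000_000).collect()).await;
    println!("sum = {}", result);
}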
Starting a Tokio runtime already creates a thread pool. The relevant options are core_threads and max_threads on the runtime Builder.

Roughly speaking, core_threads controls how many threads will be used to process asynchronous code, and max_threads - core_threads is how many threads will be used for blocking work (emphasis mine):

Otherwise as core_threads are always active, it limits additional threads (e.g. for blocking annotations) as max_threads - core_threads.
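As a rough sketch, configuring those options on the builder (Tokio 0.2-era API, illustrative values) might look like this:

use tokio::runtime::Builder;

fn main() {
    // 4 core threads for async tasks; up to 8 - 4 = 4 extra threads for blocking work.
    let mut runtime = Builder::new()
        .threaded_scheduler()
        .core_threads(4)
        .max_threads(8)
        .enable_all()
        .build()
        .unwrap();

    runtime.block_on(async {
        // async application code goes here
    });
}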
You can also specify these options through the tokio::main attribute.
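For example (again assuming the Tokio 0.2-era option names; the values are illustrative):

// 4 threads for async code, 8 threads in total, so 4 are left for blocking work.
#[tokio::main(core_threads = 4, max_threads = 8)]
async fn main() {
    // async application code goes here
}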
You can then annotate blocking code with either task::spawn_blocking or task::block_in_place:
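A minimal sketch of both annotations (the config-loading and line-counting workloads are illustrative):

use tokio::task;

async fn load_config(path: std::path::PathBuf) -> std::io::Result<String> {
    // spawn_blocking moves the closure onto one of the extra blocking threads.
    task::spawn_blocking(move || std::fs::read_to_string(path))
        .await
        .expect("blocking task panicked")
}

async fn parse_input(data: String) -> usize {
    // block_in_place keeps the work on the current worker thread, but tells the
    // scheduler to move other queued tasks elsewhere (multi-threaded runtime only).
    task::block_in_place(move || data.lines().count())
}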
spawn_blocking can easily take all of the threads available in the one and only runtime, forcing other futures to wait on them.
You can make use of techniques like a Semaphore to restrict maximum parallelism in this case.
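A minimal sketch of that idea, assuming the Tokio 1.x tokio::sync::Semaphore API (the permit count and job bodies are illustrative):

use std::sync::Arc;
use tokio::sync::Semaphore;

#[tokio::main]
async fn main() {
    // Allow at most 2 CPU-heavy blocking jobs to run at the same time.
    let sem = Arc::new(Semaphore::new(2));

    let mut handles = Vec::new();
    for job_id in 0..8 {
        let sem = Arc::clone(&sem);
        handles.push(tokio::spawn(async move {
            // Wait for a permit before occupying a blocking thread.
            let _permit = sem.acquire_owned().await.expect("semaphore closed");
            tokio::task::spawn_blocking(move || {
                // CPU-heavy, blocking work for `job_id` would go here.
                println!("running job {}", job_id);
            })
            .await
            .expect("blocking task panicked");
            // `_permit` is dropped here, releasing the slot for the next job.
        }));
    }

    for handle in handles {
        handle.await.unwrap();
    }
}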