Calling Runtime::new creates a multithreaded Tokio runtime, but Tokio also has a single-threaded runtime that you can create by using the runtime builder like this: Builder::new_current_thread().enable_all().build().unwrap(). If you do that, you end up with a peculiar problem: a deadlock. The reason for that is interesting and one that you should know about.
Tokio’s single-threaded runtime uses only the thread it’s called on for both the executor and the reactor. This is very similar to what we did in the first version of our runtime in Chapter 8. We used the Poll instance to park our executor directly. When both our reactor and executor execute on the same thread, they must have the same mechanism to park themselves and wait for new events, which means there will be a tight coupling between them.
To handle an event, the reactor must wake up first so it can call Waker::wake, but it’s the executor that parked the thread last. If the executor parked by calling thread::park (as ours does), the reactor is parked along with it and will never wake up, since both run on the same thread. The only way for this to work is for the executor to park on something it shares with the reactor (as we did with Poll). Since our executor isn’t integrated with Tokio that tightly, all we get is a deadlock.
Now, if we try to run our program once more, we get the following output:
Program starting
main: 1 pending tasks.
Sleep until notified.
main: 1 pending tasks.
Sleep until notified.
main: 1 pending tasks.
Sleep until notified.
HelloAsyncAwait1
main: 1 pending tasks.
Sleep until notified.
main: 1 pending tasks.
Sleep until notified.
main: 1 pending tasks.
Sleep until notified.
HelloAsyncAwait2
main: All tasks are finished
Okay, so now everything works. The only difference is that our executor is woken a few extra times, but the program completes and produces the expected output.
Before we discuss what we just witnessed, let’s do one more experiment.
Isahc is an HTTP client library that promises to be executor-agnostic, meaning that it doesn’t rely on any specific executor. Let’s put that to the test.
First, we add a dependency on isahc by typing the following:
cargo add isahc
Then, we rewrite our main function so it looks like this:
ch10/b-rust-futures-examples/src/main.rs (async_main3)
use isahc::prelude::*;

async fn async_main() {
    println!("Program starting");

    let url = "http://127.0.0.1:8080/600/HelloAsyncAwait1";
    let mut res = isahc::get_async(url).await.unwrap();
    let txt = res.text().await.unwrap();
    println!("{txt}");

    let url = "http://127.0.0.1:8080/400/HelloAsyncAwait2";
    let mut res = isahc::get_async(url).await.unwrap();
    let txt = res.text().await.unwrap();
    println!("{txt}");
}
Now, if we run our program by writing cargo run, we get the following output:
Program starting
main: 1 pending tasks.
Sleep until notified.
main: 1 pending tasks.
Sleep until notified.
main: 1 pending tasks.
Sleep until notified.
HelloAsyncAwait1
main: 1 pending tasks.
Sleep until notified.
main: 1 pending tasks.
Sleep until notified.
main: 1 pending tasks.
Sleep until notified.
HelloAsyncAwait2
main: All tasks are finished
So, we get the expected output without having to jump through any hoops.
Why does all this have to be so unintuitive?
The answer brings us to the common challenges we all face when programming with async Rust. Let’s cover some of the most noticeable ones and explain why they exist, so we can figure out how best to deal with them.