What a ride, huh? If you’ve made it to the end of this chapter, you’ve done a fantastic job, and I have good news for you: you already know pretty much everything about how Rust’s futures work and what makes them special. All the complicated topics are covered.
In the next, and last, chapter, we’ll switch over from our hand-made coroutines to proper async/await. This will seem like a breeze compared to what you’ve gone through so far.
Before we continue, let’s stop for a moment and take a look at what we’ve learned in this chapter.
First, we expanded our coroutine implementation so that we could store variables across wait points. This is pretty important if our coroutine/wait syntax is going to rival regular synchronous code in readability and ergonomics.
After that, we learned how we could store and restore variables that held references, which is just as important as being able to store data.
Next, we saw firsthand something that we’d never see in Rust unless we implemented an asynchronous system ourselves, as we did in this chapter (quite a task just to prove a single point). We saw how moving coroutines that hold self-references causes serious memory safety issues, and exactly why we need something to prevent such moves.
That brought us to pinning and self-referential structs, and if you didn’t know about these things already, you do now. In addition to that, you should at least know what a pin projection is and what we mean by structural pinning.
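To recap the core problem in a minimal sketch (the `SelfRef` struct and its fields here are illustrative, not the chapter’s own code): a self-referential struct stores a pointer into itself, so moving the struct leaves that pointer dangling at the old location.

```rust
// A self-referential struct: `ptr` is meant to point at this
// struct's own `data` field.
struct SelfRef {
    data: String,
    ptr: *const String,
}

fn main() {
    // Create the value on the heap and make it self-referential.
    let mut boxed = Box::new(SelfRef {
        data: String::from("hello"),
        ptr: std::ptr::null(),
    });
    boxed.ptr = &boxed.data;
    let old_addr = boxed.ptr as usize;

    // Move the value out of the box and onto the stack. The bytes are
    // copied to a new location, but `ptr` still holds the OLD address.
    let moved = *boxed;
    let new_addr = &moved.data as *const String as usize;

    assert_eq!(moved.ptr as usize, old_addr); // stale: the old, freed location
    assert_ne!(old_addr, new_addr); // the struct itself has moved
    // Dereferencing `moved.ptr` here would be a use-after-free: undefined
    // behavior. Pin exists precisely to rule out this kind of move.
}
```

This is the same class of bug we triggered with our coroutines, just boiled down to a few lines.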
Then, we looked at the differences between pinning a value to the stack and pinning it to the heap. You even saw how easy it is to break the Pin guarantee when pinning something to the stack, and why you should be very careful when doing so.
You also know about some tools that are widely used to tackle both pin projections and stack pinning and make both much safer and easier to use.
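As a concrete pointer to those tools: the standard library now ships a safe way to pin to the stack, the `std::pin::pin!` macro (stable since Rust 1.68), while `Box::pin` pins to the heap, and the widely used `pin-project` crate helps with safe pin projections. A minimal sketch using only the standard library:

```rust
use std::pin::{pin, Pin};

fn main() {
    // Heap pinning: Box::pin allocates the value and returns Pin<Box<T>>.
    // The pinned value owns its allocation, so it can be returned from a
    // function or stored anywhere without the pointee ever moving.
    let on_heap: Pin<Box<String>> = Box::pin(String::from("heap"));

    // Stack pinning: the pin! macro pins a value to the current stack
    // frame. It shadows the unpinned value, so safe code can no longer
    // reach it and move it - unlike a hand-rolled Pin::new_unchecked.
    let on_stack: Pin<&mut String> = pin!(String::from("stack"));

    // Pin<P> dereferences to the pointee, so reading works as usual.
    assert_eq!(*on_heap, "heap");
    assert_eq!(*on_stack, "stack");
}
```

The trade-off follows from the recap above: heap pinning is the easy, allocation-per-value option, while stack pinning avoids the allocation but ties the pinned value’s lifetime to the enclosing scope.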
Next, we got firsthand experience with how we could use pinning to prevent the issues we had with our coroutine implementation.
What we’ve built so far is pretty impressive as well. We have the following:
- A coroutine implementation we’ve created ourselves
- Coroutine/wait syntax and a preprocessor that helps us with the boilerplate for our coroutines
- Coroutines that can safely store both data and references across wait points
- An efficient runtime that stores, schedules, and polls the tasks to completion
- The ability to spawn new tasks onto the runtime so that one task can spawn hundreds of new tasks that will run concurrently
- A reactor that uses epoll/kqueue/IOCP under the hood to efficiently wait for and respond to new events reported by the operating system
I think this is pretty cool.
We’re not quite done with this book yet. In the next chapter, you’ll see how, with just a few changes, we can have our runtime run futures created by async/await instead of our own coroutine implementation. This lets us leverage all the advantages of async Rust. We’ll also take some time to discuss the state of async Rust today, the different runtimes you’ll encounter, and what we might expect in the future.
All the heavy lifting is done now. Well done!