Coroutines come in two flavors: asymmetric and symmetric. Asymmetric coroutines yield to a scheduler, and they're the ones we'll focus on. Symmetric coroutines yield to a specific destination, such as a different coroutine.
While coroutines are a pretty broad concept in general, the introduction of coroutines as objects in programming languages is what really makes this way of handling concurrency rival the ease of use that OS threads and fibers/green threads are known for.
You see, when you write async in Rust or JavaScript, the compiler rewrites what looks like a normal function into one that returns a future (in the case of Rust) or a promise (in the case of JavaScript). await, on the other hand, yields control to the runtime scheduler, and the task is suspended until the future/promise you're awaiting has resolved.
This way, we can write programs that handle concurrent operations in almost the same way we write our normal sequential programs.
Our JavaScript program can now be written as follows:
async function run() {
    await timer(200);
    await timer(100);
    await timer(50);
    console.log("I'm the last one");
}
You can think of the run function as a pausable task consisting of several sub-tasks. At each await point, it yields control to the scheduler (in this case, the well-known JavaScript event loop).
Once one of the sub-tasks changes state to either fulfilled or rejected, the task is scheduled to continue to the next step.
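We can sketch the same mechanics in Rust by hand. The following toy future (all names here are illustrative, not from the book's code) stands in for the JavaScript timer: it is "pending" on its first poll and "fulfilled" on the next, at which point the awaiting task is resumed and continues to the next step:

```rust
use std::future::Future;
use std::pin::{pin, Pin};
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A toy sub-task standing in for the JavaScript timer: pending on the first
// poll, fulfilled on the next.
struct Timer {
    fired: bool,
}

impl Future for Timer {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.fired {
            Poll::Ready(())
        } else {
            self.fired = true;
            // A real timer would arrange a wake-up later; we just ask to be
            // polled again immediately.
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}

// A no-op Waker so we can poll by hand without a real scheduler.
fn noop_waker() -> Waker {
    fn raw() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker {
            raw()
        }
        fn noop(_: *const ()) {}
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    unsafe { Waker::from_raw(raw()) }
}

fn main() {
    let mut task = pin!(async {
        Timer { fired: false }.await;
        "done"
    });
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    // The task suspends at the await point while the sub-task is pending...
    assert!(matches!(task.as_mut().poll(&mut cx), Poll::Pending));
    // ...and resumes once the sub-task has fulfilled.
    assert!(matches!(task.as_mut().poll(&mut cx), Poll::Ready("done")));
}
```

This is, of course, only a sketch: a real event loop would park the task and poll it again only after the waker signals that the timer actually fired.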
When using Rust, you can see the same transformation happening with the function signature when you write something such as this:
async fn run() -> () { … }
The compiler wraps the return type, so instead of returning the type (), the function returns a Future with an output type of ():

fn run() -> impl Future<Output = ()>
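To see that the two signatures really are equivalent, here's a minimal, runnable sketch. The block_on helper is our own stand-in for an executor, built only on the standard library, not something from the book:

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// What you write:
async fn run() {
    // await points would go here
}

// What the compiler's rewrite is conceptually equivalent to: a plain
// function returning an anonymous type that implements Future<Output = ()>.
fn run_desugared() -> impl Future<Output = ()> {
    async {}
}

// A tiny single-future executor so the example runs without an async
// runtime: poll in a loop with a no-op Waker.
fn block_on<F: Future>(fut: F) -> F::Output {
    fn raw() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker {
            raw()
        }
        fn noop(_: *const ()) {}
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    let waker = unsafe { Waker::from_raw(raw()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    // Calling an async fn does nothing yet; both calls just create a future.
    block_on(run());
    block_on(run_desugared());
    println!("both futures completed");
}
```

Note that calling run() doesn't execute anything by itself; it only constructs the future, which does nothing until it's polled.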
Syntactically, Rust's futures 0.1 were a lot like the promise example we just showed, and the Rust futures we use today have a lot in common with how async/await works in JavaScript.
This way of rewriting what looks like normal functions and code into something else has a lot of benefits, but it's not without its drawbacks.
As with any stackless coroutine implementation, full pre-emption can be hard, or even impossible, to implement. These functions have to yield at specific points, and in contrast to fibers/green threads, there is no way to suspend execution in the middle of a stack frame. Some level of pre-emption is possible by having the runtime or compiler insert pre-emption points at every function call, for example, but it's not the same as being able to pre-empt a task at any point during its execution.
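To make the "specific yield points" limitation concrete, here is a hand-written sketch (all names invented for illustration; the real compiler-generated type is anonymous) of the kind of state machine a stackless coroutine with two suspension points compiles into. There is no saved stack, so resume can only continue from one of the enumerated states, never from the middle of a computation:

```rust
// What resuming the task can produce: either we hit the next suspension
// point, or the whole task is finished.
enum Step<T> {
    Yielded,     // suspended at an await point; control is back with the scheduler
    Complete(T), // the task ran to the end
}

// The task's "stack" is just this struct: one integer recording which
// await point we are parked at.
struct RunTask {
    state: u8,
}

impl RunTask {
    fn new() -> Self {
        RunTask { state: 0 }
    }

    // resume() can only pick up at the states we enumerated. A pre-emption
    // point is exactly one more arm in this match; anything between two
    // arms is uninterruptible.
    fn resume(&mut self) -> Step<&'static str> {
        match self.state {
            0 => {
                self.state = 1;
                Step::Yielded // first await point
            }
            1 => {
                self.state = 2;
                Step::Yielded // second await point
            }
            _ => Step::Complete("I'm the last one"),
        }
    }
}

fn main() {
    let mut task = RunTask::new();
    // Two suspensions, then completion — and nothing in between.
    assert!(matches!(task.resume(), Step::Yielded));
    assert!(matches!(task.resume(), Step::Yielded));
    assert!(matches!(task.resume(), Step::Complete("I'm the last one")));
}
```

A fiber/green thread, by contrast, carries its own stack, so a scheduler can suspend it wherever it happens to be; the trade-off is the cost of allocating and switching those stacks.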