Scheduling – How Programming Languages Model Asynchronous Program Flow

The OS can schedule tasks differently than you might expect, and every time you yield to the OS, your thread is placed in the same queue as all the other threads and processes on the system.

Moreover, there is no guarantee that the thread will resume execution on the same CPU core where it left off, or that two tasks won't run in parallel and try to access the same data. You therefore need to synchronize data access to prevent data races and the other pitfalls that come with multicore programming.

Rust as a language will help you prevent many of these pitfalls, but synchronizing data access will require extra work and will add to the complexity of such programs. We often say that using OS threads to handle concurrency gives us parallelism for free, but it isn't free in terms of the added complexity and the need to synchronize data access properly.
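To make that cost concrete, here is a minimal sketch (standard library only) of the kind of synchronization Rust forces on us when several OS threads mutate the same value. The thread count and loop count are arbitrary illustration values:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared state must be wrapped in a synchronization primitive;
    // Rust refuses to compile the unsynchronized version.
    let counter = Arc::new(Mutex::new(0));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    // The lock guarantees exclusive access, preventing a data race.
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    // All four threads ran in parallel, yet the result is deterministic.
    println!("total: {}", counter.lock().unwrap()); // prints 4000
}
```

The Arc/Mutex pair is exactly the extra work referred to above: the parallelism comes for free, but every shared access now pays for a lock.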

The advantage of decoupling asynchronous operations from OS threads

Decoupling asynchronous operations from the concept of threads has a lot of benefits.

First of all, using OS threads as a means to handle concurrency requires us to use what is essentially an OS abstraction to represent our tasks.

Having a separate layer of abstraction to represent concurrent tasks gives us the freedom to choose how we want to handle concurrent operations. If we create an abstraction over concurrent operations, such as a future in Rust, a promise in JavaScript, or a goroutine in Go, it is up to the runtime implementor to decide how these concurrent tasks are handled.

A runtime could simply map each concurrent operation to an OS thread, or it could use fibers/green threads or state machines to represent the tasks. The programmer who writes the asynchronous code will not necessarily have to change anything in their code if the underlying implementation changes. In theory, the same asynchronous code could be used to handle concurrent operations on a microcontroller without an OS, as long as there's a runtime for it.
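To illustrate, the sketch below hand-rolls the smallest possible "runtime" for a Rust future: a busy-polling block_on function. The noop_waker and block_on helpers are illustrative inventions for this sketch, not part of any real runtime, but they show that an async fn compiles to a state machine that any runtime strategy could drive:

```rust
use std::future::Future;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker that does nothing -- enough for a busy-polling toy executor.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// A toy "runtime": drives any future to completion by polling it in a loop.
// A real runtime would park until woken instead of spinning.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(output) = fut.as_mut().poll(&mut cx) {
            return output;
        }
    }
}

async fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    // Nothing in `add` dictates whether it's driven by OS threads,
    // green threads, or an event loop -- that's the runtime's choice.
    println!("{}", block_on(add(1, 2)));
}
```

Swap this toy block_on for a production executor's (Tokio's, for instance) and the async fn itself doesn't change.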

To sum it up, using threads provided by the operating system to handle concurrency has the following advantages:
• Simple to understand
• Easy to use
• Switching between tasks is reasonably fast
• You get parallelism for free
However, they also have a few drawbacks:
• OS-level threads come with a rather large stack (Rust's standard library, for example, defaults to 2 MiB for spawned threads). If you have many tasks waiting simultaneously (as you would in a web server under heavy load), you'll run out of memory pretty fast (see the sketch after this list).
• Context switching can be costly, and you might get unpredictable performance since you let the OS do all the scheduling.
• The OS has many things it needs to handle. It might not switch back to your thread as fast as you’d wish.
• It is tightly coupled to an OS abstraction, which might not be available on all systems.
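As a rough illustration of the stack-size drawback, std::thread::Builder lets us opt out of the default stack size; the 64 KiB figure below is an arbitrary choice for this sketch:

```rust
use std::thread;

fn main() {
    // Rust's default stack for a spawned thread is 2 MiB; with thousands
    // of mostly-idle tasks, that memory adds up quickly.
    let handle = thread::Builder::new()
        .stack_size(64 * 1024) // request a 64 KiB stack instead
        .spawn(|| {
            println!("running with a small stack");
        })
        .expect("failed to spawn thread");

    handle.join().unwrap();
}
```

Shrinking stacks only goes so far, though: each task is still an OS thread, with all the scheduling and coupling drawbacks listed above.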