
Bastian Gruber


Explained: How does async work in Rust?

This article gives an overview of why async exists in Rust and how it works. The differences between concurrency, parallelism and asynchronous code won't be covered.

Async Concept

Asynchronous programming is a concept which allows a program to keep working instead of blocking while it waits for the results of certain actions. So you can start opening a large file or querying a database, and your program will continue executing the following lines in the meantime.

This concept was first needed on the kernel level, because you want to be able to listen to music while you type something on your keyboard. On a software level, this was achieved through multi-threading. On the CPU side, each core switches between processes quickly enough that they appear to run at the same time.

Later on, web servers came into play and needed to be able to hold millions of connections while performing I/O tasks. To be able to do this in a non-blocking way, we can either use threads on the kernel level, or implement our own way of handling threads and events.

What's needed and why

The kernel already has this concept implemented (through threads and other mechanisms), however threads are quite "expensive": there is only a finite amount of resources available, and dealing with this problem on the OS level adds a whole new level of complexity.

Therefore it would be nice to handle our internal async flow on the program level. We need a so-called runtime, which can handle async code and communicate with the kernel.

The general idea is:

  • Implement your own way of handling threads and queues on program level (green threads)
  • Add syntactic sugar to your language so the runtime/compiler can identify async parts of the code
  • Add async types so they can notify when they are "done"

Async overview

Instead of dealing with plain values like Strings, an async type needs to carry a state (processing and done). The runtime can handle these types and set the state in them. In your code you can then access the value at a later point, or wait for them to be done before you continue.
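As a very rough sketch of the idea (this is not the real Future trait, just an illustration), such a type could look like the enum below; the futures 0.1 crate expresses the same two states as Async::NotReady and Async::Ready:

// Illustrative only: a hand-rolled "async value" with the two states
// described above (the real futures crate models this via poll()).
enum AsyncValue<T> {
    Processing,   // the I/O is still running
    Done(T),      // the runtime has filled in the result
}

impl<T> AsyncValue<T> {
    // The runtime calls something like this once the kernel reports
    // that the underlying I/O has finished.
    fn complete(&mut self, value: T) {
        *self = AsyncValue::Done(value);
    }

    // Your code can check whether the result is available yet.
    fn try_take(self) -> Option<T> {
        match self {
            AsyncValue::Done(value) => Some(value),
            AsyncValue::Processing => None,
        }
    }
}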

Workflow

You mark a method in your code as async; inside this async method you can now use your async types. You can either wait for them to finish ("fetch data from GitHub...") or "start" them, continue with your flow, and later on check whether they have finished and use their values.

Once you are done writing the code, you need a runtime which can take this async part of your code and actually run it. The runtime also needs to take tasks from the queue and hand them over to the operating system, since that is where the real work happens.

After the operating system is done with the processing, it will notify the runtime, which in turn will set the state inside the async type and hand it back to the program workflow.
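As a purely illustrative, self-contained sketch of that loop (no real kernel calls; the "operating system" is simulated with a thread and a channel), it could look something like this:

use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn main() {
    // The "operating system" is simulated by a worker thread that finishes
    // some I/O and reports back over a channel. A real runtime would be
    // notified through epoll/kqueue/IOCP instead.
    let (kernel_tx, runtime_rx) = mpsc::channel();

    thread::spawn(move || {
        thread::sleep(Duration::from_millis(100)); // pretend this is disk/network I/O
        kernel_tx.send("contents of a large file").unwrap();
    });

    // The runtime's loop: keep doing other work until the "kernel" notifies us,
    // then mark the async value as done and hand it back to the program.
    loop {
        match runtime_rx.try_recv() {
            Ok(value) => {
                println!("async value is ready: {}", value);
                break;
            }
            Err(_) => {
                println!("not ready yet, doing other work in the meantime...");
                thread::sleep(Duration::from_millis(20));
            }
        }
    }
}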

NodeJS vs. Go vs. Rust

Let's look at how Node, Go and Rust implement the concepts we talked about, namely: Syntax, Type and Runtime.

NodeJS

In NodeJS you have the async/await syntax and Promises. You can await a Promise, i.e. an action which might need more time to complete.

NodeJS async

const async_method = async () => {
    const dbResults = await dbQuery();
    const results = await serviceCall(dbResults);
    console.log(results);
}

Go

In Go, you start goroutines instead of Promises. And instead of async/await you simply write go method_name(). Instead of relying on V8, Go ships with its own runtime.

Go async

func f(greeting string) {
    fmt.Println(greeting + ", World!")
}

// somewhere in main:
go f("Hello")


Rust

The Rust async ecosystem is still in progress and not final yet. The proposal here is to also use async/await; instead of Promises and goroutines, you have Futures.

The Rust Language Team decided not to include any runtime. Rust wants to be as small as possible and to be able to swap parts in and out as needed. Therefore you need to rely on crates to provide the appropriate runtime for you.

The most popular one is tokio, which uses mio internally as its event queue. Other runtimes use mio as well, since it provides an abstraction over kernel methods like epoll, kqueue and IOCP.

Rust async

One special feature of Rust is that you have to "start" a Future. Just declaring it, like a Promise in NodeJS, or writing go name_of_goroutine(), doesn't trigger the Future to do anything yet. So in case you are using tokio, you need to:

// `client` is assumed to be an async HTTP client whose get() returns a
// futures 0.1 Future; nothing is executed yet at this point.
let response = client.get("http://httpbin.org");

let response_is_ok = response
    .and_then(|resp| {
        println!("Status: {}", resp.status());
        Ok(())
    })
    // tokio::run expects a Future<Item = (), Error = ()>, so drop the error.
    .map_err(|_| ());

// Only now does the Future actually run.
tokio::run(response_is_ok);

In the hopefully not-so-distant future, you will be able to use async in Rust like this:

#[async]
fn async_function_name(...) -> Result<ReturnType, ErrorType> {
    let db_results = await!(query_database());
    let more_data = await!(fetch_another_service(db_results));
    process(more_data)
}

The async/await syntax is still in progress; it needs to be approved and merged, and parts of the language adjusted to the new form.
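For comparison, one of the forms under discussion replaces the macros with keywords. A sketch, reusing the same hypothetical functions as in the snippet above:

async fn async_function_name() -> Result<ReturnType, ErrorType> {
    let db_results = query_database().await;
    let more_data = fetch_another_service(db_results).await;
    process(more_data)
}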

Rust Async in Detail

Let's zoom in a bit on how a runtime works, or can work:

Rust tokio async

Tokio internally uses the Reactor-Executor pattern.

What tokio and other runtimes want to achieve is a highly scalable server with high raw data throughput. They don't want to block when doing I/O operations. We basically have two options here: a thread-based or an event-driven architecture. To make it short: thread-based is limiting because physical resources are limited (each connection would tie up a thread).

So event-driven is the best fit in our case. The runtime registers incoming Future requests and saves a pointer to the async function handler. It then registers an event with the kernel. Once the I/O operation is done, the pointer is called and the async method is executed with the results from the I/O (kernel).

For this, we need a reactor, which notifies us when data arrives over the network or a file operation makes progress, and an executor, which takes this data and executes the async function (Future) with it.

In addition, each runtime needs to understand kernel methods (like epoll) for starting I/O operations. For Rust there is a crate called mio which wraps these kernel methods. Tokio uses mio internally.
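To get a feeling for the reactor side, here is a rough sketch written against the mio 0.6 API (details may differ between versions): we register interest in a socket, then block on poll, which is backed by epoll/kqueue/IOCP under the hood.

extern crate mio;

use mio::{Events, Poll, PollOpt, Ready, Token};
use mio::net::TcpListener;

fn main() {
    // Register interest in "readable" events on a listening socket.
    let addr = "127.0.0.1:8080".parse().unwrap();
    let listener = TcpListener::bind(&addr).unwrap();

    let poll = Poll::new().unwrap();
    poll.register(&listener, Token(0), Ready::readable(), PollOpt::edge())
        .unwrap();

    let mut events = Events::with_capacity(1024);
    loop {
        // Blocks until the kernel (epoll/kqueue/IOCP) reports activity.
        poll.poll(&mut events, None).unwrap();

        for event in events.iter() {
            if event.token() == Token(0) {
                // A connection is ready; an executor would now poll the
                // Future that was waiting on this socket.
                let (_stream, _peer) = listener.accept().unwrap();
                println!("accepted a connection without blocking the loop");
            }
        }
    }
}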

Is it usable?

There is a lot happening at the moment in the async Rust world. It will take a bit of time until a final version is out there which is easy to use and understand. Until then you can use your web frameworks like you are used to, since they already ship with a runtime.

Dropbox, for example, is using Futures in combination with tokio in production to serve data from disk on Dropbox's servers. The futures crate is available in version 0.1, which works on stable Rust, and in version 0.3, which requires nightly. The tokio runtime relies on stable Rust, so it uses futures 0.1.

You can transform 0.3 Futures into 0.1 Futures and vice versa via the compat module.
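A sketch of the 0.1-to-0.3 direction (assuming the futures 0.3 preview crate with its "compat" feature enabled, and the 0.1 crate imported under the name futures01; exact names may shift while 0.3 is still in alpha). The reverse direction works via TryFutureExt::compat():

// Sketch only: bridging a futures 0.1 future into the 0.3 world.
use futures::compat::Future01CompatExt;

fn main() {
    // A plain futures 0.1 future that resolves immediately to Ok(1)...
    let old_style = futures01::future::ok::<u32, ()>(1);
    // ...wrapped so it can be used with futures 0.3 combinators (and later await).
    let _new_style = old_style.compat();
}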

Rust needs a few more months to get its easy-to-use and powerful Futures ready. They are less expensive than in other languages, and you can have a thin or a thick runtime; it's totally up to you.

Get started

As mentioned, tokio is one of the runtimes you can use. Another one is a combination of Romio and Juliex.

If you are building web applications, there is a crate called hyper, which already includes tokio. So here you can use Futures 0.1 in your application.
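As a sketch of what that looks like with hyper 0.12 (which pulls in tokio and futures 0.1 for you; exact API details may differ between versions):

extern crate hyper;

use hyper::Client;
use hyper::rt::{self, Future};

fn main() {
    // Build the GET request as a futures 0.1 Future...
    let client = Client::new();
    let uri = "http://httpbin.org/ip".parse().unwrap();

    let fut = client
        .get(uri)
        .map(|res| println!("Status: {}", res.status()))
        .map_err(|err| eprintln!("Request error: {}", err));

    // ...and hand it to the runtime that hyper ships with (tokio under the hood).
    rt::run(fut);
}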

Keep up to date

You can check out the website areweasyncyet to follow the progress on async Rust. Similarly, arewewebyet is tracking the progress of frameworks and tools around building web applications.

Further reading
  1. What Are Tokio and Async IO All About?
  2. Futures in Rust and Haskell
  3. Fast async in NodeJS
  4. Go by example: Goroutines
  5. Tokio: Building a runtime

Top comments (8)

rhymes • Edited

Hi Bastian! Nice overview of Rust's async support!

I think there's something a bit unintentionally misleading in how you worded the following part:

The kernel already has the concept implemented (through threads and other concepts), however they are quite "expensive", which means there is just a finite amount of resources available and dealing with this problem on OS level adds a whole new level of complexity.

Kernels do have async IO implemented, so it's not "expensive" (I guess you're referring to the cost of threading here). You do talk about these kernel syscalls afterwards when you mention mio.

Bastian Gruber

Thank you for the feedback! I'll update the article accordingly in the near future!

Dmitry Tantsur

Hi Bastian,

I've learned the hard way that this is not quite right:

tokio::run(response_is_ok);

Depending on what happens inside client, this can hang forever. This actually does happen with reqwest::async, since Hyper tends to spawn long-living futures, presumably for keep-alive connections in its pool.

What I had to use to make it reliable is

let mut rt = Runtime::new().expect("Cannot create a runtime");
rt.block_on(response_is_ok).expect("Failed");

As a nice side effect you get the outcome of the future as a Result, you don't have to force it to be Future<Item = (), Error = ()>.

Bastian Gruber

That's super helpful, thank you so much Dmitry!

Michal Podhradsky

Thanks for a nice article - just one note on the async concept: where you say "On a software level, this was achieved through multi-threading", in Node.js it is handled by an event loop on a single thread rather than multi-threading. Anyway, it is great that tokio is going down the same path Node.js went - definitely looking forward to using Rust for scalable server stuff in the future.

Aron Heinecke

As said on gitter: I wish I'd had these nice drawings when I started using tokio & futures :)

Nick Taylor • Edited

Been enjoying your posts about Rust, Bastian. 👏 I haven't dug too much into it yet, but hopefully will at some point this year.

Bastian Gruber

Thanks for the feedback Nick! Always happy when I hit the right tone and depth, and the work reaches the people who are interested in it.