Why async Rust?

Async/await syntax in Rust was initially released to much fanfare and excitement. To quote Hacker News at the time:

This is going to open the flood gates. I am sure lot of people were just waiting for this moment for Rust adoption. I for one was definitely in this boat.

Also, this has all the goodness: open-source, high quality engineering, design in open, large contributors to a complex piece of software. Truly inspiring!

Recently, the reception has been a bit more mixed. To a quote a comment on Hacker News again, discussing a recent blog post on the subject:

I genuinely can’t understand how anybody could look at the mess that’s Rust’s async and think that it was a good design for a language that already had the reputation of being very complicated to write.

I tried to get it, I really did, but my god what a massive mess that is. And it contaminates everything it touches, too. I really love Rust and I do most of my coding in it these days, but every time I encounter async-heavy Rust code my jaw clenches and my vision blurs.

Of course, neither of these comments are completely representative: even four years ago, some people had pointed concerns. And in the same thread as this comment about jaws clenching and vision blurring, there were many people defending async Rust with equal fervor. But I don’t think I would be out of pocket to say that the nay-sayers have grown more numerous and their tone more strident as time has gone on. To some extent this is just the natural progression of the hype cycle, but I also think as we have become more distant from the original design process, some of the context has been lost.

Between 2017 and 2019, I drove the design of async/await syntax, in collaboration with others and building on the work of those who came before me. Forgive me if I am a bit put off when someone says that they don’t know how anyone could look at that “mess” and “think that it was a good design,” and please indulge me in this imperfectly organized and overly long explanation of how async Rust came to exist, what its purpose was, and why, in my opinion, for Rust there was no viable alternative. I hope that along the way I might shed more light on the design of Rust in a broader and deeper sense, at least slightly, and not merely regurgitate the justifications of the past.

Some background on terminology

The basic issue at stake in this debate is Rust’s decision to use a “stackless coroutine” approach to implementing user-space concurrency. A lot of terms are thrown around in this discussion and it’s reasonable not to be familiar with all of them.

The first concept we need to get straight is the very purpose of the feature: “user-space concurrency.” The major operating systems present a set of fairly similar interfaces to achieve concurrency: you can spawn threads, and perform IO on those threads using syscalls, which block that thread until they complete. The problem with these interfaces is that they involve certain overheads that can become a limiting factor when you want to achieve certain performance targets. These are two-fold:

  1. Context-switching between the kernel and userspace is expensive in terms of CPU cycles.
  2. OS threads have a large pre-allocated stack, which increases per-thread memory overhead.

These limitations are fine up to a certain point, but for massively concurrent programs they do not work. The solution is to use a non-blocking IO interface and schedule many concurrent operations on a single OS thread. This can be done by the programmer “by hand,” but modern languages frequently provide facilities to make this easier. Abstractly, languages have some way of dividing work into tasks and scheduling those tasks onto threads. Rust’s system for this is async/await.

The first axis of choice in this design space is between cooperative and preemptive scheduling. Must tasks “cooperatively” yield control back to the scheduling subsystem, or can they be “preemptively” stopped at some point while they’re running, without the task being aware of it?

A term that gets thrown around a lot in these discussions is coroutine, and it is used in somewhat contradictory ways. A coroutine is a function which can be paused and then later resumed. The big ambiguity is that some people use the term “coroutine” to mean a function which has explicit syntax for pausing and resuming it (this would correspond to a cooperatively scheduled task) and some people use it to mean any function that can pause, even if the pause is performed implicitly by a language runtime (this would also include a preemptively scheduled task). I prefer the first definition, because it introduces some manner of meaningful distinction.

Goroutines, on the other hand, are a Go language feature which enables concurrent, preemptively scheduled tasks. They have an API that is the same as a thread, but it is implemented as part of the language instead of as an operating system primitive, and in other languages they are often called virtual threads or else green threads. So by my definition, goroutines are not coroutines, but other people use the broader definition and say goroutines are a kind of coroutine. I’ll refer to this approach as green threads, because that’s been the terminology used in Rust.

The second axis of choice is between a stackful and a stackless coroutine. A stackful coroutine has a program stack in the same way that an OS thread has a program stack: as functions are called as part of the coroutine, their frames are pushed on the stack; when the coroutine yields, the state of the stack is saved so that it can be resumed from the same position. A stackless coroutine on the other hand stores the state it needs to resume in a different way, such as in a continuation or in a state machine. When it yields, the stack it was using is used by the operation that took over from it, and when it resumes it takes back control of the stack and that continuation or state machine is used to resume the coroutine where it left off.

One issue that is often brought up with async/await (in Rust and other languages) is the “function coloring problem” - a complaint that in order to get the result of an async function, you need to use a different operation (such as awaiting it) rather than call it normally. Both green threads and stackful coroutine mechanisms can avoid this outcome, because it is the stackless approach that requires special syntax: the syntax marks the points at which something special must happen to manage the stackless state of the coroutine (what specifically depends on the language).
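The “color” is concretely visible in the types. A minimal sketch (function names are illustrative):

```rust
// An async function: calling it does not run the body, it produces an
// unexecuted state machine implementing Future<Output = usize>.
async fn fetch_len(s: &str) -> usize {
    s.len()
}

// Only type-checks if its argument implements Future.
fn is_future<F: std::future::Future>(_: &F) -> bool {
    true
}

fn main() {
    let fut = fetch_len("hello");
    // `fut` cannot be used as a usize here. To get the result out you need a
    // different operation than an ordinary call: `fut.await` inside another
    // async context, or handing it to an executor. That is the "color."
    assert!(is_future(&fut));
}
```

Green thread and stackful designs avoid this split because a blocking operation hides behind an ordinary function call.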

Rust’s async/await syntax is an example of a stackless coroutine mechanism: an async function is compiled to a function which returns a Future, and that future is what is used to store the state of the coroutine when it yields control. The basic question at hand in this debate is whether Rust was correct to adopt this approach, or if it should have adopted a more Go-like “stackful” or “green thread” approach, ideally without explicit syntax that “colors” functions.

The development of async Rust

Green threads

A third Hacker News comment represents well the kind of remark that I often see in this debate:

The alternative concurrency model people want is structured concurrency via stackful coroutines and channels on top of a work stealing executor.

Until someone does the work to demo that and compare it to async/await with futures I don’t think there’s any productive discussion to be had.

Setting aside the references to structured concurrency, channels and a work stealing executor (completely orthogonal concerns), the bewildering thing about comments like this is that originally Rust did have a stackful coroutine mechanism, in the form of green threads. It was removed in late 2014, shortly before the 1.0 release. Understanding the reason why will help us get to the bottom of why Rust shipped async/await syntax.

A big issue for any green threading system - Rust’s or Go’s or any other language’s - is what to do about the program stack for these threads. Remember that one of the goals of a user-space concurrency mechanism is to reduce the memory overhead of the large, pre-allocated stack used by OS threads. Therefore, green thread libraries tend to try to adopt a mechanism to spawn threads with smaller stacks, and grow them only as needed.

One way to achieve this is so-called “segmented stacks,” in which the stack is a linked list of small stack segments; when the stack grows beyond the bound of its segment, a new segment is added to the list, and when it shrinks, that segment is removed. The problem with this technique is that it introduces a high variability in the cost of pushing a stack frame onto the stack. If the frame fits in the current segment, this is basically free. If it doesn’t, it requires allocating a new segment. A particularly pernicious version of this is when a function call in a hot loop requires allocating a new segment. This adds an allocation and deallocation to every iteration of that loop, having a significant impact on performance. And it is entirely opaque to users, because users don’t know how deep the stack will be when a function is called. Both Rust and Go started with segmented stacks, and then abandoned this approach for these reasons.

Another approach is called “stack copying.” In this case, a stack is more like a Vec than a linked list: when the stack hits its limit, it is reallocated larger so that the limit isn’t hit. This allows stacks to start small and grow as needed, without the downsides of segmented stacks. The problem with this is that reallocating the stack means copying it, which means the stack will now be at a new location in memory. Any pointers into the stack are now invalid, and there needs to be some mechanism for updating them.

Go uses stack copying, and benefits from the fact that in Go pointers into a stack can only exist in the same stack, so it just needs to scan that stack to rewrite pointers. Even this requires runtime type information which Rust doesn’t keep, but Rust also allows pointers into a stack that aren’t stored inside that stack - they could be somewhere in the heap, or in the stack of another thread. The problem of tracking these pointers is ultimately the same as the problem of garbage collection, except that instead of freeing memory it is moving it. Because Rust does not have a garbage collector, in the end it could not adopt stack copying. Instead, Rust solved the problem of segmented stacks by making its green threads large, just like OS threads. But this eliminated one of the key advantages of green threads.

Even in a situation like Go, which can have resizing stacks, green threads carry certain unavoidable costs when trying to integrate with libraries written in other languages. The C ABI, with its OS stack, is the shared minimum of every language. Switching code from executing on a green thread to running on the OS thread stack can be prohibitively expensive for FFI. Go just accepts this FFI cost; C# recently aborted an experiment with green threads for this reason.

This was especially problematic for Rust, because Rust is designed to support use cases like embedding a Rust library into a binary written in another language, and to run on embedded systems that don’t have the clock cycles or memory to operate a virtual threading runtime. To attempt to resolve this problem, the green threading runtime was made optional, and Rust could instead be compiled to run on native threads, using blocking IO. This was designed to be a compile time decision made by the final binary. Thus, for a time, there were two varieties of Rust, one of which used blocking IO and native threads, and one of which used non-blocking IO and green threads, and all code was intended to be compatible with both varieties. This did not play out well, and green threads were removed from Rust as a result of RFC 230, which enumerated the reasons:

  • The abstraction over green and native threads was not “zero-cost,” and resulted in unavoidable virtual calls and allocations when performing IO, which was not acceptable especially to native code.
  • It forced native threads and green threads to support identical APIs, even when that didn’t make sense.
  • It was not fully interoperable, because it was still possible to invoke native IO through FFI, even on a green thread.

Once green threads were removed, the problem of high performance user space concurrency remained. The Future trait and later the async/await syntax were developed to resolve that problem. But to understand that path, we need to take one further step back and look at Rust’s solution to a different problem.

Iterators

I contend the true beginning of the journey to async Rust is to be found in an old mailing list post from 2013 by a former contributor named Daniel Micay. This post has nothing to do with async/await or futures or non-blocking IO: it was a post about iterators. Micay proposed shifting Rust to use what were called “external” iterators, and it was this shift - and its effectiveness in combination with Rust’s ownership and borrowing model - that set Rust inexorably on the course toward async/await. No one knew that at the time, obviously.

Rust had always prohibited mutating state through a binding that was aliased with another variable - this edict “mutable XOR aliased” was as central to early Rust as it is today. But initially it enforced it with different mechanisms, not with lifetime analysis. At the time, references were just “argument modifiers,” similar in concept to things like the “inout” modifier from Swift. In 2012, Niko Matsakis had proposed and implemented the first version of Rust’s lifetime analysis, promoting references to real types and enabling them to be embedded into structs.

Though the shift to lifetime analysis has been rightly recognized for its enormous impact in making Rust what it is today, its symbiotic interaction with external iterators, and the fundamental importance of that API to settling Rust into its current niche, has not received enough attention. Before the adoption of “external” iterators, Rust used a kind of callback based approach to define iterators, something that in modern Rust would look like this:

enum ControlFlow {
    Break,
    Continue,
}

trait Iterator {
    type Item;

    fn iterate(self, f: impl FnMut(Self::Item) -> ControlFlow) -> ControlFlow;
}

Iterators defined this way call their callback on each element of the collection, unless it returns ControlFlow::Break, in which case they are meant to stop iterating. The body of a for loop was compiled to a closure that was passed to the iterator being looped over. Such iterators were much easier to write than external iterators, but there are two key problems with this approach:

  1. The language couldn’t guarantee that iteration actually stops running when the loop says to break, so you couldn’t rely on that for memory safety. This meant that things like returning references from a loop were not possible, because the loop could actually continue.
  2. They couldn’t be used to implement generic combinators that interleave multiple iterators, like zip, because the API doesn’t support alternately advancing one iterator and then another.
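Translated into modern Rust, the internal style might look like this sketch (names are illustrative, not the historical API):

```rust
// A sketch of the pre-2013 "internal" iterator style in modern Rust.
enum Flow {
    Break,
    Continue,
}

trait InternalIterator {
    type Item;
    // The iterator drives the loop itself, calling `f` once per element;
    // it is merely *supposed* to stop when `f` returns Flow::Break.
    fn iterate(self, f: impl FnMut(Self::Item) -> Flow) -> Flow;
}

struct SliceIter<'a, T>(&'a [T]);

impl<'a, T> InternalIterator for SliceIter<'a, T> {
    type Item = &'a T;
    fn iterate(self, mut f: impl FnMut(&'a T) -> Flow) -> Flow {
        for item in self.0 {
            if let Flow::Break = f(item) {
                return Flow::Break; // nothing forces an impl to honor this
            }
        }
        Flow::Continue
    }
}

// "for x in data { if x > 3 { break } sum += x }" compiled to roughly:
fn sum_until_gt3(data: &[i32]) -> i32 {
    let mut sum = 0;
    SliceIter(data).iterate(|&x| {
        if x > 3 {
            return Flow::Break;
        }
        sum += x;
        Flow::Continue
    });
    sum
}

fn main() {
    assert_eq!(sum_until_gt3(&[1, 2, 3, 4, 5]), 6);
    // There is no way to write `zip` in this style: once `iterate` is
    // called, the iterator runs the whole loop; a caller cannot advance
    // two iterators in lockstep, one element at a time.
}
```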

Instead, Daniel Micay proposed to shift Rust to use “external” iterators, which completely resolve these problems and have the interface Rust users are used to today:

trait Iterator {
    type Item;

    fn next(&mut self) -> Option<Self::Item>;
}

(Very well-informed readers will be aware that Rust’s Iterator has a provided method called try_fold which is functionally very similar to the internal iterator API and is used in the definition of some other iterator combinators because it can result in better code generation. But it isn’t the key underlying method by which all iterators are defined.)

External iterators integrated perfectly with Rust’s ownership and borrowing system because they essentially compile to a struct which holds the state of iteration inside of itself, and which can therefore contain references to data structures being iterated over just like any other struct. And thanks to monomorphization, a complex iterator built by assembling multiple combinators also compiled into a single struct, making it transparent to the optimizer. The only problem was that they were harder to write by hand, because you need to define the state machine that will be used for iteration. Foreshadowing future developments, Daniel Micay wrote at the time:
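To make the “harder to write by hand” point concrete, here is a hypothetical hand-written external iterator; the struct is exactly the iteration state, including an ordinary borrow of the data:

```rust
// A hand-written external iterator over overlapping pairs of a slice.
struct Windows2<'a, T> {
    slice: &'a [T], // borrowed data, tracked by a normal lifetime
    pos: usize,     // explicit state that `next` advances
}

impl<'a, T> Iterator for Windows2<'a, T> {
    type Item = (&'a T, &'a T);

    fn next(&mut self) -> Option<Self::Item> {
        if self.pos + 1 < self.slice.len() {
            let pair = (&self.slice[self.pos], &self.slice[self.pos + 1]);
            self.pos += 1;
            Some(pair)
        } else {
            None
        }
    }
}

// Combinators like `map` wrap the struct in another struct; the whole
// pipeline monomorphizes into one state machine the optimizer can see.
fn pair_sums(data: &[i32]) -> Vec<i32> {
    Windows2 { slice: data, pos: 0 }.map(|(a, b)| a + b).collect()
}

fn main() {
    assert_eq!(pair_sums(&[1, 2, 3, 4]), vec![3, 5, 7]);
}
```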

In the future, Rust can have generators using a yield statement like C#, compiling down to a fast state machine without requiring context switches, virtual functions or even closures. This would eliminate the difficulty of coding recursive traversals by-hand with external iterators.

Progress on generators has not moved swiftly, though an exciting RFC was recently published that would suggest we may see this feature soon.

Even without generators, external iterators proved to be a great success, and the general value of the technique was recognized. For example, Aria Beingessner used a similar approach in the “Entry API” for accessing map entries. Tellingly, in the RFC for the API, she refers to it as “iterator-like.” What she means by this is that the API builds a state machine via a series of combinators, which presented itself to the compiler as highly legible and thus optimizable. This technique had legs.

Futures

When they needed to replace green threads, Aaron Turon and Alex Crichton began by copying the API used in many other languages, which has come to be called futures or promises. APIs like this are based on what is called a “continuation passing style.” A future defined in this way takes a callback as an additional argument, called the continuation, and calls the continuation as its final operation when the future completes. This is how this abstraction is defined in most languages, and the async/await syntax of most languages is compiled into this sort of continuation passing style.

In Rust, that sort of API would have looked something like this:

trait Future {
    type Output;

    fn schedule(self, continuation: impl FnOnce(Self::Output));
}

Aaron Turon and Alex Crichton tried this approach, but as Aaron Turon wrote in an enlightening blog post, they quickly ran into the problem that using a continuation passing style too often required allocating the callback. Turon gives the example of join: join takes two futures, and runs them both concurrently. The continuation of join needs to be owned by both child futures, because whichever of them finishes last needs to execute it. This ended up requiring reference counting and allocations to implement, which wasn’t considered acceptable for Rust.
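A sketch of why join forces allocation under continuation passing style (illustrative names, not the actual 2016 API): the continuation must be reachable from whichever child finishes last, so it ends up behind reference counting:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A continuation-passing future, as in the trait shown above.
trait CpsFuture {
    type Output;
    fn schedule(self, continuation: impl FnOnce(Self::Output) + 'static);
}

// The simplest future: completes immediately with a value.
struct Ready<T>(T);

impl<T: 'static> CpsFuture for Ready<T> {
    type Output = T;
    fn schedule(self, continuation: impl FnOnce(T) + 'static) {
        continuation(self.0)
    }
}

struct JoinState<A: 'static, B: 'static> {
    a: Option<A>,
    b: Option<B>,
    k: Option<Box<dyn FnOnce(A, B)>>,
}

type Shared<A, B> = Rc<RefCell<JoinState<A, B>>>;

fn finish<A: 'static, B: 'static>(s: &Shared<A, B>) {
    let mut st = s.borrow_mut();
    if st.a.is_some() && st.b.is_some() {
        let (a, b) = (st.a.take().unwrap(), st.b.take().unwrap());
        let k = st.k.take().unwrap();
        drop(st); // release the borrow before running user code
        k(a, b);
    }
}

fn join<A: CpsFuture, B: CpsFuture>(
    a: A,
    b: B,
    k: impl FnOnce(A::Output, B::Output) + 'static,
) where
    A::Output: 'static,
    B::Output: 'static,
{
    // The shared slot and the boxed continuation are exactly the
    // allocations that Rust's futures design wanted to avoid.
    let shared: Shared<A::Output, B::Output> = Rc::new(RefCell::new(JoinState {
        a: None,
        b: None,
        k: Some(Box::new(k)),
    }));
    let s2 = shared.clone();
    a.schedule(move |v| {
        shared.borrow_mut().a = Some(v);
        finish(&shared);
    });
    b.schedule(move |v| {
        s2.borrow_mut().b = Some(v);
        finish(&s2);
    });
}

fn join_sum(x: i32, y: i32) -> i32 {
    let result = Rc::new(RefCell::new(0));
    let r = result.clone();
    join(Ready(x), Ready(y), move |a, b| *r.borrow_mut() = a + b);
    let v = *result.borrow();
    v
}

fn main() {
    assert_eq!(join_sum(2, 3), 5);
}
```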

Instead, they examined how C programmers tend to implement async programming: in C, programmers handle non-blocking IO by building a state machine. What they wanted was a definition of Future that could be compiled into the sort of state machine that C programmers would write by hand. After some experimentation, they landed on what they called a “readiness-based” approach:

enum Poll<T> {
    Ready(T),
    Pending,
}

trait Future {
    type Output;

    fn poll(&mut self) -> Poll<Self::Output>;
}

Instead of storing a continuation, a future is polled by some external executor. When a future is pending, it stores a way to wake that executor, which it will execute when it is ready to be polled again. By inverting control in this way, they no longer needed to store a callback for when a future completes, which allowed them to represent a future as a single state machine. They built a library of combinators on top of this interface, that all would be compiled into a single state machine.
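Hand-writing a future against this simplified trait makes the state-machine character visible. (The real std trait has since gained Pin and a Context parameter; this sketch keeps the reduced signature shown above.)

```rust
// The simplified readiness-based interface from the text.
enum Poll<T> {
    Ready(T),
    Pending,
}

trait Future {
    type Output;
    fn poll(&mut self) -> Poll<Self::Output>;
}

// Like the state machines C programmers write by hand: all the state the
// task needs between polls lives in the struct itself.
struct CountDown {
    remaining: u32,
}

impl Future for CountDown {
    type Output = &'static str;

    fn poll(&mut self) -> Poll<Self::Output> {
        if self.remaining == 0 {
            Poll::Ready("done")
        } else {
            self.remaining -= 1; // make some progress, then yield control
            Poll::Pending
        }
    }
}

// A trivial "executor": poll in a loop until the future reports readiness.
// (Real executors sleep until woken instead of spinning.)
fn run(mut fut: CountDown) -> (&'static str, u32) {
    let mut polls = 0;
    loop {
        polls += 1;
        if let Poll::Ready(v) = fut.poll() {
            return (v, polls);
        }
    }
}

fn main() {
    assert_eq!(run(CountDown { remaining: 3 }), ("done", 4));
}
```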

Switching from a callback-based approach to an external driver, compiling a set of combinators into a single state machine, even the exact specification of these two APIs: all of this should sound very familiar if you read the previous section. The shift from continuations to polling is exactly the same shift that was performed with iterators in 2013! Once again, it was Rust’s ability to handle structs with lifetimes and therefore to handle stackless coroutines which borrow state from outside themselves that allowed it to optimally represent futures as state machines without violating memory safety. This pattern of building single-object state machines out of smaller components, whether applied to iterators or futures, is a key part of how Rust works. It falls out of the language almost naturally.

I’ll pause for a moment to highlight one difference between iterators and futures: combinators that interleave two iterators, like Zip, are not even possible with a callback-like approach, unless your language has some sort of native support for coroutines you’re building on top of. On the other hand, if you want to interleave two futures, like Join, the continuation based approach can support that: it just carries some runtime costs. This explains why external iterators are common in other languages, but Rust is unique in applying this transform to futures.

In its initial iteration, the futures library was designed with the principle that users would construct futures in much the same way that they constructed iterators: low-level library authors would use the Future trait, whereas users writing applications would use a set of combinators, provided by the futures library, to construct more complex futures out of simpler components. Unfortunately, users immediately faced frustrating compiler errors when they tried to follow this approach. The problem was that futures, when spawned, need to “escape” the surrounding context, and therefore can’t borrow state from that context: the task must own all of its state.

This was a problem for futures combinators, because often that state needs to be accessed in multiple combinators that form part of the chain of actions that make up the future. For example, it was common for users to call one “async” method on an object, and then another, which would be written like this:

foo.bar().and_then(|result| foo.baz(result))

The problem was that foo was borrowed both in the bar method and then in the closure passed to and_then. Essentially, what users wanted to do was store state “across an await point,” the await point being formed by the chaining of future combinators; this usually resulted in confounding and perplexing borrow-checker errors. The most accessible solution to this was to store that state in an Arc and Mutex, which is not zero-cost and more importantly was very unwieldy and awkward as your system grew in complexity. For example:

let foo = Arc::new(Mutex::new(foo));
foo.clone().lock().unwrap().bar()
   .and_then(move |result| foo.lock().unwrap().baz(result))

Despite the great benchmarks that futures had shown in the initial experimentation, the result of this limitation was that users weren’t able to use them to build complex systems. This is where I came into the story.

Async/await

In late 2017, it was clear that the futures ecosystem was failing to launch for reasons of bad user experience. It was always the end goal of the futures project to implement a so-called “stackless coroutine transform,” in which functions using the async and await syntax operators would be transformed into functions that evaluate to futures, sparing users from having to write futures by hand. Alex Crichton had developed a macro-based async/await implementation as a library, but it had gained almost no traction. Something needed to change.

One of the biggest problems with Alex Crichton’s macros was that they would produce an error if a user attempted to hold a reference to future state over an await point. This was really the same issue as the borrowing issue users encountered with futures combinators, appearing again in the new syntax. It was not possible for a future to hold a reference to its own state while awaiting, because this would need to be compiled into a self-referential struct, which Rust had no support for.
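Modern Rust accepts exactly this pattern. Here is a sketch (with a hypothetical one-shot `YieldNow` future and a toy no-op-waker executor, not any real runtime’s API) of a reference held across an await point, which compiles to a self-referential state machine:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A future that suspends exactly once, to force a genuine await point.
struct YieldNow(bool);

impl Future for YieldNow {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.0 {
            Poll::Ready(())
        } else {
            self.0 = true;
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}

// `borrowed` points into `data`, which lives inside the future's own state:
// holding it across the await point makes the compiled state machine
// self-referential. This is what the macro implementation had to reject,
// and what pinning later made sound.
async fn sum_after_yield(data: Vec<u32>) -> u32 {
    let borrowed: &Vec<u32> = &data;
    YieldNow(false).await;
    borrowed.iter().sum()
}

// A toy no-op waker and busy-polling executor, just enough to drive the
// future here; real executors park until woken.
fn noop_clone(_: *const ()) -> RawWaker {
    noop_raw_waker()
}
fn noop(_: *const ()) {}
static NOOP_VTABLE: RawWakerVTable = RawWakerVTable::new(noop_clone, noop, noop, noop);
fn noop_raw_waker() -> RawWaker {
    RawWaker::new(std::ptr::null(), &NOOP_VTABLE)
}

fn block_on<F: Future>(fut: F) -> F::Output {
    let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    // Pinning promises the future will never move again; only then may its
    // state safely contain pointers into itself.
    let mut fut = Box::pin(fut);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    assert_eq!(block_on(sum_after_yield(vec![1, 2, 3])), 6);
}
```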

It’s interesting to compare this to the problem of green threads. One way we’ve explained the compilation of futures to state machines is to say that the state machine is a “perfectly sized stack” - unlike the stack of a green thread, which must grow to accommodate the state of unknown size that any thread stack may have, a compiled future (implemented by hand, with combinators, or with an async function) is exactly as large as it needs to be. So we don’t have the problem of growing this stack at runtime.

However, this stack is represented as a struct, and it is always safe to move structs in Rust. This means that even though we don’t need to move a future around while it’s being executed, according to the rules of Rust we need to be able to. Thus, the problem of stack pointers that we encountered with green threads re-emerged in the new system. This time, though, we had the advantage that we didn’t need to be able to move the future, we just needed to express that the future was immovable.

The initial attempt to implement this was to try to define a new trait, called Move, which would be used to exclude coroutines from APIs which can move them. This ran into some backwards compatibility problems that I have previously documented. My thesis for async/await had three main points:

  1. We needed async/await syntax in the language so that users could build complex futures using coroutine-like functions.
  2. Async/await syntax needed to support compiling those functions to self-referential structs, so that users could use references in coroutines.
  3. This feature needed to ship as soon as humanly possible.

The combination of these three points led me to search for an alternative solution to the Move trait, one that could be implemented without any major disruptive change to the language.

My initial plan to achieve this result was much worse than what we ended up with. I proposed that we would just make the poll method unsafe, and include as an invariant that once you have started polling a future, you cannot move it again. This was simple, immediately implementable, and extremely brute force: it would have made every hand-written future unsafe, and imposed a difficult-to-verify requirement with no assistance from the compiler. It likely would have run aground on some soundness issue eventually, and it would certainly have been extremely controversial.

So it was wonderful that Eddy Burtescu made a few remarks that led me in the direction of a much better API, which would enable us to enforce the invariants required in a much more fine-grained way. This would eventually become the Pin type. The Pin type itself has been the source of a fair amount of consternation, but I think it was an undeniable improvement on the other options we were considering at the time, in that it was targeted, enforceable, and also shippable on time.

In retrospect, there are two categories of problems with the pinning approach:

  1. Backward compatibility: some interfaces that already existed (especially Iterator and Drop) should have supported immovable types for various reasons, and this has limited the options in developing the language further.
  2. Exposure to end users: our intention was that users writing “normal async Rust” would never have to deal with Pin. Mostly this has been true, but there are a few notable exceptions. Almost all of these would be fixable with some syntax improvements. The only one that’s really bad (and embarrassing to me personally) is that you need to pin a future trait object to await it. This was an unforced error that would now be a breaking change to fix.

The only other decisions to be made about async/await were syntactic, which I will not relitigate in this already overly long post.

Organizational considerations

My reason for exploring all this history is to demonstrate that a series of facts about Rust led us inevitably into a specific design space. The first was that Rust’s lack of runtime made green threads a non-viable solution, both because Rust needs to support embedding (both embedding into other applications and running on embedded systems) and because Rust cannot perform the memory management necessary for green threads. The second was that Rust has a natural capacity for expressing coroutines compiled to highly optimizable state machines while still being memory safe, which we exploit not only for futures but also for iterators.

But there is another side to this history: why did we pursue a runtime system for user-space concurrency? Why have futures and async/await at all? This argument usually takes one of two forms: on the one hand, you have people who are used to managing user-space concurrency “by hand,” using an interface like epoll directly; these people sometimes sneer at async/await syntax as “webcrap.” On the other hand, some people just say “you aren’t gonna need it,” and propose using simpler OS concurrency like threads and blocking IO.

People implementing highly performant network services in languages without facilities for user-space concurrency like C tend to implement them using a hand-written state machine. This is exactly what the Future abstraction was designed to compile into, but without having to write the state machine by hand: the whole point of the coroutine transform is to write imperative code “as if your function never yields,” but have the compiler generate the state transitions to suspend it when it would block. The benefits of this are not insignificant. A recent curl CVE was ultimately caused by a failure to recognize state that needed to be saved during a state transition. This kind of logic error is easy to make when implementing a state machine by hand.

The goal of shipping async/await syntax in Rust was to ship a feature which avoided those bugs while still having the same performance profile. Systems like this, most often written in C or C++, were considered well within our addressable audience, given the level of control we provide and the lack of memory management runtime.

In early 2018, the Rust project had committed to the idea of releasing a new “edition” that year, to fix some of the syntactic issues that had emerged with 1.0. It was also decided to use this edition as an opportunity to promote a narrative around Rust being ready for prime-time; the Mozilla team was mostly compiler hackers and type theorists, but we had some basic idea about marketing and recognized the edition as an opportunity to get eyeballs on the product. I proposed to Aaron Turon that we should focus on four basic user stories which seemed like growth opportunities for Rust. These were:

  • Embedded systems
  • WebAssembly
  • Command-line interfaces
  • Network services

This remark was the jumping off point for the creation of the “Domain Working Groups”, which were intended to be cross-functional groups focused on particular use “domains” (in contrast to the pre-existing “teams” controlling some technical or organizational bailiwick). The concept of working groups in the Rust project has morphed since then and mostly lost this sense, but I digress.

The work on async/await was pioneered by the “network services” working group, which eventually became known as simply the async working group (and still exists under this name today). However, we were also acutely aware that given its lack of runtime dependencies, async Rust could also be of great service in the other domains, especially embedded systems. We designed the feature with both of these use cases in mind.

It was clear, though usually left unsaid, that what Rust needed to succeed was industry adoption, so that it could continue to receive support once Mozilla stopped being willing to fund an experimental new language. And it was clear that the most likely path to short-term industry adoption was in network services, especially those with a performance profile that compelled them at the time to be written in C/C++. This use case fit the niche of Rust perfectly - these systems need high degrees of control to achieve their performance requirements but avoiding exploitable memory bugs is critical because they are exposed to the network.

The other advantage of network services was that this wing of the software industry has the flexibility and appetite to rapidly adopt a new technology like Rust. The other domains were - and are! - viable long term opportunities for Rust, but they were seen as not as quick to adopt new technology (embedded), depended on a new platform that had not yet seen widespread adoption itself (WebAssembly), or were not a particularly lucrative industrial application that could lead to funding for the language (CLIs). (Malicious, illiterate morons sometimes take this sentence out of context on websites like Hacker News to try to claim that Rust adopted async/await to appeal to JavaScript users rather than for technical reasons. This is such a total misreading of this post - which outlines in detail the technical reasons Rust adopted async/await - that someone claiming this can only be a liar or an idiot, and you should disregard anything they say to you. I no longer have the patience to maintain even the pretence of politeness to people acting in such bad faith.)

In that regard, async/await has been phenomenally successful. Many of the most prominent sponsors of the Rust Foundation, especially those who pay developers, depend on async/await to write high performance network services in Rust as one of their primary use cases that justify their funding. Using async/await for embedded systems or kernel programming is also a growing area of interest with a bright future. Async/await has been so successful that the most common complaint about it is that the ecosystem is too centered on it, rather than “normal” Rust.

I don’t know what to tell users who would rather just use threads and blocking IO. Certainly, I think there are a lot of systems for which that is a reasonable approach. And nothing in the Rust language prevents them from doing it. Their objection seems to be that the ecosystem on crates.io, especially for writing network services, is centered on using async/await. Occasionally, I see a library which uses async/await in a “cargo cult” way, but mostly it seems safe to assume that the author of the library actually wants to perform non-blocking IO and get the performance benefits of user-space concurrency.

None of us can control what everyone else decides to work on, and the fact of the matter is just that most people who release networking-related libraries on crates.io want to use async Rust, whether for business reasons or just out of interest. I’d like it to be easier to use those libraries in a non-async context (e.g. by bringing a pollster-like API into the standard library), but it’s hard to know what to say to people whose gripe is that the people putting code online for free don’t have exactly the same use case as them.
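For readers unfamiliar with what a pollster-like API means in practice: the core of such a crate is a small `block_on` function that drives a future to completion on the current thread, parking between polls. The sketch below is my own minimal illustration using only the standard library (the `Wake` trait has been in `std::task` since Rust 1.51); it is the essence of the technique, not the actual `pollster` source.

```rust
use std::future::Future;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

// A waker that unparks the blocked thread when the future
// is ready to make progress again.
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

// Drive a future to completion on the current thread,
// parking between polls. This is the essence of what a
// pollster-like crate provides.
fn block_on<F: Future>(future: F) -> F::Output {
    let mut future = Box::pin(future);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match future.as_mut().poll(&mut cx) {
            Poll::Ready(output) => return output,
            Poll::Pending => thread::park(),
        }
    }
}

fn main() {
    // An async block is just a future; block_on lets
    // non-async code run it to completion.
    let value = block_on(async { 21 * 2 });
    println!("{value}");
}
```

An adapter this small is why bringing such an API into the standard library seems plausible: it would let synchronous programs call into async libraries without taking on a full runtime.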

To be continued

Although I contend there was no alternative for Rust, I don’t believe async/await is the right design for every language. In particular, I think there is a possibility for a language which provides the same sort of reliability guarantees that Rust provides, but less control over the runtime representation of values, and which uses stackful coroutines instead of stackless ones. I even think - if such a language supported such coroutines in such a way that they could be used for both iteration and concurrency - that language could do without lifetimes entirely while still eliminating errors that arise from aliased mutability. If you read Graydon Hoare’s notes, you can see that this language is what he was originally driving at, before Rust changed course to be a systems language that could compete with C and C++.

I think there are users of Rust who would be perfectly happy using this language if it existed, and I understand why they dislike that they have to deal with the inherent complexity of the low level details. It used to be that these users complained about the myriad string types; now they are more likely to complain about async. I wish that a language for this use case with the same kinds of guarantees as Rust also existed, but the problem here isn’t with Rust.

And despite the fact that I believe async/await is the right approach for Rust, I also think it’s reasonable to be unhappy with the state of the async ecosystem today. We shipped an MVP in 2019, tokio shipped a 1.0 in 2020, and things have been more stagnant since then than I think anyone involved would like. In a follow-up post, I want to discuss the state of the async ecosystem today, and what I think the project could do to improve users’ experience. But this is already the longest blog post I’ve ever published, so for now I will have to leave it there.