Without boats, dreams dry up

Let futures be futures

In the early-to-mid 2010s, there was a renaissance in languages exploring new ways of doing
concurrency. In the midst of this renaissance, one abstraction for achieving concurrent operations
that was developed was the “future” or “promise” abstraction, which represented a unit of work that
will maybe eventually complete, allowing the programmer to use this to manipulate control flow in
their program. Building on this, syntactic sugar called “async/await” was introduced to take futures
and shape them into the ordinary, linear control flow that is most common. This approach has been
adopted in many mainstream languages, a series of developments that has been controversial among
practitioners. There are two excellent posts from that period which do a very good job of making the case for the
two sides of the argument, one by Eriksen and one by Nystrom. I couldn’t more strongly recommend reading each of these posts in full.

The thesis of Eriksen’s post is that futures provide a fundamentally different model of concurrency
from threads. Threads provide a model in which all operations occur “synchronously,” because the
execution of the program is modeled as a stack of function calls, which block when they need to wait
for concurrently executing operations to complete. In contrast, by representing concurrent
operations as asynchronously completing “futures,” the futures model enabled several advantages
cited by Eriksen. The ones I find particularly compelling include the ability to use the flatMap operator to chain many concurrent network requests after one initial network request.
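As a rough illustration of that style of composition in today’s Rust (fetch_index and fetch_item are made-up helpers, not a real API), the then combinator plays the role of flatMap here, chaining many concurrent follow-up requests onto the result of an initial one:

```rust
use std::future::Future;

use futures::future::{join_all, FutureExt};

// Illustrative stand-ins for real network calls.
async fn fetch_index() -> Vec<String> {
    Vec::new()
}

async fn fetch_item(url: String) -> String {
    url
}

// One initial request, then many concurrent follow-up requests chained
// off its result; `then` plays the role of flatMap for futures.
fn fetch_everything() -> impl Future<Output = Vec<String>> {
    fetch_index().then(|urls| join_all(urls.into_iter().map(fetch_item)))
}
```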
Nystrom takes the counter-position. He starts by imagining a language in which all functions are “colored,” either BLUE or RED. In his imaginary language, the important difference between the two colors of function is that RED functions can only be called from other RED functions. He posits this distinction as a great frustration for users of the language, because having to track two different kinds of functions is annoying and in his language RED functions must be called using an annoyingly baroque syntax. Of course, what he’s referring to is the difference between synchronous functions and asynchronous functions. Exactly what Eriksen cites as an advantage of futures - that functions returning futures are different from functions that don’t return futures - is for Nystrom its greatest weakness.

Some of the remarks Nystrom makes are not relevant to async Rust. For example, he says that if you
call a function of one color as if it were a function of the other, dreadful things could happen:

When calling a function, you need to use the call that corresponds to its color. If you get it
wrong … it does something bad. Dredge up some long-forgotten nightmare from your childhood like
a clown with snakes for arms hiding under your bed. That jumps out of your monitor and sucks out
your vitreous humour.

This is plausibly true of JavaScript, an untyped language with famously ridiculous semantics, but in a statically typed language like Rust, you’ll get a compiler error which you can fix and move on.
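To make the contrast concrete, here is a minimal sketch (fetch_number is illustrative): in Rust, misusing an async function is a type error the compiler reports, not a silent runtime surprise.

```rust
use futures::executor::block_on;

// An illustrative async function; calling it does not run it, it just
// returns a future.
async fn fetch_number() -> u32 {
    42
}

fn main() {
    // This line would not compile: `fetch_number()` is a future, not a
    // `u32`, and the compiler says exactly that.
    // let n: u32 = fetch_number();

    // From a non-async context you have to run the future on an executor
    // (here, the simple one shipped with the futures crate).
    let n: u32 = block_on(fetch_number());
    println!("{n}");
}
```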
One of his main points is also that calling a RED function is much more “painful” than calling a BLUE function. As Nystrom later elaborates in his post, he is referring to the callback-based API commonly used in JavaScript in 2015, and he says that async/await syntax resolves this problem:

[Async/await] lets you make asynchronous calls just as easily as you can synchronous ones, with the tiny addition of a cute little keyword. You can nest await calls in expressions, use them in exception handling code, stuff them inside control flow.

Of course, he also says this, which is the crux of the argument about the “function coloring
problem”:

But… you still have divided the world in two. Those async functions are easier to write, but
they’re still async functions. You’ve still got two colors. Async-await solves annoying rule #4: they make red functions not much
worse to call than blue ones. But all of the other rules are still there.

Futures represent asynchronous operations differently from synchronous operations. For Eriksen, this
provides additional affordances which are the key advantage of futures. For Nystrom, this is just
another hurdle to calling functions which return futures instead of blocking. As you might expect if you’re familiar with this blog, I fall pretty firmly on the side of Eriksen.
So it has not been easy on me to find that Nystrom’s views have been much more popular with the sort
of people who comment on Hacker News or write angry, over-confident rants on the internet. A few
months ago I wrote a post exploring the history of how Rust came to have the
futures abstraction and async/await syntax on top of that, as well as a follow-up
post describing the features I would like to see added to async Rust to make it
easier to use.

Now I would like to take a step back and re-examine the design of async Rust in the context of this
question about the utility of the futures model of concurrency. What has the use of futures
actually gotten us in async Rust? I would like us to imagine that there could be a world in which
the difficulties of using futures have been mitigated or resolved & the additional affordances they
provide make async Rust not only just as easy to use as non-async Rust, but actually a better
experience overall.

poll_progress

Last week, Tyler Mandry published an interesting post about a problem that the Rust
project calls “Barbara battles buffered streams.” Tyler does a good job explaining the issue, but
briefly the problem is that the buffering adapters from the futures library (Buffered and BufferUnordered) do not interact well with for await if the processing in the body is asynchronous (i.e. if it contains any await expressions).
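A minimal sketch of the shape of the problem (fetch and process are illustrative stand-ins): the buffered stream is only polled when the loop asks for its next item, so while the loop body is awaiting, the buffered work makes no progress.

```rust
use futures::stream::{self, StreamExt};

// Illustrative stand-ins for real work.
async fn fetch(_url: &str) -> String {
    String::new()
}

async fn process(_body: String) {}

async fn run(urls: Vec<String>) {
    // Keep up to four `fetch` futures in flight at once...
    let mut pages = stream::iter(urls)
        .map(|url| async move { fetch(&url).await })
        .buffered(4);

    // ...but the stream is only polled when the loop asks for the next
    // item. While the body below is awaiting `process`, none of the
    // buffered fetches are being driven forward.
    while let Some(body) = pages.next().await {
        process(body).await;
    }
}
```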
I think we can better understand the problem if we examine it visually. First, let’s consider the control flow that occurs when a user processes a normal, non-asynchronous Iterator using a for loop:

         ┌── SOME ────────────────┐
 ╔═══════╤═══════╗        ╔═══════▼═══════╗
 ║               ║▐▌      ║               ║▐▌
──────▶   NEXT   ║▐▌      ║   LOOP BODY   ║▐▌
 ║               ║▐▌      ║               ║▐▌
 ╚════════════▲══╝▐▌      ╚═══════════════╝▐▌
  ▀▀│▀▀▀▀▀▀▀▀▀│▀▀▀▀▘       ▀▀▀▀▀▀▀│▀▀▀▀▀▀▀▀▀▘
    │         └───────────────────┘
    └── NONE ──────────────────────────────▶

The for loop first calls the iterator’s next method, and then passes the resulting item (if there is one) to the loop body. When there are no more items, it exits the loop.
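Written out by hand, that control flow is roughly the following (a sketch, not the compiler’s exact desugaring of for):

```rust
fn sum(items: Vec<u32>) -> u32 {
    let mut total = 0;
    let mut iter = items.into_iter();
    loop {
        match iter.next() {
            Some(item) => total += item, // SOME: run the loop body
            None => break,               // NONE: exit the loop
        }
    }
    total
}
```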
Three problems of pinning

When we developed the Pin API, our vision was that “ordinary users” - that is, users using the “high-level” registers of Rust, would never have to interact with it. We intended
that only users implementing Futures by hand, in the “low-level” register, would have to deal with
that additional complexity. And the benefit that would accrue to all users is that futures, being
immovable while polling, could store self-references in their state.

Things haven’t gone perfectly according to plan. The benefits of Pin have certainly been accrued - everyone is writing self-referential async functions all the time, and low-level concurrency primitives in all the major runtimes take advantage of Pin to implement intrusive linked lists internally. But Pin still sometimes rears its ugly head into “high-level” code, and users are unsurprisingly frustrated and confused when that happens.

In my experience, there are three main ways that this happens. Two of them can be solved by better affordances for AsyncIterator (a part of why I have been pushing stabilizing this so hard!). The third is ultimately because of a mistake that we made when we designed Pin, and without a breaking change there’s nothing we can do about it. They are: a Future in a loop, Stream::next, and a Future behind a pointer (e.g. a boxed future).
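As a sketch of the first of these (tick is an illustrative stand-in for a timer): awaiting the same future repeatedly across iterations of a loop means going through &mut, and &mut F only implements Future when F: Unpin, so the future has to be pinned first.

```rust
use std::future::Future;
use std::pin::pin;

use futures::future::{self, Either};

// An illustrative stand-in for a timer or progress check.
async fn tick() {}

async fn with_progress(work: impl Future<Output = ()>) {
    // Without this line, `&mut work` below would not implement `Future`
    // and the loop would not compile: this is Pin surfacing in otherwise
    // "high-level" code.
    let mut work = pin!(work);

    loop {
        let next_tick = pin!(tick());
        match future::select(&mut work, next_tick).await {
            Either::Left(_) => break, // `work` finished
            Either::Right(_) => {}    // tick fired; keep waiting
        }
    }
}
```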
Coroutines, asynchronous and iterative

I wanted to follow up my previous post with a small note elaborating on the use of coroutines for asynchrony and iteration from a more abstract perspective. I realized the point I
made about AsyncIterator being the product of Iterator and Future makes a bit more sense if you also consider the “base case” - a block of code that is neither asynchronous nor iterative. It’s also an excuse to draw another fun ASCII diagram, and I’ve got to put that Berkeley Mono
license to good use.
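One way to see that framing, as a simplified sketch rather than the real standard library definitions: the base case produces one value synchronously, and the other three interfaces vary it along the iterative and asynchronous axes.

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

// Neither iterative nor asynchronous: one output, produced synchronously.
fn base_case() -> u32 {
    42
}

// Iterative: many outputs, each produced synchronously.
trait SketchIterator {
    type Item;
    fn next(&mut self) -> Option<Self::Item>;
}

// Asynchronous: one output, produced once polling reports it ready.
trait SketchFuture {
    type Output;
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;
}

// Both at once, the product of the two: many outputs, each asynchronous.
trait SketchAsyncIterator {
    type Item;
    fn poll_next(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
    ) -> Poll<Option<Self::Item>>;
}
```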
poll_next

In my previous post, I said that the single best thing the Rust project could do for users is stabilize AsyncIterator. I specifically meant the interface that already exists in the standard library, which uses a method called poll_next. Ideally this would have happened years ago, but the second best time would be tomorrow.

The main thing holding up the AsyncIterator stabilization is a commitment by some influential contributors of the project to pursue an alternative design. This design, which I’ll call the “async next” design, proposes to use an async method for the interface instead of the poll method of the “poll next” design implemented today. In my opinion, continuing to pursue this design is a mistake. I’ve written about this before, but I don’t have the sense my post was fully received by the Rust project.
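Roughly, and with illustrative trait names rather than the real definitions, the two shapes under discussion look like this:

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

// The "poll next" design: a low-level poll method in the style of
// Future::poll, which is what the unstable standard library trait uses.
trait PollNextDesign {
    type Item;
    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>)
        -> Poll<Option<Self::Item>>;
}

// The "async next" design: an async method awaited once per item.
trait AsyncNextDesign {
    type Item;
    async fn next(&mut self) -> Option<Self::Item>;
}
```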
Yosh Wuyts, a leading contributor to the async working group, has written his own post about why the async next design is preferable to poll next. A lot of this is structured as an attempted
refutation of points made by me and others about problems with the async next design. I do not find
the argument in this post compelling, and my position about what the project should do is unchanged.
I’ve written this to attempt to express again, in more detail and more definitively, why I believe
the project should accept the poll next design and stabilize AsyncIterator now.

A four year plan for async Rust

Four years ago today, the Rust async/await feature was released in version 1.39.0. The announcement
post says that “this work has been a long time in development – the key
ideas for zero-cost futures, for example, were first proposed by Aaron Turon and Alex Crichton in
2016”. It’s now been longer since the release of async/await than the time between the first design
work on futures and the release of async/await syntax. Despite this, and despite the fact that
async/await syntax was explicitly shipped as a “minimum viable product,” the Rust project has
shipped almost no extensions to async/await in the four years since the MVP was released.

This fact has been noticed, and I contend it is the primary controllable reason that async Rust has
developed a negative reputation (other reasons, like its essential complexity, are
not in the project’s control). It’s encouraging to see project leaders like Niko Matsakis
recognize the problem as well. I want to outline the features that I think async Rust needs
to continue to improve its user experience. I’ve organized these features into features that I think
the project could ship in the short term (say, in the next 18 months), to those that will take
longer (up to three years), and finally a section on a potential change to the language that I think
would take years to plan and prepare for.

Why async Rust?

Async/await syntax in Rust was initially released to much fanfare and excitement. To quote Hacker
News at the time:

This is going to open the flood gates. I am sure lot of people were just waiting for this moment
for Rust adoption. I for one was definitely in this boat. Also, this has all the goodness: open-source, high quality engineering, design in open, large
contributors to a complex piece of software. Truly inspiring!

Recently, the reception has been a bit more mixed. To quote a comment on Hacker News again,
discussing a recent blog post on the subject:

I genuinely can’t understand how anybody could look at the mess that’s Rust’s async and think that
it was a good design for a language that already had the reputation of being very complicated to
write. I tried to get it, I really did, but my god what a massive mess that is. And it contaminates
everything it touches, too. I really love Rust and I do most of my coding in it these days, but
every time I encounter async-heavy Rust code my jaw clenches and my vision blurs.

Of course, neither of these comments is completely representative: even four years ago, some people
had pointed concerns. And in the same thread as this comment about jaws clenching and vision
blurring, there were many people defending async Rust with equal fervor. But I don’t think I would
be out of pocket to say that the nay-sayers have grown more numerous and their tone more strident as
time has gone on. To some extent this is just the natural progression of the hype cycle, but I also
think as we have become more distant from the original design process, some of the context has been
lost. Between 2017 and 2019, I drove the design of async/await syntax, in collaboration with others and
building on the work of those who came before me. Forgive me if I am put a bit off when someone says
that they don’t know how anyone could look at that “mess” and “think that it was a good design,” and
please indulge me in this imperfectly organized and overly long explanation of how async Rust came
to exist, what its purpose was, and why, in my opinion, for Rust there was no viable alternative. I
hope that along the way I might shed more light on the design of Rust in a broader and deeper sense,
at least slightly, and not merely regurgitate the justifications of the past.

Thread-per-core

I want to address a controversy that has gripped the Rust community for the past year or so: the
choice by the prominent async “runtimes” to default to multi-threaded executors that perform
work-stealing to balance work dynamically among their many tasks. Some Rust users are
unhappy with this decision, so unhappy that they use language I would characterize as
melodramatic:

The Original Sin of Rust async programming is making it multi-threaded by default. If premature
optimization is the root of all evil, this is the mother of all premature optimizations, and it
curses all your code with the unholy Send + 'static, or worse yet Send + Sync + 'static, which just kills all the joy of actually writing Rust.

It’s always off-putting to me that claims written this way can be taken seriously as a technical
criticism, but our industry is rather unserious.
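For context on where those bounds come from, here is a sketch using Tokio as the example of a work-stealing runtime: spawning onto a multi-threaded executor requires the future, and anything it holds across an .await, to be Send + 'static.

```rust
use std::rc::Rc;

async fn yield_point() {}

// This future holds an `Rc` (which is not `Send`) across an await point,
// so the future itself is not `Send`.
async fn handler() {
    let counter = Rc::new(0u32);
    yield_point().await;
    println!("{counter}");
}

fn main() {
    let runtime = tokio::runtime::Runtime::new().unwrap();

    // A work-stealing runtime may move the task between threads, so
    // `Runtime::spawn` (like `tokio::spawn`) requires `Send + 'static`.
    // This line therefore would not compile:
    // runtime.spawn(handler());

    // Driving the future on the current thread is fine; nothing has to
    // cross a thread boundary.
    runtime.block_on(handler());
}
```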
Generic trait methods and new auto traits

I want to wrap up my consideration of the idea of adding new auto traits to Rust with some notes from a conversation I had with Ariel Ben-Yehuda. You can read the two previous posts, “Changing the rules of Rust” and its follow-up, for context.

Follow up to “Changing the rules of Rust”

In my previous post, I described the idea of using an edition mechanism to introduce a new
auto trait. I wrote that the compiler would need to create an “unbreakable firewall” to prevent
using !Leak types from the new edition with code from the old edition that assumes values of all types can be leaked.

The response has been pretty optimistic that ensuring this would be possible, even though I wrote in
the post myself that I “despair” over how difficult it was. I’ve received a great example from Ariel
Ben-Yehuda which demonstrates how this problem is more difficult to solve than you would probably
think.
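To make the shape of the problem concrete (Leak is hypothetical; nothing like it exists in Rust today), this is the kind of old-edition assumption the firewall has to guard:

```rust
// Generic code written under today's rules may assume any value can be
// leaked (forgotten without running its destructor):
fn stash_forever<T>(value: T) {
    std::mem::forget(value);
}

// If a hypothetical new edition allowed a type to be `!Leak`, letting a
// value of such a type reach old-edition generics like `stash_forever`
// would silently break that type's guarantee. Building the "unbreakable
// firewall" means ruling this out across every such boundary.
```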