Entries tagged - "futures"

FuturesUnordered and the order of futures


In my previous post, I wrote about the distinction between “multi-task” and “intra-task” concurrency in async Rust. I want to open this post by considering a common pattern that users encounter, and how they might implement a solution using each technique.

Let’s call this “sub-tasking.” You have a unit of work that you need to perform, and you want to divide that unit into many smaller units of work, each of which can be run concurrently. This is intentionally extremely abstract: basically every program of any significance contains an instance of this pattern at least once (often many times), and the best solution will depend on the kind of work being done, how much work there is, the arity of concurrency, and so on.

  • Using multi-task concurrency, each smaller unit of work would be its own task. The user would spawn each of these tasks onto an executor. The results of the tasks would be collected with a synchronization primitive like a channel, or the tasks would be awaited together with a JoinSet.

  • Using intra-task concurrency, each smaller unit would be a future run concurrently within the same task. The user would construct all of the futures and then use a concurrency primitive like join! or select! to combine them into a single future, depending on the exact access pattern. (Both approaches are sketched below.)
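To make the contrast concrete, here is a minimal sketch of both approaches. It assumes tokio for spawning and the futures crate’s join! macro; the function names and the “work” (taking the length of a string) are just stand-ins, not anything from a real codebase:

```rust
use futures::join;
use tokio::task::JoinSet;

// Multi-task: each unit of work becomes its own 'static task on the executor.
async fn multi_task(items: Vec<String>) -> Vec<usize> {
    let mut set = JoinSet::new();
    for item in items {
        // `item` must be moved in; the task cannot borrow from this function.
        set.spawn(async move { item.len() });
    }
    let mut results = Vec::new();
    while let Some(res) = set.join_next().await {
        results.push(res.expect("task panicked"));
    }
    results
}

// Intra-task: a fixed number of futures joined within a single task,
// freely borrowing from the enclosing scope.
async fn intra_task(a: &str, b: &str) -> (usize, usize) {
    join!(async { a.len() }, async { b.len() })
}
```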

Each of these approaches has its advantages and disadvantages. Spawning multiple tasks requires that each task be 'static, which means they cannot borrow data from their parent task. This is often a very annoying limitation, not only because it might be costly to use shared ownership (meaning Arc and possibly Mutex), but also because even when shared ownership isn’t otherwise a problem in this context, Rust’s affine semantics still require you to call clone on each of those handles before moving them into the spawned task. (I’d love to see this change! Cheap shared ownership constructs like Arc and Rc should have non-affine semantics so you don’t have to call clone on them.)

When you join multiple futures, they can borrow state from the surrounding task, but as I wrote in the previous post, you can only join a static number of futures this way. Users who don’t want to deal with shared ownership but have a dynamic number of sub-tasks to execute are left searching for another solution. Enter FuturesUnordered.

FuturesUnordered is an odd duck of an abstraction from the futures library, which represents a collection of futures as a Stream (in std parlance, an AsyncIterator). This makes it a lot like tokio’s JoinSet in surface appearance, but unlike JoinSet the futures you push into it are not spawned separately onto the executor; they are polled as the FuturesUnordered itself is polled. Much like spawning a task, every future pushed into a FuturesUnordered is separately allocated, so representationally it’s very similar to multi-task concurrency. But because the FuturesUnordered is what polls each of these futures, they don’t execute independently and they don’t need to be 'static. They can borrow surrounding state as long as the FuturesUnordered doesn’t outlive that state.
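A minimal sketch of what that looks like in practice, assuming the futures crate; the number of futures is determined at runtime, and each one borrows from the slice without Arc or a 'static bound:

```rust
use futures::stream::{FuturesUnordered, StreamExt};

// A dynamic number of futures, all borrowing `items`, driven concurrently
// within a single task. `items` only has to outlive the FuturesUnordered.
async fn total_len(items: &[String]) -> usize {
    let mut pending: FuturesUnordered<_> = items
        .iter()
        .map(|item| async move { item.len() }) // each future borrows from `items`
        .collect();

    let mut total = 0;
    // Results arrive in completion order, not the order the futures were pushed.
    while let Some(len) = pending.next().await {
        total += len;
    }
    total
}
```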

In a sense, FuturesUnordered is a sort of hybrid between intra-task concurrency and multi-task concurrency: you can borrow state from the same task like intra-task, but you can execute arbitrarily many concurrent futures like multi-task. So it seems like a natural fit for the use case I was just describing when the user wants that exact combination of features. But FuturesUnordered has also been a culprit in some of the more frustrating bugs that users encounter when writing async Rust. In the rest of this post, I want to investigate the reasons why that is.

Let futures be futures


In the early-to-mid 2010s, there was a renaissance in languages exploring new ways of doing concurrency. One abstraction developed in the midst of this renaissance was the “future” or “promise,” which represented a unit of work that will maybe eventually complete, allowing the programmer to use it to manipulate control flow in their program. Building on this, syntactic sugar called “async/await” was introduced to take futures and shape them into the ordinary, linear control flow that is most common. This approach has been adopted in many mainstream languages, a series of developments that has been controversial among practitioners.

There are two excellent posts from that period which do a very good job of making the case for the two sides of the argument: one by Marius Eriksen and one by Bob Nystrom. I couldn’t recommend reading each of these posts in full more strongly.

The thesis of Eriksen’s post is that futures provide a fundamentally different model of concurrency from threads. Threads provide a model in which all operations occur “synchronously,” because the execution of the program is modeled as a stack of function calls, which block when they need to wait for concurrently executing operations to complete. In contrast, by representing concurrent operations as asynchronously completing “futures,” the futures model enabled several advantages cited by Eriksen. These are the ones I find particularly compelling:

  1. A function performing asynchronous operations has a different type from a “pure” function, because it must return a future instead of just a value. This distinction is useful because it lets you know if a function is performing IO or just pure computation, with far-reaching implications.
  2. Because they create a direct representation of the unit of work to be performed, futures can be composed in multiple ways, both sequentially and concurrently. Blocking function calls can only be composed sequentially without starting a new thread.
  3. Because futures can be composed concurrently, concurrent code can be written which more directly expresses the logic of what is occurring. Abstractions can be written which represent particular patterns of concurrency, allowing business logic to be lifted from the machinery of scheduling work across threads. Eriksen gives examples like a flatMap operator to chain many concurrent network requests after one initial network request; a rough Rust analogue is sketched after this list.
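As a hedged illustration of points 2 and 3 in Rust terms (fetch_index and fetch_page are hypothetical stand-ins for network calls, not any real API):

```rust
use futures::future;

// Hypothetical stand-ins for network requests; their return types mark them as async.
async fn fetch_index() -> Vec<String> { vec![] }
async fn fetch_page(url: &str) -> usize { url.len() }

async fn crawl() -> Vec<usize> {
    // Sequential composition: the follow-up work starts only after this completes.
    let urls = fetch_index().await;

    // Concurrent composition: the follow-up requests are values we can collect
    // and await together, rather than threads we have to schedule by hand.
    let pages = urls.iter().map(|url| fetch_page(url));
    future::join_all(pages).await
}
```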

Nystrom takes the counter-position. He starts by imagining a language in which all functions are “colored,” either BLUE or RED. In his imaginary language, the important difference between the two colors of function is that RED functions can only be called from other RED functions. He posits this distinction as a great frustration for users of the language, because having to track two different kinds of functions is annoying, and in his language RED functions must be called using an annoyingly baroque syntax. Of course, what he’s referring to is the difference between synchronous functions and asynchronous functions. Exactly what Eriksen cites as an advantage of futures - that functions returning futures are different from functions that don’t return futures - is for Nystrom its greatest weakness.

Some of the remarks Nystrom makes are not relevant to async Rust. For example, he says that if you call a function of one color as if it were a function of the other, dreadful things could happen:

When calling a function, you need to use the call that corresponds to its color. If you get it wrong … it does something bad. Dredge up some long-forgotten nightmare from your childhood like a clown with snakes for arms hiding under your bed. That jumps out of your monitor and sucks out your vitreous humour.

This is plausibly true of JavaScript, an untyped language with famously ridiculous semantics, but in a statically typed language like Rust, you’ll get a compiler error which you can fix and move on.

One of his main points is also that calling a RED function is much more “painful” than calling a BLUE function. As Nystrom later elaborates in his post, he is referring to the callback-based API commonly used in JavaScript in 2015, and he says that async/await syntax resolves this problem:

[Async/await] lets you make asynchronous calls just as easily as you can synchronous ones, with the tiny addition of a cute little keyword. You can nest await calls in expressions, use them in exception handling code, stuff them inside control flow.

Of course, he also says this, which is the crux of the argument about the “function coloring problem”:

But… you still have divided the world in two. Those async functions are easier to write, but they’re still async functions.

You’ve still got two colors. Async-await solves annoying rule #4: they make red functions not much worse to call than blue ones. But all of the other rules are still there.

Futures represent asynchronous operations differently from synchronous operations. For Eriksen, this provides additional affordances which are the key advantage of futures. For Nystrom, this is just another hurdle to calling functions which return futures instead of blocking.

As you might expect if you’re familiar with this blog, I fall pretty firmly on the side of Eriksen. So it has not been easy on me to find that Nystrom’s views have been much more popular with the sort of people who comment on Hacker News or write angry, over-confident rants on the internet. A few months ago I wrote a post exploring the history of how Rust came to have the futures abstraction and async/await syntax on top of that, as well as a follow-up post describing the features I would like to see added to async Rust to make it easier to use.

Now I would like to take a step back and re-examine the design of async Rust in the context of this question about the utility of the futures model of concurrency. What has the use of futures actually gotten us in async Rust? I would like us to imagine a world in which the difficulties of using futures have been mitigated or resolved, and the additional affordances they provide make async Rust not only just as easy to use as non-async Rust, but actually a better experience overall.

Coroutines, asynchronous and iterative


I wanted to follow up my previous post with a small note elaborating on the use of coroutines for asynchrony and iteration from a more abstract perspective. I realized the point I made about AsyncIterator being the product of Iterator and Future makes a bit more sense if you also consider the “base case” - a block of code that is neither asynchronous nor iterative.
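As a rough sketch of that framing (the trait definitions below are simplified restatements; AsyncIterator is written as in the nightly proposal):

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

// The base case: neither asynchronous nor iterative. Runs once, returns once.
fn base() -> u32 {
    42
}

// Iterative only: may yield any number of values, each produced synchronously.
trait Iterator {
    type Item;
    fn next(&mut self) -> Option<Self::Item>;
}

// Asynchronous only: yields a single value, but may return Pending until ready.
trait Future {
    type Output;
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;
}

// The product of the two: any number of values, each of which may be pending.
trait AsyncIterator {
    type Item;
    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>>;
}
```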

It’s also an excuse to draw another fun ASCII diagram, and I’ve got to put that Berkeley Mono license to good use.

A four year plan for async Rust


Four years ago today, the Rust async/await feature was released in version 1.39.0. The announcement post says that “this work has been a long time in development – the key ideas for zero-cost futures, for example, were first proposed by Aaron Turon and Alex Crichton in 2016”. It’s now been longer since the release of async/await than the time between the first design work on futures and the release of async/await syntax. Despite this, and despite the fact that async/await syntax was explicitly shipped as a “minimum viable product,” the Rust project has shipped almost no extensions to async/await in the four years since the MVP was released.

This fact has been noticed, and I contend it is the primary controllable reason that async Rust has developed a negative reputation (other reasons, like its essential complexity, are not in the project’s control). It’s encouraging to see project leaders like Niko Matsakis recognize the problem as well. I want to outline the features that I think async Rust needs to continue to improve its user experience. I’ve organized them into features that I think the project could ship in the short term (say, in the next 18 months), features that will take longer (up to three years), and finally a potential change to the language that I think would take years to plan and prepare for.

Why async Rust?


Async/await syntax in Rust was initially released to much fanfare and excitement. To quote Hacker News at the time:

This is going to open the flood gates. I am sure lot of people were just waiting for this moment for Rust adoption. I for one was definitely in this boat.

Also, this has all the goodness: open-source, high quality engineering, design in open, large contributors to a complex piece of software. Truly inspiring!

Recently, the reception has been a bit more mixed. To quote a comment on Hacker News again, discussing a recent blog post on the subject:

I genuinely can’t understand how anybody could look at the mess that’s Rust’s async and think that it was a good design for a language that already had the reputation of being very complicated to write.

I tried to get it, I really did, but my god what a massive mess that is. And it contaminates everything it touches, too. I really love Rust and I do most of my coding in it these days, but every time I encounter async-heavy Rust code my jaw clenches and my vision blurs.

Of course, neither of these comments are completely representative: even four years ago, some people had pointed concerns. And in the same thread as this comment about jaws clenching and vision blurring, there were many people defending async Rust with equal fervor. But I don’t think I would be out of pocket to say that the nay-sayers have grown more numerous and their tone more strident as time has gone on. To some extent this is just the natural progression of the hype cycle, but I also think as we have become more distant from the original design process, some of the context has been lost.

Between 2017 and 2019, I drove the design of async/await syntax, in collaboration with others and building on the work of those who came before me. Forgive me if I am a bit put off when someone says that they don’t know how anyone could look at that “mess” and “think that it was a good design,” and please indulge me in this imperfectly organized and overly long explanation of how async Rust came to exist, what its purpose was, and why, in my opinion, for Rust there was no viable alternative. I hope that along the way I might shed more light on the design of Rust in a broader and deeper sense, at least slightly, and not merely regurgitate the justifications of the past.

Futures and Segmented Stacks


This is just a note on getting the best performance out of an async program.

The point of using async IO over blocking IO is that it gives the user program more control over handling IO, on the premise that the user program can use resources more effectively than the kernel can. In part, this is because of the inherent cost of context switching between the userspace and the kernel, but in part it is also because the user program can be written with more specific understanding of its exact requirements.

Global Executors


One of the big sources of difficulty in the async ecosystem is spawning tasks. Because there is no API in std for spawning tasks, library authors who want their library to spawn tasks have to depend on one of the multiple executors in the ecosystem to spawn a task, coupling the library to that executor in undesirable ways. Ideally, many of these library authors would not need to spawn tasks at all.…

Asynchronous Destructors


The first version of async/await syntax is in the beta release, set to be shipped to stable in 1.39 on November 7, next month. There are a wide variety of additional features we could add to async/await in Rust beyond what we’re shipping in that release, but speaking for myself, I know that I’d like to pump the brakes on pushing forward big ticket items in this space. Let’s let the ecosystem develop around what we have now before we start sprinting toward more big additions to the language.…

Update on await syntax


In my previous post I said that the lang team would be making our final decision about the syntax of the await operator in the May 23 meeting. That was last Thursday, and we did reach a decision. In brief, we decided to go forward with the preliminary proposal I outlined earlier: a postfix dot syntax, future.await. For more background, in addition to the previous post on my blog, you can read this write-up about some of the trade-offs from April.…

A final proposal for await syntax


This is an announcement regarding the resolution of the syntax for the await operator in Rust. This is one of the last major unresolved questions blocking the stabilization of the async/await feature, a feature which will enable many more people to write non-blocking network services in Rust. This post contains information about the timeline for the final decision, a proposal from the language team which is the most likely syntax to be adopted, and the justification for this decision.…