Patterns & Abstractions

This is the second post in an informal series commenting on the design of async Rust in 2023. In my previous post, after a discussion of the “registers” in which control-flow effects can be handled in Rust, I promised to turn my attention to recent proposals around a concept called “keyword generics.” For reference, there are two posts by the current design team of Rust that serve as my reference point for this commentary.

I’m not going to reiterate these blog posts at length, but in brief “keyword generics” is a proposal to introduce a new kind of abstraction to Rust, to allow types to be abstracted over certain effects. The examples have focused on async and const as the effects, but there is sometimes discussion of try as well. Astute readers will notice this is an overlapping but not identical set of effects to the effects I identified in my last post; I did not mention const as an effect, and as far as I know the keyword generics working group has not devoted much or any time to considering iteration as an effect.

It’s also important to note that two different versions of this proposal are present in these posts: in one, users can abstract over the presence or absence of a specific effect (e.g. ?async); in the more advanced form, users can abstract over the presence or absence of arbitrary effects (i.e. ?effect).

From my perspective, the reaction to these proposals has been at best mixed, with several prominent former members of the Rust project (including myself) publicly expressing doubt that this is a good direction forward for Rust. The critique has largely been leveled along two axes.

  • First, there is an obvious increase in cognitive burden as many library APIs become annotated with a flurry of optional keywords they do or don’t support. With trait transformers, this annotation becomes even more varied, common, and multiple. Graydon Hoare excellently articulated this objection a few weeks ago, and it was similarly seconded by Patrick Walton on Twitter.
  • Second, the motivation seems weak, especially for async. There is an emphasis in these examples on the subset of code which is strictly linear except that it yields control while waiting for some concurrent event to occur. However, this doesn’t reflect a lot of the async code people write, which often involves concurrently multiplexing multiple operations within a single future, something which is not possible with await alone.

I agree with these objections, but I have a higher level objection as well: I believe this is a mistaken attempt to turn a pattern into an abstraction, and I think the net of which things to abstract under the same concept has been miscast. Before I can get into that, though, I want to take a moment to make an addendum to my previous post, which I hope will lead us down a productive avenue.

The so-called “consuming register”

Further reflection and discussion (especially comments from Gábor Lehel and Raph Levien) helped me realize that what I called the “consuming” register is not a register at all, but actually an orthogonal division within each control-flow effect. Each control flow effect has something like “parts of speech,” necessary points of contact with the effect which enable the compiler transformation. Each of these parts of speech can be performed in different registers:

  • Creating the context: The context for this control flow is created; the expression introduces the reified type for the first time.
  • Generating the effect: The operation that is only allowed within the context of this effect is performed (e.g. throwing an error in a fallible context).
  • Forwarding the effect: The effect generated by a subcomponent is yielded from this context into a further outer context.
  • Completing the effect: The effect is finally consumed and completed, exiting the context of the reified type.

The so-called “consuming register” was really just completing the effect, which could be performed in the control-flow register (e.g. with a for loop) or in the combinatoric register (e.g. with fold). Given this reorganization, it’s better to think of there as 3 different registers in which to manage control-flow effects in Rust:

  • The low-level register, in which things are done “by hand” with the most primitive constructs.
  • The functional register, in which things are done with the library-based functional/combinatoric abstractions.
  • The imperative register, in which things are done with the language-provided syntactic sugar control flow constructs.

The problem generators solve is that the imperative register for iteration is bound up with eagerly evaluating the iterator (i.e. using a for loop), forcing users into the functional register when they don’t want to eagerly evaluate the iterator. And while the functional register can get you very far, it has several limitations in Rust. First, as we’ve already discussed, you can’t escape from the closure used in a combinator (keyword generics is intended to lift this limitation in some specific instances). Second, as I alluded to briefly but didn’t elaborate on, you can’t mutate state across a sequence of combinators, even though those state mutations will not actually alias, because their sequential operation does not inhere in the signatures of the types. This is the big reason that combinators were so limited for async code.

There’s certainly an argument to be made that these limitations are unfortunate, and if there were a design that avoided them at no additional cost it would probably be hard to argue against it. But if the imperative register were fully fleshed out by the addition of the necessary control flow constructs, the problem could be side-stepped by encouraging the imperative style for code which early returns or mutates state - two things which are hallmarks of the imperative style. Is it such an outlandish suggestion that we render unto the imperative that which is imperative, and unto the functional that which is functional? (This little biblical reference has layers. “Imperative” programming comes from the same root as “Imperator,” a title of Caesar, and the “Monad” was an ancient term for a certain conception of the divine. In ways I won’t unpack here further, I don’t think this is strictly a neat coincidence; the differences between stateful and stateless styles of programming have deeper epistemological roots.)

Back to the point: it seems like the most important and compelling motivation for abstracting over async and try with keyword generics has been the problem of Iterator combinators. This problem is too narrowly stated though: the real problem is combining the iterative effect with fallibility and asynchrony, given that right now the only way to “stay in” the iterative effect is to use combinators. My claim is that we can fully outflank this problem by implementing generators, and not using combinators for code with complex control flow paths.
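
To sketch what that might look like - gen blocks and yield are not stable Rust, so the syntax here is purely illustrative - a generator can stay in the iterative effect imperatively while composing naturally with fallibility, with no combinators involved:

```rust
// Illustrative only: generator syntax is not stable Rust.
fn parsed_lines(input: &str)
    -> impl Iterator<Item = Result<i32, std::num::ParseIntError>> + '_
{
    gen move {
        for line in input.lines() {
            // Yield each fallible result from inside ordinary imperative
            // control flow; no `map`/`filter` combinators required.
            yield line.trim().parse::<i32>();
        }
    }
}
```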

Patterns of control-flow effects

Having identified the “parts of speech” in dealing with a control-flow effect, it’s worth investigating how they could each be expressed in the imperative style, if the features were fleshed out. For some of these features, the syntax here is made up (and could be different), but the control flow it implies is hopefully obvious:

             │  ASYNCHRONY      │  ITERATION   │  FALLIBILITY
    ─────────┼──────────────────┼──────────────┼──────────────
             │                  │              │      
     CONTEXT │  async { }       │  gen { }     │  try { }
             │                  │              │      
      EFFECT │                  │  yield       │  throw
             │                  │              │      
     FORWARD │  .await          │  yield from  │  ?
             │                  │              │      
    COMPLETE │  spawn/block_on  │  for         │  match
             │                  │              │      

What emerges here is a very clear pattern, by which an analogy between asynchrony, iteration and fallibility can be drawn. You can see how an async block is like a generator block is like a try block. You can see how await is like ? is like yielding from another iterator. And so on.

But what also emerges is a great deal of irregularity between the different patterns. Even surface level differences like syntactic anomalies tend to arise out of a deeper dissimilarity in the semantics of each of these effects.

For example, fallibility - unlike asynchrony and iteration - returns a specific concrete type, rather than an anonymous type implementing a specific trait. And there are different possible concrete types (e.g. Result or Option). This is because fallibility doesn’t involve the same kind of repetitious control flow pattern that asynchrony and iteration do, and so doesn’t need to be compiled into a state machine for optimization the way that they do. This semantic difference impacts the syntax as well as the manner of composition of this effect and the others.

Similarly, asynchrony weds the state machine generated by the async syntax with an underlying task system that async functions ultimately call into to manage asynchronous events. This task system is made invisible in the syntax of async/await, but it is very much present in the low-level register, and it has a major impact on the surface design as well (for example, it’s why there’s no entry for the “effect” row in asynchrony, and why the completion of the effect is a library function rather than a built-in operator). The necessary integration of this separate system into the abstraction has no analogy in iteration and fallibility.

Furthermore, iteration is differentiated from the other two by virtue of a kind of “inversion.” In both asynchrony and fallibility, the “happy path” is when the effect does not arise and does not need to be forwarded - when the future is ready or when the result is Ok. For iteration, meaningful work is done on the values that are yielded when the effect occurs, and iteration ceases when the effect no longer occurs. This is why yield from would evaluate to () and therefore not have the same motivation for being postfix as await and ?, and also why using a for loop with yields inside of it is common in iteration rather than forwarding directly, making the direct forwarding operator relatively unimportant.

To abstract a pattern, or not

These are only some of the differences that could be identified, but from them alone one is left to ask: what are we to make of this? The pattern here is clearly irregular and variegated across the different cases, not cleanly and appealingly consistent. One effect of this is that it is challenging to create an abstraction across these three similar sets of operators, because an abstraction needs to be regular. To create an abstraction, they would need to be forced into place unnaturally, and what would result would likely be what we usually term a “leaky abstraction.”

There’s a famous quip that patterns are a sign of a weak language, because a more expressive language would have an abstraction for that instead. I think this perspective lacks nuance. Yes, when Java users developed books full of patterns to account for the fact that the language initially lacked generics or higher order functions, it seems fair to say that this was a weakness of the abstraction capabilities of Java that was good to remedy. But I would contend that not all patterns need to be turned into an abstraction.

A big difference between patterns and abstractions is the way they structure the cognitive load on the user. Abstractions “front load” the cognitive load by requiring you to learn about the abstraction and form a mental model of how it works before you can comprehend any application of the abstraction. This is the source of a lot of the apprehension about keyword generics: people see these ?async and ?const annotations all over everything and imagine the reaction a new user will have, when these signatures already express so much. A pattern, on the other hand, is implicit (because it isn’t a “real” part of the language), and can be learned eventually, not up front.

Of course, this means a pattern doesn’t prevent code duplication, and this is a lot of the benefit of an abstraction: it can reduce the cognitive load of having to learn about multiple similar APIs. However, patterns can also ease this cognitive load by making the analogy between two similar but different APIs clearer. For example, users can understand async, gen, and try blocks as analogous constructs, even if they can’t all be abstracted under a single language concept. In languages with too much abstraction and too few patterns, the abstraction can make a user feel rudderless, unsure of how to proceed. Too purely abstract a language can feel like an undifferentiated space of infinite possibility, where everything is possible but nothing is obvious - a messy, disorganized toolbox.

Moreover, the very fact that patterns are not abstractions allows them to vary in their specifics in ways that abstractions cannot. All of the irregularities I highlighted above are well-justified by the specific concern each application of the pattern is addressing, and normalizing them to create an abstraction carries the cost of eliminating the benefits that the irregularities have provided. We chose these irregularities intentionally, and for good reason, and making them regular for the sake of abstraction could make each individual feature worse.

The right pattern?

Sometimes, it is not a good idea to turn a pattern into an abstraction, but sometimes the pattern you’ve identified isn’t even the best pattern. As I mentioned earlier, the set of effects to be abstracted by keyword generics is not identical to the set of control-flow effects I’ve identified here. Iteration has not been incorporated as far as I’m aware, and “const-ness” (or compile-time executability) has.

All of the descriptions of keyword generics I’ve seen have focused on code which simply “forwards” the effect from an underlying type that they manipulate. That is, they call a function, and .await it (or not) and ? it (or not) depending on its effectfulness. This seems rather straightforward because the happy path is the non-effectful path. Here, though, the difference in iteration I noted earlier comes into play: with iteration, the happy path is the path of iteration, and it seems a lot less clear what it would mean to abstract over an iterative or non-iterative function. The differences between them are too dramatic.

On the other hand, I find the analogy between async/try and const to be even more stretched. Const can be conceptualized as an effect in some way, but it bears striking differences from the control-flow effects I have been discussing so far, such that the pattern I’ve identified doesn’t make sense for const.

In one sense, const is opposed to the control-flow effects: it strictly reduces the set of operators that can be permitted in that context (only those which can be executed at compile time), rather than introducing new operators that can be used, as in the case of the control-flow effects. In another sense, it seems completely orthogonal: const has no impact on the control flow of a piece of code - how it runs - instead it determines when it runs.

That is to say, const is not syntactic sugar for a compiler-expansion of your code the way control-flow effects are, which is why it operates so differently. Given that, it’s not a surprise that the annotations for const and async have functioned differently in ways highlighted last year by Yoshua Wuyts. This is not an irregularity in need of normalizing, but a difference that arises from the difference in use.

A lot of the idea of keyword generics seems to have been taken from work on solving a specific problem with const (originally described in RFC 2632), and then applied to async. When applied to const alone, this idea seems to me to make a bit more sense. The problem that arises for const is that there is no way to specify that a trait method can be called in a const context, making it impossible to do things like use for loops in a const context. This motivation strikes me as more compelling than trying to write methods that are “maybe async.”

But I think the grouping of const with the control-flow effects has been mistaken. In the trait transformers post, a new grouping emerges that I think could bear more fruit. Here, an analogy is made between const and the auto traits: just as a trait method could be const or not, a trait method’s state (in the case of async and generators) could also be Send or not. (An analogy is also made in this post to async, but I think this analogy is mistaken for reasons I’ve already outlined here.)

Conclusion

Unfortunately, once again this post is getting quite long. In my next post, I will return to this analogy between const and auto traits, and explore how the problem solved by keyword generics for const could be solved in other ways based on that comparison.

For now, I hope I will leave you with these impressions: patterns are good, even if they can’t be abstracted, and sometimes it is wrong to abstract patterns. In the specific case of control-flow effects, the pattern is not obviously abstractable because of the variation between the different effects, but this allows each effect to be fit for purpose, and they can be composed without abstracting them by the natural composition of imperative control-flow operators, if the imperative register of these effects would just be implemented fully.

(Many thanks to Graydon Hoare for reviewing an earlier draft of this post.)