One of the main emphases of my recent posts has been that I believe shipping generators would solve a lot of user problems by making it easy to write imperative iterative code, and especially to make that iterative code interact well with asynchrony and fallibility. One thing that frustrates me about the situation is that generators have been nearly ready to ship for years now, but very little visible progress has been made. In particular, the core compiler transform that takes a generator and produces a state machine already exists, because it’s exactly how async functions are implemented.

I want to collect all of the decisions that need to be made before generators can ship, and also my opinions about those questions. In 2019, I wrote two posts about generators, and in 2020 I created a library called propane, implementing my idea of how to solve some of the remaining design questions. I also had a chat in early 2021 with a member of the Rust project who intended to take over the work on generators, but I never saw any public work as a result of that conversation. Unfortunately, this post will be a re-hash of my posts from 2019 and 2020 in many ways, because not much has changed.

I want to reiterate a point I’ve made before, which is that the feature I want to see shipped in Rust soon is an imperative syntax for writing functions that compile to iterators. This is what I call “generators.” The more general feature that can be supported by the same compiler transform - state machines that act as “resumable functions” - I prefer to call “stackless coroutines.” The big difference is that stackless coroutines can both return and receive values as they are paused and resumed, which makes them more flexible. But this also means they aren’t strictly suitable for iteration, and they have other remaining design issues to resolve. I don’t think stackless coroutines are a bad feature, but I don’t think they should be given the same priority as the narrower generators-for-iterators feature, and I think the syntax should be different between these features in the same way it’s different for async functions.

One confusing point is that the unstable trait for the more general stackless coroutine mechanism is called Generator; my opinion is that either this trait should be renamed or someone should come up with a name for functions that return iterators that isn’t “generator.” I’ll be using generator to mean functions that return iterators for the rest of this post.

Syntax questions


The current unstable generators feature doesn’t have exactly the syntax that generators should probably have. First of all, it only supports closures as generators; this may make sense for a stackless coroutine feature, but just as there are async functions, users will want to be able to define generator functions as well. The other big problem is that it doesn’t have any explicit declaration syntax: a closure is a generator if it contains a yield statement, and that’s it.

Instead, there should be a syntax, like async, that can be applied to both functions and blocks, and performs the generator transform on them, so they evaluate to iterators instead of evaluating normally. There have been a lot of different ideas for what this syntax could be, but I think the simplest and most straightforward thing would be to pick a keyword that comes before the fn or the block, exactly the way that async works. The most obvious keyword to me is gen, but maybe there are others, and it also depends on what the feature is called.
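
To make this concrete, here is a sketch of such a generator function in hypothetical syntax (`gen` is not a keyword today), along with a hand-written state machine that approximates what the compiler transform might produce. All names here are illustrative, not anything the compiler actually emits.

```rust
// Hypothetical surface syntax (does not compile today):
//
//     gen fn countdown(n: u32) -> u32 {
//         for i in (0..=n).rev() {
//             yield i;
//         }
//     }
//
// A hand-written approximation of the state machine such a function
// might compile down to:
enum Countdown {
    Running(u32),
    Done,
}

impl Iterator for Countdown {
    type Item = u32;

    fn next(&mut self) -> Option<u32> {
        match *self {
            Countdown::Running(0) => {
                // Yield the final item and move to the finished state.
                *self = Countdown::Done;
                Some(0)
            }
            Countdown::Running(n) => {
                // Yield the current value and record where to resume.
                *self = Countdown::Running(n - 1);
                Some(n)
            }
            Countdown::Done => None,
        }
    }
}

fn main() {
    assert_eq!(Countdown::Running(2).collect::<Vec<_>>(), vec![2, 1, 0]);
}
```

The point of the generator feature is exactly that users write the commented version and the compiler writes the enum.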

Unfortunately, no keyword was reserved for generators in the 2021 edition the way that async was reserved in the 2018 edition. This means that generators won’t have a proper keyword until at least the 2024 edition. Some users have suggested it could be stabilized anyway using the “raw keywords” feature; for example, it could stabilize using a syntax like k#gen. Personally, although I think raw keywords are a nice way to use new features on old editions, I would be very unhappy to see this feature ship requiring the raw keyword syntax in all editions, and so I’d rather see generators ship in 2024, alongside the 2024 edition, which can reserve a proper keyword.

yield from

It’s already pretty much decided that the keyword for yielding an item from a generator is the yield keyword, which has been a reserved keyword since before the 2015 edition of Rust. However, one thing that hasn’t received much attention is the notion of a “yield from” operator - an expression that takes another iterator and yields from it until it completes. This is an “effect-forwarding” operator and it’s analogous to ? or await.

My own opinion is that this feature should be punted to the future, and maybe never even implemented. The desugaring is trivial - it’s consuming the iterator with a for loop and then yielding each item. This is an example of how treating control-flow effects as a pattern, rather than an abstraction, is useful: the async and try features would be nothing without their forwarding operator, but the forwarding operator is not very important to generators.
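
To show how trivial the desugaring is, here is a sketch in hypothetical syntax (neither `gen` nor a `yield from` operator exist today):

```rust
// Hypothetical syntax throughout; nothing here compiles today.
//
//     gen fn concat(a: Vec<u32>, b: Vec<u32>) -> u32 {
//         yield from a;   // hypothetical forwarding operator
//         yield from b;
//     }
//
// would desugar to nothing more than:
//
//     gen fn concat(a: Vec<u32>, b: Vec<u32>) -> u32 {
//         for x in a { yield x; }
//         for x in b { yield x; }
//     }
```

Compare this to ? and await, whose desugarings cannot be written by hand without restructuring the whole function.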

Function return type

Like async functions, generators return an anonymous type which implements a trait (Future, Iterator, or AsyncIterator). For async functions, the decision was made to make the syntax not include this anonymous type (called the “outer” return type) and instead include only the type yielded by that state machine (called the “inner” return type). In my opinion, this was completely the right decision, but since some contributors have continued to express doubts, I suspect they may want to re-litigate this issue as it applies to generators.

All of the reasons that led us to include only the inner return type for async functions also apply to generators. The outer return type would be extremely verbose (e.g. -> impl Iterator<Item = T> + 'a), it would either require a new lifetime sugar that everyone would need to learn or mean no lifetime elision in these functions, and it would contain very little non-redundant information. It would be a very bad choice.
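
For a sense of the verbosity, here is an iterator-returning function written by hand with the “outer” return type on stable Rust today. The function itself is just an illustration I made up for this comparison:

```rust
// The "outer" return type, spelled out by hand. Under the "inner"
// style, a hypothetical generator syntax could be as short as:
//
//     gen fn evens(xs: &[u32]) -> u32 { ... }
//
// with the lifetime elided as usual.
fn evens<'a>(xs: &'a [u32]) -> impl Iterator<Item = u32> + 'a {
    xs.iter().copied().filter(|x| x % 2 == 0)
}

fn main() {
    assert_eq!(evens(&[1, 2, 3, 4]).collect::<Vec<_>>(), vec![2, 4]);
}
```

Everything in the signature after `->` is boilerplate that the inner return type would eliminate.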

The specific advantage that people cite is that it would provide a syntax for declaring that the return type is Send. This was a known issue when the inner return type was chosen, and I can reiterate the counter-arguments again. While the outer return type enables the function declarer to say that the return type is Send, because impl Trait has “auto trait transparency,” users would still need a way, when they use one of these functions (especially methods in certain generic contexts), to bound the return type as Send at the use site rather than the declaration site.

In fact, my opinion is that the use site is almost always where the Send bound wants to be declared, and almost never the declaration site. So some syntax like “return type notation” will be necessary anyway. The alleged benefit could instead be achieved with a where bound at the declaration site, a macro that adds that where bound, or some specific syntax. And since the decision has already been made for async functions to use the inner return type, the outer return type would just introduce inconsistency with no clear benefit, in my opinion.

Note that this is different from my opinion about “try functions,” which I think make sense showing the “outer” return type, because basically every factor is the opposite: the outer return type is not very verbose, it doesn’t conflict with lifetime annotations, and it contains a lot of useful information (because these functions return a specific type, and there are multiple possibilities, not an impl Trait). Again, this is an example of how allowing for differences in each application of a pattern can be valuable.

Semantic questions

Return expressions and ?

The first semantic question that needs to be addressed is how to handle return expressions in a generator. There are two cases that need to be handled: either the generator is over and has nothing left to yield, or the generator has one last thing to yield before it finishes.

In my opinion, it would be ill-advised to have some kind of “fallback-like” mechanism here in which the generator’s return expression could take multiple types (i.e. that it could either be () or the yielded type of the generator). These kinds of fallback mechanisms result in type inference problems and complicate the implementation. So there are two options, in my opinion:

  • Returns in generators take an expression with type ()
  • Returns in generators take an expression with type Option<T>

The advantage of the second is that when you want to yield something and then return, you can do this in one expression. The disadvantage, though, is that generators would always need to return an option, which means (for example) the terminal expression of a generator would need to be an Option<T>. This would mean adding a lot of None expressions to the ends of generators. In my opinion, having generator return expressions take () is preferable, because it avoids that problem, and writing a yield followed by a return statement when you want to stop yielding is just clearer in my view.
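
In hypothetical syntax, the two options would look like this (nothing here compiles today, and the example function is mine):

```rust
// Option 1: `return` takes `()`. Yielding a final item takes two steps:
//
//     gen fn take_through_zero(xs: Vec<u32>) -> u32 {
//         for x in xs {
//             if x == 0 {
//                 yield x;
//                 return;         // type ()
//             }
//             yield x;
//         }
//     }
//
// Option 2: `return` takes `Option<T>`. The final yield-and-finish is
// one expression, but every other exit path must produce an Option:
//
//     gen fn take_through_zero(xs: Vec<u32>) -> u32 {
//         for x in xs {
//             if x == 0 {
//                 return Some(x); // yield one last item and finish
//             }
//             yield x;
//         }
//         None                    // the terminal expression must be Option<T>
//     }
```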

The only problem with this is what to do about the ? operator. In non-generator functions, the ? operator expands to an early return of the error value. This doesn’t work if generator return expressions always take (). This means that if ? is to work in generator functions, the desugaring has to change. Note that even if generators return Option, the ? desugaring is going to have to change in some way anyway if you want to be able to “throw” errors in a generator, because the return expression would take Option<Result<T>>.

The obvious choice is to have ? in a generator desugar to a two-step operation in the error case: first it yields the error, then it returns, finishing the iterator. This combination (return expressions take () and ? desugars to a yield followed by a return) is what I implemented in propane.
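
Here is a hand-rolled sketch of that desugaring on stable Rust; the function and all names are mine for illustration, not propane’s API:

```rust
use std::num::ParseIntError;

// In a real generator, this whole dance would just be
// `let n = s.parse()?;` inside the generator body.
fn parse_all(inputs: Vec<String>) -> impl Iterator<Item = Result<u32, ParseIntError>> {
    let mut iter = inputs.into_iter();
    let mut done = false;
    std::iter::from_fn(move || {
        if done {
            return None; // the generator already "returned"
        }
        let s = iter.next()?;
        match s.parse::<u32>() {
            Ok(n) => Some(Ok(n)),
            // The `?` desugaring: yield the error once, then return.
            Err(e) => {
                done = true;
                Some(Err(e))
            }
        }
    })
}

fn main() {
    let out: Vec<_> = parse_all(vec!["1".into(), "x".into(), "3".into()]).collect();
    assert_eq!(out.len(), 2); // "3" is never reached: the error ended the iterator
    assert_eq!(out[0], Ok(1));
    assert!(out[1].is_err());
}
```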

Self-references

The thornier problem with generators - really the big open problem of the feature - is the problem of self-references. Fundamentally, the problem is that Iterator::next does not take a pinned reference to the iterator state, and therefore the iterator state cannot contain self-references. This is in contrast to Future::poll, which does take a pinned reference to the future state. The compiler already supports both self-referential coroutines and coroutines which cannot be self-referential. The problem is that self-references are a nice feature, the Iterator trait is stable, and it does not support them.
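
The contrast is visible directly in the two trait signatures from the standard library (comments mine):

```rust
// trait Iterator {
//     type Item;
//     // Plain `&mut self`: the state may be moved between calls to
//     // `next`, so it must not contain self-references.
//     fn next(&mut self) -> Option<Self::Item>;
// }
//
// trait Future {
//     type Output;
//     // `Pin<&mut Self>`: the state is guaranteed never to move again,
//     // so self-references held across await points are sound.
//     fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;
// }
```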

Note that the AsyncIterator trait is both unstable and currently takes a pinned reference to its state, so it can contain self-references. However, I have seen some people throw out ideas to make it “more consistent” with Iterator by making it only possible to hold self-references across await points and not yield points; these ideas are connected to the idea of defining AsyncIterator with an async next method. I think this would be a bad choice and demonstrates the flaw of over-pivoting on the idea of treating control-flow effects as a completely normalized abstraction rather than a pattern with some irregularities.

There are basically 3 options to solve the problem of self-referential generators:

  1. Generators which are not async cannot contain self-references. The compiler enforces this, and they implement the Iterator trait.
  2. Generators which are not async do not return an Iterator, but something else instead. This something else can be turned into an Iterator by pinning it in place first.
  3. Deprecate the Iterator trait and move the ecosystem and everything that is built on Iterator to a new trait.

There are some variations on these ideas which are in effect combinations of them. For example, generators by default could not take self-references, but an additional syntax could exist which allows them to, at the cost of requiring the user to pin them - or vice versa. Since these variations combine two of the options, it’s also possible to choose one option first and later extend it with the other.
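
As a rough sketch of what option 2 might look like in practice - every name and API here is hypothetical, invented for illustration:

```rust
// Hypothetical API; none of these types or methods exist today.
//
//     let g = fibonacci();              // some non-Iterator generator type
//     let mut g = std::pin::pin!(g);    // pin it in place on the stack
//     for n in g.as_mut().iter() {      // only a *pinned* generator
//         println!("{n}");              // can be iterated
//     }
```

The cost of this design is the extra pinning step in the common, non-self-referential case.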

In my opinion, the third option is not realistic. I think the Rust project should pick one of the first two options, stabilize that, and possibly someday introduce a syntax for the other if it turns out the flexibility is really valuable. The question that remains, then, is which option is the best “default” choice.

My suspicion is that the first option is probably the best approach. It involves the least invasive changes, it is the most intuitive, and all it means is that generators do not gain certain expressivity that iterators have never had in the first place. The main evidence I have to support this is that Iterator has been unable to support self-references for its entire existence, and this rarely appears as a problem, whereas when people were using Future combinators back in 2017, the problem of futures lacking self-references was evident immediately.

The fact that iterators tend to be short-lived means it’s usually fine to let state you need to reference exist outside of them. Problems may emerge from how people use generators once it becomes easier to implement complex iterators than it is today, but it’s hard to say without people actually using the generator feature. I’ll note that this difference in how they are generally used is also a big difference between iterators and async iterators: since an AsyncIterator is essentially always doing some kind of IO or coordination, it tends to live longer and to own more complex state than an Iterator does (again, the value of patterns over abstractions appears here).

If I could go back in time, I would just have Iterator take a pinned reference to itself. This isn’t the only trait I wish could be changed like this - the problem with Drop not being pinned is even worse in my view. But pinning didn’t exist when these traits were stabilized, and it’s not possible now to change them, even across edition boundaries. Alas.


This concludes my series on the problem of combining control-flow effects in Rust, and specifically the issue of making iteration, fallibility, and asynchrony compose well. This has been one of the biggest problems facing async Rust since the MVP was released in 2019, because the specifics of integrating asynchrony and iteration were left unresolved in the MVP. My opinion, as I have tried to elaborate over this series, is that the simplest, best, and fastest approach - in my view, the only approach that’s compatible with Rust in its current phase of development - is to ship generators in the 2024 edition, solving iteration in a more composable way without making big changes to the language.

I’ve had a mixed experience blogging about Rust again. I’ve received a lot of positive and affirming feedback from users - to be honest, I often don’t know how to handle the kind of effusive praise these posts sometimes get, but thank you to everyone who responded (including critically!) regardless. The response from the people who currently contribute to Rust has seemed more tepid, even defensive. In my current position, all I can do is lay out what I think Rust should do and why, and hope my input is considered in good faith.

I’m not sure how soon it will be, but I intend to pick up another series of posts about async Rust in the future. In my next series, I will explore the problems with async Rust’s existing concurrency primitives, with special attention to the idea that “structured concurrency” could provide a better approach.