Rust: [Stabilization] async/await MVP

Created on 26 Jun 2019 · 58 comments · Source: rust-lang/rust

Stabilization target: 1.38.0 (beta cut 2019-08-15)

Executive Summary

This is a proposal to stabilize a minimum viable async/await feature, which includes:

  • async annotations on functions and blocks, causing them to be delayed in evaluation and instead evaluate to a future.
  • An await operator, valid only within an async context, which takes a future as an argument and causes the outer future it is within to yield control until the future being awaited has completed.

Related previous discussions

RFCs:

Tracking issues:

Stabilizations:

Major decisions reached

  • The future that an async expression evaluates to is constructed from its initial state, running none of the body code before yielding.
  • The syntax for async functions uses the "inner" return type (the type that matches the internal return expression) rather than the "outer" return type (the future type that a call to the function evaluates to).
  • The syntax for the await operator is the "postfix dot syntax," expression.await, as opposed to the more common await expression or another alternative syntax.

Implementation work blocking stabilization

  • [x] async fns should be able to accept multiple lifetimes #56238
  • [x] generators size should not grow exponentially #52924
  • [ ] Minimal viable documentation for the async/await feature
  • [ ] Sufficient compiler tests of the behavior

Future work

  • Async/await in no-std contexts: async and await currently rely on thread-local storage (TLS) to work. This is an implementation issue, not part of the design, and though it is not blocking stabilization it is intended to be resolved eventually.
  • Higher order async functions: async as a modifier for closure literals is not stabilized here. More design work is needed regarding capture and abstraction over async closures with lifetimes.
  • Async trait methods: This involves significant design and implementation work, but is a highly desirable feature.
  • Stream processing: The counterpart to the Future trait in the futures library is the Stream trait, an asynchronous iterator. Integrating support for manipulating streams into std and the language is a desirable long-term feature.
  • Optimizing generator representations: More work can be done to optimize the representation of generators so that they are closer to optimally sized. We have ensured that this is strictly an optimization issue and is not semantically significant.

Background

Handling non-blocking IO is very important to developing high performance network services, a target use case for Rust with significant interest from production users. For this reason, a solution for making it ergonomic and feasible to write services using non-blocking IO has long been a goal of Rust. The async/await feature is the culmination of that effort.

Prior to 1.0, Rust had a greenthreading system, in which Rust provided an alternative, language-level threading primitive built on top of nonblocking IO. However, this system caused several problems: most importantly, it introduced a language runtime that impacted the performance even of programs that did not use it; it added significantly to the overhead of FFI; and it had several major unresolved design problems around the implementation of greenthread stacks.

After the removal of greenthreads, members of the Rust project began working on an alternative solution based on the futures abstraction. Sometimes also called promises, futures had been very successful in other languages as a library-based abstraction for nonblocking IO, and it was known that in the long term they mapped well to an async/await syntax which could make them only slightly less convenient than a completely invisible greenthreading system.

The major breakthrough in the development of the Future abstraction was the introduction of a poll-based model for futures. Whereas other languages use a callback-based model, in which the future itself is responsible for scheduling the callback to be run when it is complete, Rust uses a poll-based model, in which an executor is responsible for polling the future to completion, and the future merely informs the executor, via the Waker abstraction, that it is ready to make further progress. This model worked well for several reasons:

  • It enabled rustc to compile futures to state machines with minimal memory overhead, both in terms of size and indirection. This has significant performance benefits over the callback-based approach.
  • It allows components like the executor and reactor to exist as library APIs, rather than a part of the language runtime. This avoids introducing global costs that impact users who are not using this feature, and allows users to replace individual components of their runtime system easily, rather than requiring us to make a blackbox decision for them at the language level.
  • It makes all concurrency primitives libraries as well, rather than baking concurrency into the language through the semantics of the async and await operators. This makes concurrency clearer and more visible in the source text, which must use an identifiable concurrency primitive to introduce concurrency.
  • It allows for cancellation without overhead, by allowing executing futures to be dropped before they are completed. Making all futures cancellable for free has performance and code clarity benefits for executors and concurrency primitives.

(The last two points have also been identified as a source of confusion for users coming from other languages in which they are not true, who bring expectations from those languages with them. However, these properties are both unavoidable consequences of the poll-based model, which has other clear advantages, and are, in our opinion, beneficial once users understand them.)

However, the poll-based model suffered from serious ergonomic issues when it interacted with references: essentially, references across yield points introduced unresolvable compilation errors, even though they should be safe. This resulted in complex, noisy code full of Arcs, mutexes, and move closures, none of which was strictly necessary. Even setting this problem aside, without a language-level primitive, futures forced users into a style of writing highly nested callbacks.

For this reason, we pursued async/await syntactic sugar with support for normal use of references across yield points. After introducing the Pin abstraction which made references across yield points safe to support, we have developed a native async/await syntax which compiles functions into our poll-based futures, allowing users to get the performance advantages of asynchronous IO with futures while writing code which is very similar to standard imperative code. That final feature is the subject of this stabilization report.

async/await feature description

The async modifier

The keyword async can be applied in two places:

  • Before a block expression.
  • Before a free function or an associated function in an inherent impl.

(Other locations for async, such as closure literals and trait methods, will be developed further and stabilized in the future.)

The async modifier adjusts the item it modifies by "turning it into a future." In the case of a block, the block is evaluated to a future of its result, rather than its result. In the case of a function, calls to that function return a future of its return value, rather than its return value. Code inside an item modified by an async modifier is referred to as being in an async context.

The async modifier performs this modification by causing the item to instead be evaluated as a pure constructor of a future, taking arguments and captures as fields of the future. Each await point is treated as a separate variant of this state machine, and the future's "poll" method advances the future through these states based on a transformation of the code the user wrote, until eventually it reaches its final state.
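
As a minimal sketch (function and variable names here are purely illustrative), the two stabilized positions look like this; in both cases the async body evaluates to a value implementing std::future::Future rather than running immediately:

    use std::future::Future;

    // An async free function: calling it runs none of the body; the call
    // expression evaluates to a future of the return value.
    async fn add_one(x: u8) -> u8 {
        x + 1
    }

    fn make_future() -> impl Future<Output = u8> {
        // An async block likewise evaluates to a future of its result.
        async {
            let fut = add_one(41); // nothing has executed yet
            fut.await              // the await operator is described below
        }
    }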

The async move modifier

Similar to closures, async blocks can capture variables in the surrounding scope into the state of the future. Like closures, these variables are by default captured by reference. However, they can instead be captured by value, using the move modifier (just like closures). async comes before move, making these blocks async move { } blocks.
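
A small sketch of the difference (illustrative only): the first block borrows from its environment, while the move block takes ownership of what it captures.

    fn capture_demo() {
        let name = String::from("world");

        // Captured by reference: the future borrows `name` and cannot outlive it.
        let borrowed = async { println!("hello, {}", name) };
        drop(borrowed); // end the borrow before moving `name` below

        // Captured by value: `name` is moved into the future's state.
        let owned = async move { println!("hello, {}", name) };
        drop(owned);
    }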

The await operator

Within an async context, a new expression can be formed by combining an expression with the await operator, using this syntax:

expression.await

The await operator can only be used inside an async context, and the type of the expression it is applied to must implement the Future trait. The await expression evaluates to the output value of the future it is applied to.

The await operator yields control of the future that the async context evaluates to until the future it is applied to has completed. This operation of yielding control cannot be written in the surface syntax, but if it could (using the syntax YIELD_CONTROL! in this example) the desugaring of await would look roughly like this:

loop {
    match $future.poll(&waker) {
        Poll::Ready(value)  => break value,
        Poll::Pending       => YIELD_CONTROL!,
    }
}

This allows you to wait for futures to finish evaluating in an async context, forwarding the yielding of control through Poll::Pending outward to the outermost async context, ultimately to the executor onto which the future has been spawned.
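
Putting the pieces together, a minimal sketch (the block_on helper here is assumed to come from the futures crate's executor module; any executor would do):

    use futures::executor::block_on;

    async fn fetch_number() -> u32 {
        42
    }

    async fn double_it() -> u32 {
        // Yields control to the caller until `fetch_number`'s future completes.
        let n = fetch_number().await;
        n * 2
    }

    fn main() {
        // The outermost future is driven to completion by an executor.
        assert_eq!(block_on(double_it()), 84);
    }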

Major decision points

Yielding immediately

Our async functions and blocks "yield immediately" - constructing them is a pure function that puts them in an initial state prior to executing code in the body of the async context. None of the body code gets executed until you begin polling that future.

This is different from many other languages, in which calls to an async function trigger work to begin immediately. In these other languages, async is an inherently concurrent construct: when you call an async function, it triggers another task to begin executing concurrent with your current task. In Rust, however, futures are not inherently executed in a concurrent fashion.

We could have async items execute up to the first await point when they are constructed, instead of making them pure. However, we decided this was more confusing: whether code is executed during constructing the future or polling it would depend on the placement of the first await in the body. It is simpler to reason about for all code to be executed during polling, and never during construction.
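
A sketch of the consequence (names illustrative): the body of an async block runs only when the resulting future is polled, so the second message below is printed before the first.

    async fn laziness_demo() {
        let fut = async {
            println!("body running");   // printed second
        };
        println!("future constructed"); // printed first
        fut.await; // only now does the body execute
    }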

Reference:

Return type syntax

The syntax of our async functions uses the "inner" return type, rather than the "outer" return type. That is, they say that they return the type that they eventually evaluate to, rather than saying that they return a future of that type.

On one level, this is a decision about what kind of clarity is preferred: because the signature also includes the async annotation, the fact that the function returns a future is already explicit in the signature. It can still be helpful for users to see the future type spelled out without having to notice the async keyword, but writing it out also feels like boilerplate, since the same information is conveyed by the async keyword.

What really tipped the scales for us was the issue of lifetime elision. The "outer" return type of any async function is impl Future<Output = T>, where T is the inner return type. However, that future also captures the lifetimes of any input arguments in itself: this is the opposite of the default for impl Trait, which is not assumed to capture any input lifetimes unless you specify them. In other words, using the outer return type would mean that async functions never benefited from lifetime elision (unless we did something even more unusual like having lifetime elision rules work differently for async functions and other functions).

We decided that, given how verbose and frankly confusing the outer return type would be to write, the extra signal that the function returns a future was not worth requiring users to write it.
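
For illustration, a sketch of what the two spellings look like for a function taking a reference (the second form is roughly what the first corresponds to; note that it must name the lifetime explicitly because the returned future captures it):

    use std::future::Future;

    // The stabilized "inner" return type, with ordinary lifetime elision.
    async fn first_byte(buf: &[u8]) -> u8 {
        buf[0]
    }

    // Roughly the "outer" equivalent: the captured input lifetime has to be
    // spelled out, so elision no longer helps.
    fn first_byte_outer<'a>(buf: &'a [u8]) -> impl Future<Output = u8> + 'a {
        async move { buf[0] }
    }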

Destructor ordering

The ordering of destructors in async contexts is the same as in non-async contexts. The exact rules are a bit complicated and out of scope here, but in general, values are destroyed when they go out of scope. This means, though, that they continue to exist for some time after their last use until they get cleaned up. If that time spans await points, those values need to be preserved in the state of the future so their destructors can be run at the appropriate time.
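
A small sketch (the Guard type is illustrative): the guard below is dropped at the end of the function body, after both awaits, so it has to be stored in the future's state across them.

    struct Guard;
    impl Drop for Guard {
        fn drop(&mut self) {
            println!("guard dropped");
        }
    }

    async fn with_guard() {
        let _guard = Guard;
        step().await;
        step().await;
        // `_guard` is dropped here, exactly as it would be in a non-async fn,
        // so it lives in the future's state between the polls above.
    }

    async fn step() {}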

We could, as an optimization to the size of future states, instead re-order destructors to be earlier in some or all contexts (for example, unused function arguments could be dropped immediately, instead of being stored in the state of the future). However, we decided not to do this. The order of destructors can be a thorny and confusing issue for users, and is sometimes very significant for program semantics. We've chosen to forego this optimization in favor of guaranteeing a destructor ordering that is as straightforward as possible - the same destructor ordering as if all of the async and await keywords were removed.

(Someday, we may be interested in pursuing ways of marking destructors as pure and re-orderable. That is future design work that has implications unrelated to async/await as well.)

Reference:

Await operator syntax

One major deviation from other languages' async/await features is the syntax of our await operator. This has been the subject of an enormous amount of discussion, more than any other decision we've made in the design of Rust.

Since 2015, Rust has had a postfix ? operator for ergonomic error handling. Since long before 1.0, Rust has also had a postfix . operator for field access and method calls. Because the core use case for futures is to perform some sort of IO, the vast majority of futures evaluate to a Result with some sort of error. This means that in practice, nearly every await operation is sequenced with either a ? or a method call after it. Given the standard precedence for prefix and postfix operators, this would have caused nearly every await operator to be written (await future)?, which we regarded as highly unergonomic.

We decided therefore to use a postfix syntax, which composes very well with the ? and . operators. After considering many different syntactic options, we chose to use the . operator followed by the await keyword.
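
A sketch with stand-in types (Client, Response, and Error below are purely illustrative, not a real library) showing how the postfix operator chains with ? and further method calls:

    struct Client;
    struct Response;
    struct Error;

    impl Client {
        async fn get(&self, _url: &str) -> Result<Response, Error> {
            Ok(Response)
        }
    }
    impl Response {
        async fn body(&self) -> Result<String, Error> {
            Ok(String::from("hello"))
        }
    }

    async fn fetch(client: &Client, url: &str) -> Result<String, Error> {
        // Reads left to right; a prefix operator would have forced something
        // like `(await (await client.get(url))?.body())?` instead.
        let body = client.get(url).await?.body().await?;
        Ok(body)
    }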

Reference:

Supporting both single and multithreaded executors

Rust is designed to make writing concurrent and parallel programs easier without imposing costs on people writing programs that run on a single thread. It's important to be able to run async functions both on singlethreaded executors and multithreaded executors. The key difference between these two use cases is that multithreaded executors will bound the futures they can spawn by Send, and singlethreaded executors will not.

Similar to the existing behavior of impl Trait syntax, async functions "leak" the auto traits of the future they return. That is, in addition to observing that the outer return type is a future, the caller can also observe if that type is Send or Sync, based on an examination of its body. This means that when the return type of an async fn is scheduled onto a multithreaded executor, it can check whether or not this is safe. However, the type is not required to be Send, and so users on singlethreaded executors can take advantage of more performant single-threaded primitives.
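
A sketch of the leaking behavior (require_send is an illustrative helper, not a library function): holding a non-Send value such as an Rc across an await point makes the returned future non-Send, while otherwise identical code stays Send.

    use std::rc::Rc;

    async fn step() {}

    async fn stays_send(x: u64) -> u64 {
        step().await;
        x + 1
    }

    async fn not_send() -> u64 {
        let counter = Rc::new(1u64); // Rc is !Send
        step().await;                // ...and it is held across this await
        *counter
    }

    fn require_send<T: Send>(_: T) {}

    fn check() {
        require_send(stays_send(1));  // compiles: the future is Send
        // require_send(not_send()); // would fail: the future is not Send
    }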

There was some concern that this would not work well when async functions were expanded into methods, but after some discussion it was determined that the situation would not be significantly different.

Reference:

Known stabilization blockers

State size

Issue: #52924

The way the async transformation to a state machine is currently implemented is not at all optimal, causing the state to become much larger than necessary. Because the state size actually grows superlinearly, it is possible to trigger stack overflows on the real stack as the state grows larger than the stack of a normal system thread. Improving this codegen so that the size is more reasonable, at least not bad enough to cause stack overflows in normal use, is a blocking bug fix.

Multiple lifetimes in async functions

Issue: #56238

async functions should be able to have multiple lifetimes in their signature, all of which are "captured" in the future the function evaluates to when it is called. However, the current lowering to impl Future inside the compiler does not support multiple input lifetimes; a deeper refactor is needed to make this work. Because users are very likely to write functions with multiple (probably all elided) input lifetimes, this is a blocking bug fix.
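
A small example of the pattern covered by the issue, with both input lifetimes elided; code like this must compile before stabilization:

    async fn join_names(first: &str, second: &str) -> String {
        // The returned future captures both (elided) input lifetimes.
        format!("{} {}", first, second)
    }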

Other blocking issues:


Future work

All of these are known and very high priority extensions to the MVP that we intend to pick up work on as soon as we have shipped the initial version of async/await.

Async closures

In the initial RFC, we also supported the async modifier as a modifier on closure literals, creating anonymous async functions. However, experience using this feature has shown that there are still a number of design questions to resolve before we feel comfortable stabilizing this use case:

  1. The nature of variable capture becomes more complicated in async closures and may require some syntactic support.
  2. Abstracting over async functions with input lifetimes is currently not possible and may require some additional language or library support.
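
The workaround available in the meantime is an ordinary closure whose body is an async block; a minimal sketch (names illustrative):

    fn closure_demo() {
        // Not an async closure literal, but close: the closure itself is plain,
        // and each call evaluates to a new future.
        let add_async = |x: u64, y: u64| async move { x + y };
        let fut = add_async(1, 2);
        let _ = fut; // await it inside an async context, or hand it to an executor
    }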

No-std support

The current implementation of the await operator requires TLS to pass the waker downward as it polls the inner future. This is essentially a "hack" to make the syntax work on systems with TLS as soon as possible. In the long term, we have no intention of committing to this usage of TLS, and would prefer to pass the waker as a normal function argument. However, this requires deeper changes to the state machine generation code so that it can handle taking arguments.

Though we are not blocking on implementing this change, we do consider it a high priority as it prevents using async/await on systems without TLS support. This is a pure implementation issue: nothing in the design of the system requires TLS usage.

Async trait methods

We do not currently allow async associated functions or methods in traits; this is the only place in which you can write fn but not async fn. Async methods would very clearly be a powerful abstraction and we want to support them.

An async method would functionally be treated as a method returning an associated type that implements Future; each async method would generate a unique future type for the state machine that that method translates into.

However, because that future would capture all of its inputs, any input lifetime or type parameters would need to be captured in that associated type as well. This requires a concept called generic associated types, a feature we have long wanted but have not yet properly implemented. Thus, the resolution of async methods is tied to the resolution of generic associated types.
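
A sketch of the desugaring described above (this does not compile today precisely because it needs generic associated types; the trait and method names are illustrative):

    use std::future::Future;

    trait Database {
        // The future returned by the "async method" must capture the `self`
        // lifetime, so the associated type is generic over a lifetime: a GAT.
        type GetFut<'a>: Future<Output = u32>
        where
            Self: 'a;

        fn get<'a>(&'a self) -> Self::GetFut<'a>;
    }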

There are also outstanding design issues. For example, are async methods interchangeable with methods returning future types that would have the same signature? Additionally, async methods present additional issues around auto traits, since you may need to require that the future returned by some async method implements an auto trait when you abstract over a trait with an async method.

Once we have even this minimal support, there are other design considerations for future extensions, like the possibility of making async methods "object safe."

Generators and async generators

We have an unstable generator feature using the same coroutine state machine transformation to take functions which yield multiple values and turn them into state machines. The most obvious use case for this feature is to create functions that compile to "iterators," just as async functions compile to futures. Similarly, we could compose these two features to create async generators - functions that compile to "streams," the async equivalent of iterators. There are really clear use cases for this in network programming, which often involves streams of messages being sent between systems.

Generators have a lot of open design questions because they are a very flexible feature with many possible options. The final design for generators in Rust in terms of syntax and library APIs is still very up in the air and uncertain.



All 58 comments

@rfcbot fcp merge

Team member @withoutboats has proposed to merge this. The next step is review by the rest of the tagged team members:

  • [x] @Centril
  • [x] @cramertj
  • [x] @eddyb
  • [x] @joshtriplett
  • [x] @nikomatsakis
  • [ ] @pnkfelix
  • [x] @scottmcm
  • [x] @withoutboats

Concerns:

Once a majority of reviewers approve (and at most 2 approvals are outstanding), this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up!

See this document for info about what commands tagged team members can give me.

(Just registering the existing blockers in the report above to make sure they don't slip)

@rfcbot concern implementation-work-blocking-stabilization

Team member ... has proposed to merge this

How can one merge a Github issue (not a pull request)?

@vi The bot is just a bit daft and doesn't check whether it's an issue or PR :) You can replace "merge" with "accept" here.

Wow, thank you for the comprehensive summary! I've only been following tangentially, but am completely confident you're on top of everything.

@rfcbot reviewed

Could it be possible to explicitly add “Triage AsyncAwait-Unclear issues” to the stabilization blockers (and/or register a concern for that)?

I've got https://github.com/rust-lang/rust/issues/60414 that I think is important (obviously, it's my bug :p), and would like to at least have it explicitly deferred before stabilization :)

I'd just like to express the community thanks for the effort the Rust teams have put into this feature! There's been a lot of design, discussion, and a few breakdowns in communication, but at least I, and hopefully many others, feel confident that through it all we've found the best solution possible for Rust. :tada:

(That said, I'd like to see a mention of the problems with bridging to completion-based and async-cancellation system APIs in future possibilities. TL;DR they still have to pass around owned buffers. It's a library issue, but one worth mentioning.)

I also would like to see a mention of problems with completion-based APIs. (see this internals thread for context) Considering IOCP and introduction of io_uring, which may become The Way for async IO on Linux, I think it's important to have a clear way forward for handling them. IIUC hypothetical async drop ideas cannot be implemented safely, and passing owned buffers will be less convenient and potentially less performant (e.g. due to worse locality or additional copies).

@newpavlov I've implemented similar things for Fuchsia, and it is entirely possible to do without async drop. There are a few different routes to doing this, such as using resource pooling, where acquiring a resource potentially has to wait for some cleanup work to finish on old resources. The current futures API can and has been used to solve these problems effectively in production systems.

However, this issue is about the stabilization of async/await, which is orthogonal to the futures API design, which has already stabilized. Feel free to ask further questions or open an issue for discussion on the futures-rs repo.

@Ekleog

Could it be possible to explicitly add “Triage AsyncAwait-Unclear issues” to the stabilization blockers (and/or register a concern for that)?

Yup, that's something we've been doing every week. WRT that specific issue (#60414), I believe it's important and would love to see it fixed, but we haven't yet been able to decide whether or not it should block stabilization, especially since it's already observable in -> impl Trait functions.

@cramertj Thank you! I think #60414 's issue is basically “the error can arise really quickly now”, while with -> impl Trait it looks like no one had even noticed it before -- then it's alright if it gets deferred anyway, some issues will have to :) (FWIW it arose in natural code in a function where I return both () at a place and T::Assoc at another, which IIRC made me unable to get it to compile -- haven't checked the code since opening #60414, though, so maybe my recollection is wrong)

@Ekleog Yeah that makes sense! I can definitely see why it'd be a pain-- I've created a zulip stream to dive more into that specific issue.

EDIT: never mind, I missed the 1.38 target.

@cramertj

There are a few different routes to doing this, such as using resource pooling where acquiring a resource potentially has to wait for some cleanup work to finish on old resources.

Aren't they less efficient compared to keeping buffers as part of the future state? My main concern is that the current design will not be zero-cost (in the sense that you will be able to create more efficient code by dropping the async abstraction) and will be less ergonomic on completion-based APIs, and there is no clear way to fix it. It's not a show-stopper by any means, but I think it's important not to forget about such deficiencies in the design, thus the request to mention it in the OP.

@theduke

The lang team can of course judge this better than me, but delaying to 1.38 to ensure a stable implementation would seem much more sensible.

This issue targets 1.38, see first line of description.

@huxi thanks, I missed that. Edited my comment.

@newpavlov

Aren't they less efficient compared to keeping buffers as part of the future state? My main concern is that the current design will not be zero-cost (in the sense that you will be able to create more efficient code by dropping the async abstraction) and will be less ergonomic on completion-based APIs, and there is no clear way to fix it. It's not a show-stopper by any means, but I think it's important not to forget about such deficiencies in the design, thus the request to mention it in the OP.

No, not necessarily, but let's move this discussion to an issue on a separate thread, since it's unrelated to the stabilization of async/await.

(That said, I'd like to see a mention of the problems with bridging to completion-based and async-cancellation system APIs in future possibilities. TL;DR they still have to pass around owned buffers. It's a library issue, but one worth mentioning.)

I also would like to see a mention of problems with completion-based APIs. (see this internals thread for context) Considering IOCP and introduction of io_uring, which may become The Way for async IO on Linux, I think it's important to have a clear way forward for handling them.

I agree with Taylor that discussing API designs in this problem space would be off topic, but I do want to address one specific aspect of these comments (and this discussion around io_uring in general) that is relevant to async/await stabilization: the problem of timing.

io_uring is an interface that is coming to Linux this year, 2019. The Rust project has been working on the futures abstraction since 2015, four years ago. The fundamental choice to favor a poll based over a completion based API occurred during 2015 and 2016. At RustCamp in 2015, Carl Lerche talked about why he made that choice in mio, the underlying IO abstraction. In this blog post in 2016, Aaron Turon talked about the benefits for creating higher level abstractions. These decisions were made a long time ago and we could not have gotten to the point we are now without them.

Suggestions that we should revisit our underlying futures model are suggestions that we should revert back to the state we were in 3 or 4 years ago, and start over from that point. What kind of abstraction could cover a completion-based IO model without introducing overhead for higher level primitives, like Aaron described? How will we map that model to a syntax that lets users write "normal Rust + minor annotations" the way async/await does? How will we be able to handle integrating that into our memory model, as we've done for these state machines with pin? Trying to provide answers to these questions would be off-topic for this thread; the point is that answering them, and proving the answers correct, is work. What amounts to a solid decade of labor-years between the different contributors so far would have to be redone again.

The goal of Rust is to ship a product that people can use, and that means we have to ship. We can't always be stopping to look into the future at what may become a big deal next year, and restarting our design process to incorporate that. We do the best we can based on the situation we find ourselves in. Obviously it can be frustrating to feel like we barely missed a big thing, but as it stands we don't have a full view either a) of what the best outcome for handling io_uring will be, or b) of how important io_uring will be in the ecosystem as a whole. We can't revert 4 years of work based on this.

There are already similar, probably even more serious, limitations of Rust in other spaces. I want to highlight one I looked at with Nick Fitzgerald last fall: wasm GC integration. The plan for handling managed objects in wasm is to essentially segment the memory space, so that they exist in a separate address space from unmanaged objects (indeed, someday in many separate address spaces). Rust's memory model is simply not designed to handle separate address spaces, and any unsafe code that deals with heap memory today assumes there is only 1 address space. While we've sketched out both breaking and technically-nonbreaking-but-extremely-disruptive technical solutions, the most likely path forward is to accept that our wasm GC story may not be perfectly optimal, because we are dealing with the limitations of Rust as it exists.

An interesting aspect that we are stabilizing here is that we are making self-referential structs available from safe code. What makes this interesting is that in a Pin<&mut SelfReferentialGenerator>, we have a mutable reference (stored as a field in the Pin) pointing to the entire generator state, and we have a pointer inside that state pointing to another piece of the state. That inner pointer aliases with the mutable reference!

The mutable reference, to my knowledge, does not get used to actually access the part of the memory that the pointer-to-another-field points to. (In particular, there is no clone method or so that would read the pointed-to field using any other pointer than the self-referential one.) Still, this is getting way closer to having a mutable reference alias with something than anything else in the core ecosystem, in particular anything else that ships with rustc itself. The "line" we are riding here is getting very thin, and we have to be careful not to lose all these nice optimizations that we want to do based on mutable references.

There's probably little we can do about that at this point, in particular since Pin is already stable, but I feel it is worth pointing out that this will significantly complicate whatever the rules end up being for which aliasing is allowed and which is not. If you thought Stacked Borrows was complicated, prepare for things getting worse.

Cc https://github.com/rust-lang/unsafe-code-guidelines/issues/148

The mutable reference, to my knowledge, does not get used to actually access the part of the memory that the pointer-to-another-field points to.

People have talked about making all of these coroutine types implement Debug; it sounds like that conversation should also integrate the unsafe code guidelines to be sure what it's safe to debug print.

People have talked about making all of these coroutine types implement Debug; it sounds like that conversation should also integrate the unsafe code guidelines to be sure what it's safe to debug print.

Indeed. Such a Debug implementation, if it prints the self-referenced fields, would likely prohibit MIR-level reference-based optimizations inside generators.

Update regarding blockers:

The two high level blockers have both made great progress and might actually both be finished (?). More info from @cramertj @tmandry and @nikomatsakis about this would be great:

  • The multiple lifetimes issue should have been fixed by #61775
  • The size issue is more ambiguous; there will always be more optimizations to do, but I think the low hanging fruit of avoiding obvious exponential increase footguns has mostly been resolved?

This leaves documentation and testing as the major blockers on stabilizing this feature. @Centril has consistently expressed concerns that the feature is not well tested or polished enough; @Centril is there anywhere you have enumerated specific concerns that can be checked off to drive this feature to stabilization?

I'm not sure if anyone is driving documentation. Anyone who wants to focus on improving the in-tree documentation in the book, reference, etc would be doing a great service! Out of tree documentation like in the futures repo or areweasyncyet has a bit of extra time.

As of today we have 6 weeks until the beta is cut, so let's say we have 4 weeks (until August 1) to get these things done to be confident we won't slip 1.38.

The size issue is more ambiguous; there will always be more optimizations to do, but I think the low hanging fruit of avoiding obvious exponential increase footguns has mostly been resolved?

I believe so, and some others were also closed recently; but there are other blocking issues.

@Centril is there anywhere you have enumerated specific concerns that can be checked off to drive this feature to stabilization?

There's a dropbox paper with a list of things we wanted to be tested and there's https://github.com/rust-lang/rust/issues/62121. Other than that I'll try to re-review the areas I think are under-tested ASAP. That said, some areas are now pretty well tested.

Anyone who wants to focus on improving the in-tree documentation in the book, reference, etc would be doing a great service!

Indeed; I would be happy to review PRs to the reference. Also cc @ehuss.


I would also like to move async unsafe fn out of the MVP into its own feature gate because I think a) it has seen little use, b) it is not particularly well tested, c) it ostensibly behaves weirdly because the .await point is not where you write unsafe { ... }, and this is understandable from a "leaky implementation" POV but not so much from an effects POV, d) it has seen little discussion and was not included in the RFC nor this report, and e) we did this with const fn and it worked fine. (I can write up the feature gating PR)

I am fine with destabilizing async unsafe fn, though I am skeptical of us winding up with a different design than the present one. But it seems wise to give us time to figure that out!

I created https://github.com/rust-lang/rust/issues/62500 for moving async unsafe fn to a distinct feature gate and listed it as a blocker. We should probably create a proper tracking issue as well, I guess.

I'm strongly skeptical that we'll reach a different design for async unsafe fn and am surprised by the decision to not include it in the initial round of stabilization. I have written a number of async fns that are unsafe and will make them async fn really_this_function_is_unsafe() or something, I suppose. This seems like a regression in a basic expectation that Rust users have in terms of being able to define functions that require unsafe { ... } to call. Yet another feature gate will contribute to the impression that async/await is unfinished.

@cramertj seems like we ought to discuss! I created a Zulip topic for it, to try and keep this tracking issue from getting too overloaded.

Regarding future sizes, the cases that affect every await point are optimized. The last remaining issue that I know of is #59087, where any borrow of a future before awaiting can double the size allocated for that future. This is pretty unfortunate, but still quite a bit better than where we were before.

I have an idea of how to fix that issue, but unless this is way more common than I realize, it probably shouldn't be a blocker for a stable MVP.

That said, I still need to look at the impact of these optimizations on Fuchsia (that's been blocked for a while but should clear up today or tomorrow). It's quite possible we'll discover more cases, and will need to decide if any of them should be blocking.

@cramertj (Reminder: I do use async/await and want it to stabilize ASAP) Your argument sounds like an argument for delaying stabilization of async/await, not for stabilizing async unsafe right now without proper experimentation and thought.

Especially as it wasn't included in the RFC, and will potentially trigger another “impl trait in argument position” shitstorm if it was forced out this way.

[Side note that doesn't really deserve discussion here: for “Yet another feature gate will contribute to the impression that async/await is unfinished”, I've found a bug every few hours of using async/await, spread by the few months legitimately needed by the rustc team to fix them, and it's the thing that makes me say it's unfinished. Last one was fixed a few days ago, and I'm really hoping I won't uncover another one when I try again to compile my code with a newer rustc, but…]

Your argument sounds like an argument for delaying stabilization of async/await, not for stabilizing async unsafe right now without proper experimentation and thought.

No, it isn't an argument for that. I believe that async unsafe is ready, and can't imagine any other design for it. I believe there are only negative consequences to not including it in this initial release. I do not believe that delaying async/await as a whole, nor async unsafe specifically, will produce a better result.

can't imagine any other design for it

An alternative design, though one that definitely requires complicated extensions: async unsafe fn is unsafe to .await, not to call(). The reasoning behind this being that _nothing unsafe can be done_ at the point where the async fn is called and creates the impl Future. All that step does is stuff data into a struct (in effect, all async fn are const to call). The actual point of unsafety is advancing the future with poll.

(imho, if the unsafe is immediate, unsafe async fn makes more sense, and if the unsafe is delayed, async unsafe fn makes more sense.)

Of course, if we never get a way to say e.g. unsafe Future where all methods of Future are unsafe to call, then the only option is "hoisting" the unsafe to the creation of the impl Future, with the contract of that unsafe being to use the resulting future in a safe way. But this can also be almost trivially done without unsafe async fn by just "desugaring" manually to an async block: unsafe fn os_stuff() -> impl Future { async { .. } }.

On top of that, though, there's a question of whether there actually exists a way to have invariants that need to be held once polling starts that don't need to be held at creation. It's a common pattern in Rust that you use an unsafe constructor to build a safe type (e.g. Vec::from_raw_parts). But key there is that after the construction, the type _cannot_ be misused; the unsafe scope is over. This scoping of unsafety is key to Rust's guarantees. If you introduce an unsafe async fn that creates a safe impl Future with requirements for how/when it's polled, then pass it to safe code, that safe code is suddenly inside your unsafety barrier. And this is _very_ likely to happen as soon as you use this future in any manner other than immediately awaiting it, as it'll likely go through _some_ external combinator.

I guess the TL;DR of this is that there are definitely corners of async unsafe fn that should be discussed properly before stabilizing it, especially with the direction of const Trait potentially being introduced (I have a draft blog post about generalizing this to a "weak 'effects' system" with any fn-modifying keyword). However, unsafe async fn might actually be clear enough about the "ordering"/"positioning" of the unsafe to stabilize.

I believe that an effects-based unsafe Future trait is not only out of reach of anything we know how to express in the language or the compiler today, but that it would ultimately be a worse design due to the additional effect-polymorphism that it would require combinators to have.

nothing unsafe can be done at the point where the async fn is called and creates the impl Future. All that step does is stuff data into a struct (in effect, all async fn are const to call). The actual point of unsafety is advancing the future with poll.

It's true that since an async fn can't run any user code prior to being .awaited, any undefined behavior would likely be delayed until .await was called. I think, though, that there's an important distinction between the point of UB and the point of unsafety. The actual point of unsafety is wherever an API author decides that a user needs to promise that a set of non-statically-verifiable invariants are met, even if the result of those invariants being violated wouldn't cause UB until later in some other safe code. One common example of this is an unsafe function to create a value that implements a trait with safe methods (exactly what this is). I've seen this used to ensure that e.g. Visitor-trait-implementing types whose implementations rely on unsafe invariants can be used soundly, by requiring unsafe to construct the type. Other examples include things like slice::from_raw_parts, which itself will not cause UB (type validity invariants aside), but accesses to the resulting slice will.

I don't believe that async unsafe fn represents a unique or interesting case here-- it follows a well-established pattern for performing unsafe behaviors behind a safe interface by requiring an unsafe constructor.

@cramertj The fact you're even having to argue for this (and I'm not suggesting I think the current solution is a bad one, or that I have a better idea) means, to me, that this debate should be at a place people who care about Rust should follow: the RFC repository.

As a reminder, a quote from its readme:

You need to follow this process if [...] :

  • Any semantic or syntactic change to the language that is not a bugfix.
  • [... and also non-cited stuff]

I'm not saying any change to the current design will happen. Actually, thinking about it for a few minutes makes me think it's likely the best design I could think of. But process is what prevents our beliefs from becoming a danger to Rust, and by not following the process here we're missing the wisdom of many people who follow the RFC repository but don't read every single issue.

Sometimes not following the process might make sense. Here I can see no urgency that would warrant ignoring the process just to avoid some 2 weeks of FCP delay.

So please let rust be honest with its community about the promises it gives in its own readme, and just keep that feature below a feature gate until there's at least an accepted RFC and hopefully some more use of it in the wild. Whether it's the whole async/await feature gate or just an unsafe-async feature gate I don't care, but just don't stabilize something that has (AFAIK) seen little usage beyond the async-wg and is barely known about in the overall community.

I am writing a first pass at reference material for the book. Along the way, I noticed that the async-await RFC says that the ? operator's behavior has not yet been determined. And yet it seems to work fine in an async block (playground). Should we move that to a separate feature gate? Or was that resolved at some point? I didn't see it in the stabilization report, but perhaps I missed it.

(I also asked this question on Zulip and would prefer responses there, as it's easier to manage for me.)

Yes, it was discussed and resolved along with the behavior of return, break, continue et. al. which all do "the only possible thing" and behave as they would inside of a closure.

let f = unsafe { || {...} }; is also safe to call and IIRC it's equivalent to moving the unsafe to inside of the closure.
Same thing for unsafe fn foo() -> impl Fn() { || {...} }.

This, to me, is precedent enough for "the unsafe thing happens after leaving the unsafe scope".

The same holds for other places. As previously pointed out, unsafe is not always where the potential UB would be. Example:

    let mut vec: Vec<u32> = Vec::new();

    unsafe { vec.set_len(100); }      // <- unsafe

    let val = vec.get(5).unwrap();     // <- UB
    println!("{}", val);

It just seems like a misunderstanding of unsafe to me - unsafe doesn't mark that "an unsafe operation occurs inside here" - it marks "I am guaranteeing that I uphold the necessary invariants here." While you could be upholding the invariants at the await point, because it involves no variable parameters, it's not a very obvious site for checking that you uphold the invariants. It makes much more sense, and is far more consistent with how all of our unsafe abstractions work, to guarantee you uphold invariants at the call site.

This is connected to why thinking of unsafe as an effect leads to inaccurate intuitions (as Ralf argued when that idea was first brought up last year). Unsafety is specifically, intentionally, not infectious. While you can write unsafe functions that call other unsafe functions and just forward their invariants up the call stack, this is not the normal way that unsafe is used at all; it's actually a syntactic marker used for defining contracts on values and manually checking that you uphold them.

So it's not the case that every design decision needs a whole RFC, but we've been working on trying to provide more clarity and structure around how decisions are made. The list of major decision points in the opening of this issue is an example of that. Using the tools available to us, I'd like to take a stab at a structured consensus point around this issue of unsafe async fns, so this is a summary post with a poll.

async unsafe fn

async unsafe fns are async functions which can only be called inside an unsafe block. Inside their body is treated as an unsafe scope. The primary alternative design would be to make async unsafe fns unsafe to await, rather than to call. There are a number of solid reasons to prefer the design in which they are unsafe to call:

  1. It is consistent syntactically with the behavior of non-async unsafe fns, which are also unsafe to call.
  2. It is more consistent with how unsafe works in general. An unsafe function is an abstraction which depends on some invariants being upheld by its caller. That is, it's not the case that it's about marking "where the unsafe operation happens" but "where the invariant is guaranteed to be upheld." It's much more sensible to check that the invariants are upheld at the call site, where the arguments are actually specified, than at the await site, separate from when the arguments were selected and verified. This is very normal for unsafe functions in general, which often determine some state that other, safe functions expect to be correct.
  3. It is more consistent with the desugaring notion of async fn signatures, where you can model the signature as equivalent to removing the async modifier and wrapping the return type in a future.
  4. The alternative is not viable to implement in the near or medium term (meaning several years). There is no way to create a future which is unsafe to poll in the currently designed Rust language. Some kind of "unsafe as an effect" would be a huge change that would have far reaching implications and need to deal with how it is backward compatible with unsafe as it exists today already (like, normal unsafe functions and blocks). Adding async unsafe fns does not significantly change that landscape, whereas async unsafe fns under the current interpretation of unsafe have real practical use cases in the near and medium term.
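
To make the accepted behavior concrete, a minimal sketch (names illustrative): the call requires an unsafe block, the body is treated as an unsafe scope, and awaiting the resulting future is safe.

    use std::future::Future;

    // Caller contract: `ptr` must be valid to read for as long as the returned
    // future is alive.
    async unsafe fn read_byte(ptr: *const u8) -> u8 {
        // The body is an unsafe scope, so no inner `unsafe` block is needed.
        *ptr
    }

    fn make_read(ptr: *const u8) -> impl Future<Output = u8> {
        // The invariant is discharged here, at the call site, where the
        // argument is chosen; awaiting the future later is safe.
        unsafe { read_byte(ptr) }
    }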

@rfcbot ask lang "Do we accept stabilizing async unsafe fn as an async fn which is unsafe to call?"

I have no idea how to make a poll with rfcbot but I've nominated it at least.

Team member @withoutboats has asked teams: T-lang, for consensus on:

"Do we accept stabilizing async unsafe fn as an async fn which is unsafe to call?"

  • [x] @Centril
  • [x] @cramertj
  • [x] @eddyb
  • [ ] @joshtriplett
  • [x] @nikomatsakis
  • [ ] @pnkfelix
  • [ ] @scottmcm
  • [x] @withoutboats

@withoutboats

I'd like to take a stab at a structured consensus point around this issue of unsafe async fns, so this is a summary post with a poll.

Thanks for the write-up. The discussion has me convinced that async unsafe fn as it works in nightly today behaves right. (Tho some tests should probably be added since it looked sparse.) Also, could you please amend the report at the top with parts of your report + a description of how async unsafe fn behaves?

It is more consistent with how unsafe works in general. An unsafe function is an abstraction which depends on some invariants being upheld by its caller. That is, it's not the case that it's about marking "where the unsafe operation happens" but "where the invariant is guaranteed to be upheld." It's much more sensible to check that the invariants are upheld at the call site, where the arguments are actually specified, than at the await site, separate from when the arguments were selected and verified. This is very normal for unsafe functions in general, which often determine some state that other, safe functions expect to be correct.

As someone not paying too close attention, I would agree and think the solution here is good documentation.

I might be off the mark here, but given that

  • futures are combinatorial by nature, it's fundamental that they are composable.
  • await points inside a future implementation are generally an invisible implementation detail.
  • the future is very distant from the execution context, with the actual user maybe in-between instead of at the root.

it seems to me that invariants depending on specific awaiting usage/behavior are somewhere between a bad idea and impossible to rule safe.

If there are cases where the awaited output value is what is involved in upholding the invariants, I assume the future could simply have an output that is a wrapper requiring unsafe access, like

struct UnsafeOutput<T>(T);
impl<T> UnsafeOutput<T> {
    unsafe fn unwrap(self) -> T { self.0 }
}

Given that the unsafeness is before the asyncness in this "early unsafe", I'd be much happier with the modifier order being unsafe async fn than async unsafe fn, because unsafe (async fn) maps much more obviously onto that behavior than async (unsafe fn).

I'll happily accept either, but I strongly feel that the wrapping order exposed here has the unsafe on the outside, and the order of the modifiers can help make this clear. (unsafe is the modifier to async fn, not async the modifier to unsafe fn.)

I'll happily accept either, but I strongly feel that the wrapping order exposed here has the unsafe on the outside, and the order of the modifiers can help make this clear. (unsafe is the modifier to async fn, not async the modifier to unsafe fn.)

I was with you until your last parenthesized point. @withoutboats' writeup makes it pretty clear for me that, if the unsafety is dealt with at the call site, what you actually have is an unsafe fn (that happens to be called in an async context).

I'd say we paint the bikeshed async unsafe fn.

I think that async unsafe fn makes more sense, but I also think that we should grammatically accept any order among async, unsafe, and const. But async unsafe fn makes more sense to me with the notion that you strip the async and modify the return type to "desugar" it.

The alternative is not viable to implement in the near or medium term (meaning several years). There is no way to create a future which is unsafe to poll in the currently designed Rust language.

FWIW I ran into a similar problem that I mentioned in RFC2585 when it comes to closures inside unsafe fn and the function traits. I didn't expect unsafe async fn to return a Future with a safe poll method, but instead to return an UnsafeFuture with an unsafe poll method. (*) We could then make .await also work on UnsafeFutures when it is used inside of unsafe { } blocks, but not otherwise.

These two future traits would be a huge change with respect to what we have today, and they would probably introduce a lot of composability issues. So the ship for exploring alternatives has probably sailed. Particularly since this would be different from how the Fn traits work today (e.g. we don't have an UnsafeFn trait or similar, and my issue in RFC2585 was that creating a closure inside an unsafe fn returns a closure that impls Fn(), that is, one that is safe to call, even though this closure can call unsafe functions).

Creating the "unsafe" Future or closure is not the problem, the problem is calling them without proving that doing so is safe, particularly when their types do not say that this must be done.

(*) We can provide a blanket impl of UnsafeFuture for all Futures, and we can also provide UnsafeFuture an unsafe method to "unwrap" itself as a Future that is safe to poll.

Here's my two cents:

  • @cramertj's explanation (https://github.com/rust-lang/rust/issues/62149#issuecomment-510166207) convinces me that unsafe async functions are the right design.
  • I very much prefer a fixed ordering of the keywords unsafe and async
  • I slightly prefer the ordering unsafe async fn because the ordering seems more logical. Similar to "a fast electric car" vs "an electric fast car". Mainly because an async fn desugars to an fn. So, it makes sense that the two keywords are next to each other.

I think let f = unsafe { || { ... } } should make f safe, an UnsafeFn trait should never be introduced, and a priori .awaiting and async unsafe fn should be safe. Any UnsafeFuture needs strong justification!

All this follows because unsafe should be explicit, and Rust should nudge you back into safe land. Also by this token, f's ... should _not_ be an unsafe block, https://github.com/rust-lang/rfcs/pull/2585 should be adopted, and an async unsafe fn should have a safe body.

I think this last point might prove rather crucial. It's possible that every async unsafe fn will employ an unsafe block, but similarly most would benefit from some safety analysis, and many sound complex enough to make mistakes easy.

We should never bypass the borrow checker when capturing for closures in particular.

So my comment here: https://github.com/rust-lang/rust/issues/62149#issuecomment-511116357 is a very bad idea.

An UnsafeFuture trait would require the caller to write unsafe { } to poll a future, yet the caller has no idea which obligations must be proven there, e.g., if you get a Box<dyn UnsafeFuture>, is unsafe { future.poll() } safe? For all futures? You can't know. So this would be completely useless, as @rpjohnst pointed out on discord for a similar UnsafeFn trait.

Requiring Futures to always be safe to poll makes sense, and the process of constructing a future that must be safe to poll can be unsafe; I suppose that's what async unsafe fn is. But in that case, the fn item can document what needs to be upheld so that the returned future is safe to poll.

@rfcbot implementation-work-blocking-stabilization

There are still 2 known implementation blockers to my knowledge (https://github.com/rust-lang/rust/issues/61949, https://github.com/rust-lang/rust/issues/62517) and it would still be good to add some tests. I'm resolving my concern to make rfcbot not be our blocker time-wise and then we'll actually block on the fixes instead.

@rfcbot resolve implementation-work-blocking-stabilization

:bell: This is now entering its final comment period, as per the review above. :bell:

The final comment period, with a disposition to merge, as per the review above, is now complete.

As the automated representative of the governance process, I would like to thank the author for their work and everyone else who contributed.

The RFC will be merged soon.

An interesting aspect that we are stabilizing here is that we are making self-referential structs available from safe code. What makes this interesting is that in a Pin<&mut SelfReferentialGenerator>, we have a mutable reference (stored as a field in the Pin) pointing to the entire generator state, and we have a pointer inside that state pointing to another piece of the state. That inner pointer aliases with the mutable reference!

As a follow-up to this, @comex actually managed to write some (safe) async Rust code that violates LLVM's noalias annotations the way we currently emit them. However, it seems that, due to the use of TLS, there are currently no miscompilations.
