Tracking issue for rust-lang/rfcs#911.
This issue has been closed in favor of more targeted issues:

- `usize` casts: https://github.com/rust-lang/rust/issues/51910
- `&mut T` references and borrows: https://github.com/rust-lang/rust/issues/57349

Things to be done before stabilizing:

- `const unsafe fn` declaration order: https://github.com/rust-lang/rust/issues/29107

CTFE = https://en.wikipedia.org/wiki/Compile_time_function_execution
Is this closed by #25609?
@Munksgaard That just adds support to the compiler, AFAIK. There are a lot of functions in the stdlib that need to be changed to `const fn` and tested for breakage. I don't know what the progress is on that.
I'm hoping this gets implemented for `std::ptr::null()` and `null_mut()` so that we can use them to initialize a `static mut *MyTypeWithDrop` without resorting to `0usize as *mut _`.

EDIT: Removed since it was off topic.
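For illustration, a minimal sketch of what that would enable (the type name is hypothetical):

```rust
use std::ptr;

// Hypothetical type with drop glue, standing in for `MyTypeWithDrop`.
struct MyTypeWithDrop(Vec<u8>);

// With a const `ptr::null_mut`, no `0usize as *mut _` cast is needed.
static mut INSTANCE: *mut MyTypeWithDrop = ptr::null_mut();
```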
To be clear, the question here is not primarily about the usefulness of the feature but rather regarding the best way to formulate it (or the best framework to formulate it in). See the RFC discussion.
This is now the tracking issue for eventual stabilization.
https://github.com/rust-lang/rust/issues/29107 has been closed.
I disagree that "Integration with patterns", or any changes to the standard library, should block this. This is very useful even without those changes, and those changes can be done later. In particular, I would like to start using `const fn` in my own code soon.
Accordingly, could the stabilization status of this be re-evaluated?
I don't doubt that `const fn`, even in its current limited form, would be useful functionality to have, but what I would really like, ideally before going further along this path, would be for those in favor of "the `const fn` approach" to think about and articulate their preferred endgame. If we just keep incrementally adding useful-seeming functionality in the most obvious way, it seems very likely to me that we'll eventually end up copying more or less the entirety of C++'s `constexpr` design. Is that something we are comfortable with? Even if we say yes, I would much rather that we choose that path in a clear-eyed way, instead of backing into it with small steps over time, as the path of least resistance, until it has become inevitable.

(Given that the semantics of safe Rust code should be fully definable, it seems likely that eventually at least every function which doesn't (transitively) depend on `unsafe` should be able to be marked as `const`. And given that `unsafe` is supposed to be an implementation detail, I bet people will push for somehow loosening that restriction as well. I would much rather we looked abroad and tried to find a more cohesive, capable, and well-integrated story for staging and type-level computation.)
@glaebhoerl

> I don't doubt that const fn even in its current limited form would be useful functionality to have, but what I would really like, ideally before going further along this path, would be for those in favor of "the const fn approach" to think about and articulate their preferred endgame... it seems very likely to me that we'll eventually end up copying more or less the entirety of C++'s constexpr design.
What I would personally like, even more than that, is that we have a fairly clear view on how we're going to implement it, and what portion of the language we are going to cover. That said, this is very closely related to support for associated constants or generics over integers in my mind.

@eddyb and I did some sketching recently on a scheme which could enable constant evaluation of a very broad swath of code: basically lowering all constants to MIR and interpreting it (in some cases, abstract interpretation, if there are generics you cannot yet evaluate, which is where things get most interesting to me).

However, while it seems like it would be fairly easy to support a very large fraction of the "builtin language", real code in practice hits up against the need to do memory allocation very quickly. In other words, you want to use `Vec` or some other container. And that's where this whole interpreting scheme starts to get more complicated to my mind.
That said, @glaebhoerl, I'd also love to hear you articulate your preferred alternative endgame. I think you sketched out some such thoughts in the `const fn` RFC, but I think it'd be good to hear it again, and in this context. :)
The problem with allocation is having it escape into run-time. If we can somehow disallow crossing that compile-time/run-time barrier, then I believe we could have a working `liballoc` with `const fn`. It would be no harder to manage those kinds of allocations than it would be to deal with byte-addressable values on an interpreted stack.

Alternatively, we could generate runtime code to allocate and fill in the values every time that barrier has to be passed, although I'm not sure what kind of use cases that has.
Keep in mind that even with full-fledged `constexpr`-like evaluation, `const fn` _would still_ be pure: running it twice on `'static` data would result in the exact same result and no side effects.
@nikomatsakis If I had one I would have mentioned it. :) I mainly just see known unknowns. The whole thing with `const`s as part of the generics system was of course part of what I understood as being the C++ design. As far as having associated `const`s and `const` generic parameters, considering that we already have fixed-size arrays with `const`s as part of their type and would like to abstract over them, I would be surprised if there were a much better -- as opposed to merely more general -- way of doing it. The `const fn` part of things feels more separable and variable. It's easy to imagine taking things further and having things like `const impl`s and `const Trait` bounds in generics, but I'm _sure_ there is prior art for this sort of general thing which has already figured things out, and we should try to find it.
Of the main use cases for the Rust language, the ones that primarily need low-level control, like kernels, seem reasonably well-served already, but another area where Rust could have lots of potential is things that primarily need high performance, and in that space powerful support (in some form) for staged computation (which `const fn` is already a very limited instance of) seems like it could be a game-changer. (Just in the last few weeks I came across two separate tweets by people who decided to switch from Rust to a language with better staging capabilities.) I'm not sure any of the existing solutions in languages "close to us" -- C++'s `constexpr`, D's ad-hoc CTFE, our procedural macros -- really feel inspiring and powerful/complete enough for this sort of thing. (Procedural macros seem like a good thing to have, but more for abstraction and DSLs, not as much for performance-oriented code generation.)

As for what _would_ be inspiring and good enough... I haven't seen it yet, and I'm not familiar enough with the whole space to know, precisely, where to look. Of course, per the above, we might want to at least glance at Julia and Terra, even if they seem like quite different languages from Rust in many ways. I know Oleg Kiselyov has done a lot of interesting work in this area. Tiark Rompf's work on Lancet and Lightweight Modular Staging for Scala seems definitely worth looking at. I recall seeing a presentation by @kmcallister at some point about what a dependently typed Rust might look like (which might at least be more general than sticking `const` everywhere), and I also recall seeing something from Oleg to the effect that types themselves are a form of staging (which feels natural considering the phase separation between compile time and runtime is a lot like stages)... lots of exciting potential connections in many different directions, which is why it'd feel like a missed opportunity if we were to just commit to the first solution that occurs to us. :)
(This was just a braindump and I've almost surely imperfectly characterized many things.)
> However, while it seems like it would be fairly easy to support a very large fraction of the "builtin language", real code in practice hits up against the need to do memory allocation very quickly. In other words, you want to use Vec or some other container. And that's where this whole interpreting scheme starts to get more complicated to my mind.
I disagree with that characterization of "real code in practice". I think there is big interest in Rust because it helps reduce the need for heap memory allocation. My code, in particular, makes a concrete effort to avoid heap allocation whenever possible.
Being able to do more than that would be _nice_, but being able to construct static instances of non-trivial types with compiler-enforced invariants is essential. The C++ `constexpr` approach is extremely limiting, but it is more than what I need for my use cases: I need to provide a function that can construct an instance of type `T` with parameters `x`, `y`, and `z`, such that the function guarantees that `x`, `y`, and `z` are valid (e.g., `x < y && z > 0`), and such that the result can be a `static` variable, without the use of initialization code that runs at startup.
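To make the use case concrete, a sketch of that kind of constructor (type name and invariant are illustrative; it relies on panics in const contexts becoming compile-time errors, which was not yet available when this was written):

```rust
pub struct Config {
    x: i32,
    y: i32,
    z: i32,
}

impl Config {
    // The invariant is checked during constant evaluation, so an invalid
    // `static` simply fails to compile.
    pub const fn new(x: i32, y: i32, z: i32) -> Config {
        assert!(x < y && z > 0);
        Config { x, y, z }
    }
}

// No initialization code runs at startup.
static DEFAULT_CONFIG: Config = Config::new(1, 10, 2);
```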
@briansmith FWIW another approach which has a chance of solving the same use cases would be if macros had privacy hygiene, which I believe (hope) we're planning to make them have.
> @briansmith FWIW another approach which has a chance of solving the same use cases would be if macros had privacy hygiene, which I believe (hope) we're planning to make them have.
I guess if you use procedural macros then you can evaluate `x < y && z > 0` at compile time. But it seems like it would be many, many months before procedural macros could be used in stable Rust, if they ever are. `const fn` is interesting because it can be enabled for stable Rust _now_, as far as I understand the state of things.
@glaebhoerl I wouldn't hold my breath for strict hygiene, it's quite possible we're going to have an escape mechanism (just like real LISPs), so you may not want it for any kind of safety purposes.
@glaebhoerl there is also https://anydsl.github.io/, which even uses Rust-like syntax ;) they are basically targeting staged computation and in particular partial evaluation.
> Given that the semantics of safe Rust code should be fully definable, it seems likely that eventually at least every function which doesn't (transitively) depend on `unsafe` should be able to be marked as `const`. And given that `unsafe` is supposed to be an implementation detail, I bet people will push for somehow loosening that restriction as well.
Just a thought: if we ever formally define Rust's memory model, then even `unsafe` code could potentially be safely and sensibly evaluated at compile time by interpreting it abstractly/symbolically -- that is, use of raw pointers wouldn't turn into direct memory accesses like at runtime, but rather something (just as an example for illustration) like a lookup into a hashmap of allocated addresses, together with their types and values, or similar, with every step checked for validity -- so that any execution whose behavior is undefined would be _strictly_ a compiler-reported error, instead of a security vulnerability in `rustc`. (This might also be connected to the situation w.r.t. handling `isize` and `usize` at compile time symbolically or in a platform-dependent way.)
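As a toy illustration of that idea (this is not miri, just a sketch of the bookkeeping): raw-pointer accesses become checked lookups into a table of allocations, so undefined behaviour surfaces as an error value instead of a wild memory access.

```rust
use std::collections::HashMap;

struct AbstractMemory {
    // allocation id -> the bytes of that allocation
    allocations: HashMap<u64, Vec<u8>>,
}

impl AbstractMemory {
    fn read_byte(&self, alloc_id: u64, offset: usize) -> Result<u8, String> {
        let bytes = self
            .allocations
            .get(&alloc_id)
            .ok_or(format!("read from dangling allocation {}", alloc_id))?;
        bytes
            .get(offset)
            .copied()
            .ok_or(format!("out-of-bounds read at offset {}", offset))
    }
}
```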
I'm not sure where that leaves us with respect to `const fn`. On the one hand, that would likely open up much more useful code to be available at compile time -- `Box`, `Vec`, `Rc`, anything using `unsafe` for performance optimization -- which is good. One arbitrary restriction fewer. On the other hand, the boundary for "what can possibly be a `const fn`" would now essentially be moved outwards to _anything that doesn't involve the FFI_. So what we'd _actually_ be tracking in the type system, under the guise of `const`ness, is what things (transitively) rely on the FFI and what things don't. And whether or not something uses the FFI is _still_ something that people rightfully consider to be an internal implementation detail, and this restriction (unlike `unsafe`) _really_ doesn't seem feasible to lift. And under this scenario you'd probably have far more `fn`s being eligible for `const`ness than ones which wouldn't be.

So you'd still have `const`ness revolving around an arbitrary, implementation-exposing restriction, and you'd also end up having to write `const` almost everywhere. That doesn't sound too appealing either...
> that is, use of raw pointers wouldn't turn into direct memory accesses like at runtime, but rather something ... like a lookup into a hashmap of allocated addresses,
@glaebhoerl Well, that is pretty much the model I described and which @tsion's miri is implementing.
I think the FFI distinction is very important because of purity, which is _required_ for coherence.
You _couldn't_ even use GHC for Rust `const fn`s because it has `unsafePerformIO`.
I don't like the `const` keyword too much myself, which is why I am okay with `const fn foo<T: Trait>` instead of `const fn foo<T: const Trait>` (for requiring a `const impl Trait for T`).

Just like `Sized`, we probably have the wrong defaults, but I haven't seen any other proposals that can realistically work.
@eddyb I think you meant to link to https://internals.rust-lang.org/t/mir-constant-evaluation/3143/31 (comment 31, not 11).
@tsion Fixed, thanks!
Please ignore this if I am completely off point.
The problem I see with this RFC is that, as a user, you have to mark as many functions as possible as `const fn`, because that will probably be the best practice. The same thing is happening currently in C++ with `constexpr`. I think this is just unnecessary verbosity.

D doesn't have `const fn`, but it allows any function to be called at compile time (with some exceptions).
For example:

```rust
// Standalone example.
struct Point { x: i32, y: i32 }

impl Point {
    fn new(x: i32, y: i32) -> Point {
        Point { x: x, y: y }
    }

    fn add(self, other: Point) -> Point {
        Point::new(self.x + other.x, self.y + other.y)
    }
}

const ORIGIN: Point = Point::new(0, 0);     // works because 0, 0 are both known at compile time
const ORIGIN2: Point = Point::new(0, 0);    // ditto
const ANOTHER: Point = ORIGIN.add(ORIGIN2); // works because ORIGIN and ORIGIN2 are both const

fn example() {
    {
        let x: i32 = 42;
        let y: i32 = 24;
        const SOME_POINT: Point = Point::new(x, y); // Error: x and y are not known at compile time
    }
    {
        const x: i32 = 42;
        const y: i32 = 24;
        const SOME_POINT: Point = Point::new(x, y); // Works: x and y are both known at compile time
    }
}
```
Note, I am not really a Rust user and I have only read the RFC a few minutes ago, so it is possible that I might have misunderstood something.
@MaikKlein there was a lot of discussion on CTFE in the RFC discussion
I don't see any recent comments explaining the blockers here, and the OP isn't very illuminating. What's the status? How can we move this across the finish line?
This is used by Rocket: https://github.com/SergioBenitez/Rocket/issues/19#issuecomment-269052006
See https://github.com/rust-lang/rust/issues/29646#issuecomment-271759986. Also we need to reconsider our position on explicitness since miri pushes the limit to "global side-effects" (@solson and @nikomatsakis were just talking about this on IRC).
> The problem I see with this RFC is that as a user, you have to mark as many function const fn as possible because that will probably be the best practice.

While we could make arbitrary functions callable, if those functions access C code or statics we won't be able to compute them. As a solution, I suggest a lint that will warn about public functions that could be `const fn`.
I agree about the lint. It's similar to the existing built-in lints `missing_docs`, `missing_debug_implementations`, and `missing_copy_implementations`.

There's sort of a problem with having the lint on by default, though... it would warn about functions you explicitly don't want to be `const`, say, because you plan to later change the function such that it can't be `const` and don't want to commit your interface to `const` (removing `const` is a breaking change).

I guess `#[allow(missing_const)] fn foo() {}` might work in those cases?
@eddyb @nikomatsakis My "removing `const` is a breaking change" point suggests we'll want to have the keyword after all, since it's a promise to downstream that the `fn` will _remain_ `const` until the next major version.

It's going to be a shame how much `const` will need to be sprinkled through `std` and other libraries, but I don't see how you can avoid it, unless it was only required on public-facing items, and that seems like a confusing rule.
> unless it was only required on public-facing items, and that seems like a confusing rule.

I like this one... I don't think it would be confusing. Your public interface is protected, since you can't make a function not-const that is called by a `const fn`.
Technically it would be better to annotate functions as `notconst`, because I expect there to be way more `const fn` than the other way around.

`notconst` would also be more consistent with Rust's design philosophy (i.e. "`mut`, not `const`").
> unless it was only required on public-facing items, and that seems like a confusing rule.
>
> I like this one... I don't think it would be confusing.

I am flip-flopping on this idea. It has its benefits (you only think about `const fn` when making public interface decisions), but I thought of another way it could be confusing:

> Your public interface is protected since you can't make a function not-const that is called by a `const fn`

This is true, and unfortunately it would imply that when a library author marks a public function `const`, they are implicitly marking all functions transitively called by that function `const` as well, and there's a chance they're unintentionally marking functions they don't want to, thus preventing them from rewriting those internal functions using non-const features in the future.

> I expect there to be way more `const fn` than the other way around.

I thought this way for a while, but it will only be true for pure Rust library crates. It won't be possible to make FFI-based fns const (even if they're only transitively FFI-based, which is a lot of stuff), so the sheer amount of `const fn` may not be quite as bad as you and I thought.

My current conclusion: any non-explicit `const fn` seems problematic. There might just not be a good way to avoid writing the keyword a lot.
Also, for the record, `notconst` would be a breaking change.
@solson A very good point.
Keep in mind that the keyword gets even hairier if you try to use it with trait methods. Restricting it to the trait definition isn't useful enough, and annotating impls results in imperfect "const fn parametricity" rules.
I feel like this trade-off was pretty thoroughly discussed when we adopted `const fn` in the first place. I think @solson's analysis is also correct. I guess the only thing that has changed is that perhaps the percentage of const-able fns has grown larger, but I don't think by enough to change the fundamental tradeoff here. It is going to be annoying to gradually have to add `const fn` to your public interfaces and so forth, but such is life.
@nikomatsakis What's troubling me is the combination of these two facts:

- `unsafe` code can be "dynamically non-const"

Given that "global side-effects" is the main thing that prevents code from being `const fn`, isn't this the "effect system" that Rust used to have and got removed?

Shouldn't we talk about "effect stability"? Seems similar to code assuming some library never panics, IMO.
@eddyb Absolutely, `const` is an effect system, and yes, it does come with all the downsides that made us want to avoid them as much as possible... It is plausible that if we are going to endure the pain of adding in an effect system, we may want to consider some syntax that we can scale to other sorts of effects. As an example, we're paying a similar price with `unsafe` (also an effect), though I'm not sure that it makes sense to think about unifying those.

The fact that violations may occur dynamically seems like even more reason to make this opt-in, though, no?
How about this: in general, I think, `const fn`s should only be used for constructors (`new`) or where absolutely necessary.

However, sometimes you may want to use other methods in order to conveniently create a constant. I think we could solve this problem for many cases by making constness the default, but only for the defining module. This way, dependents cannot assume constness unless it is explicitly guaranteed with `const`, while still having the convenience of creating constants with functions without making everything const.
@torkleyy You can do that already by having helpers which are not exported.
I don't see a strong argument that private helper functions shouldn't be implicitly `const`, when possible. I think @solson was saying that making `const` explicit, even for helper functions, forces the programmer to pause and consider whether they want to commit to that function being `const`. But if programmers are already required to think about that for public functions, isn't that enough? Wouldn't it be worth it not to have to write `const` everywhere?
On IRC @eddyb proposed splitting this feature gate so that we could stabilize calls to const fns ahead of figuring out details of their declaration and bodies. Does that sound like a good idea?
@durka That sounds great to me, as a Rust user who doesn't know much about compiler internals.
Excuse my lack of understanding here, but what does it mean to stabilize calls to a `const` function without stabilizing the declaration?

Are we saying that the compiler will somehow know what is and isn't constant through some means, but leave that part open for discussion/implementation for the time being? How then can the calls be stabilized if the compiler might later change its mind on what is constant?
@nixpulvis Some `const fn`s already exist in the standard library, for example `UnsafeCell::new`. This proposal would make it allowed to call such functions in constant contexts, for example the initializer of a `static` item.
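For example (a sketch of the sort of stable code this would allow, using a standard-library `const fn`):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// `AtomicUsize::new` is a `const fn`, so it can be called in the initializer
// of a `static` item; stabilizing *calls* makes this available on stable.
static COUNTER: AtomicUsize = AtomicUsize::new(0);

fn bump() -> usize {
    COUNTER.fetch_add(1, Ordering::SeqCst)
}
```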
@nixpulvis What I meant were calls to `const fn` functions defined by unstable-using code (such as the standard library), from constant contexts, not regular functions defined in stable Rust code.
cc @rust-lang/lang on https://github.com/rust-lang/rust/issues/24111#issuecomment-310245900
While I'm all in favor of stabilizing calls to `const fn`s first if that can happen faster, it's not clear to me what's blocking stabilizing all of the `const fn` feature. What are the remaining concerns today? What would be a path to address them?
@SimonSapin It's more that we're not clear whether the design for declaring `const fn`s today scales well, nor are we sure about the interactions between them and traits, and how much flexibility there should be.
I think I'm inclined to stabilize uses of const fn. This seems like an ergonomic and expressiveness win and I still can't imagine a better way to handle compile-time constant evaluation than just being able to "write normal code".
> stabilize uses of const fn.

This also stabilizes some functions in the standard library as being `const`; the library team should do some audit at least.
I have submitted a PR https://github.com/rust-lang/rust/issues/43017 to stabilize invocations, along with a list of functions to be audited per @petrochenkov.
I have a question/comment about how this could be used in certain trait/impl situations. Hypothetically, let's say we have a math library with a `Zero` trait:

```rust
pub trait Zero {
    fn zero() -> Self;
}
```

This trait does not require the `zero` method to be `const`, as this would prevent it from being impl'd by some `BigInt` type backed by a `Vec`. But for machine scalars and other simple types, it would be far more practical if the method were `const`:

```rust
impl Zero for i32 {
    const fn zero() -> i32 { 0 } // const
}

impl Zero for BigInt {
    fn zero() -> BigInt { ... } // not const
}
```

The trait does not require that the method be `const`, but it should still be allowed, as `const` is adding a restriction to the implementation and not ignoring one. This prevents having a normal version and a `const` version of the same function for some types. What I'm wondering is: has this already been addressed?
Why would you want different implementations of the trait to behave differently? You can't use that in a generic context. You can just make a local impl on the scalar with a const fn.
@Daggerbot That is the only way I see forward for `const fn` in traits - having the trait require that all impls are `const fn` is far less common than having effectively "`const impl`s".

@jethrogb You could, although it requires the constness to be a property of the impl.

What I have in mind is that a generic `const fn` with, e.g., a `T: Zero` bound will require the `impl` of `Zero` for the `T`s it is called with to contain only `const fn` methods, when the call comes from a constant context itself (e.g. another `const fn`).

It's not perfect, but no superior alternative has been put forward - IMO the closest to that would be "allow any calls and error deep from the call stack if anything not possible at compile time is attempted", which isn't as bad as it may seem at first impression - most of the concern over it has to do with backwards compatibility, i.e. marking a function `const fn` ensures that the fact is recorded, and performing operations not valid at compile time would require making it not `const fn`.
Wouldn't this solve the issue?

```rust
pub trait Zero {
    fn zero() -> Self;
}

pub trait ConstZero: Zero {
    const fn zero() -> Self;
}

impl<T: ConstZero> Zero for T {
    fn zero() -> Self {
        <Self as ConstZero>::zero()
    }
}
```

The boilerplate could be decreased with macros.
Apart from the minor inconvenience of having two separate traits (`Zero` and `ConstZero`) which do almost exactly the same thing, I see one potential problem when using a blanket implementation:

```rust
// Blanket impl
impl<T: ConstZero> Zero for T {
    fn zero() -> Self { T::const_zero() }
}

pub struct Vector2<T> {
    pub x: T,
    pub y: T,
}

impl<T: ConstZero> ConstZero for Vector2<T> {
    const fn const_zero() -> Vector2<T> {
        Vector2 { x: T::const_zero(), y: T::const_zero() }
    }
}

// Error: This now conflicts with the blanket impl above because Vector2<T>
// implements ConstZero and therefore Zero.
impl<T: Zero> Zero for Vector2<T> {
    fn zero() -> Vector2<T> {
        Vector2 { x: T::zero(), y: T::zero() }
    }
}
```

The error would go away if we removed the blanket impl. All in all, this is probably the easiest to implement in a compiler, as it adds the least complexity to the language.
But if we could add `const` to an implemented method where it is not required, we could avoid this duplication, although still not perfectly:

```rust
impl<T: Zero> Zero for Vector2<T> {
    const fn zero() -> Vector2<T> {
        Vector2 { x: T::zero(), y: T::zero() }
    }
}
```

IIRC, C++ allows something like this when working with `constexpr`. The downside here is that this `const` would only be applicable if `<T as Zero>::zero` is also `const`. Should this be an error, or should the compiler ignore the `const` when it is not applicable (like C++)?
Neither of these examples tackles the problem perfectly, but I can't really think of a better way.
Edit: @andersk's suggestion would make the first example possible without errors. This would probably be the best/simplest solution as far as compiler implementation goes.
@Daggerbot This sounds like a use case for the "lattice" rule proposed near the end of RFC 1210 (specialization). If you write

```rust
impl<T: ConstZero> Zero for T {…}               // 1
impl<T: ConstZero> ConstZero for Vector2<T> {…} // 2
impl<T: Zero> Zero for Vector2<T> {…}           // 3
impl<T: ConstZero> Zero for Vector2<T> {…}      // 4
```

then although 1 overlaps with 3, their intersection is covered precisely by 4, so it would be allowed under the lattice rule.

See also http://smallcultfollowing.com/babysteps/blog/2016/09/24/intersection-impls/.
That is an incredibly complex system, which we want to avoid.
Yeah, lattice rule would be needed.
@eddyb what do you consider complex?
@Kixunil Duplicating almost every single trait in the standard library, instead of "simply" marking some `impl`s as `const fn`.
We're getting off track here. Currently the issue is about stabilizing uses of `const fn`. Allowing `const fn` trait methods or `const impl Trait for Foo` are orthogonal to each other and to the accepted RFCs.

@oli-obk This is not the new RFC but the tracking issue for `const fn`.
I just noticed, and edited my comment.
@eddyb Yeah, but it's simpler for the compiler (minus specialization, but we will probably want specialization anyway) and allows people to bound by `ConstTrait` too.

Anyway, I'm not opposed to marking impls as const. I'm also imagining the compiler auto-generating `ConstTrait: Trait`.
@Kixunil It's not much simpler, especially if you can do it with specialization.

The compiler wouldn't have to auto-generate anything like `ConstTrait: Trait`, nor does the trait system need to know about any of this; one just needs to recurse through the implementations (either a concrete `impl` or a `where` bound) that the trait system provides and check them.
I'm wondering if const fns should disallow accesses to `UnsafeCell`. It's probably necessary to allow truly const behavior:

```rust
const fn dont_change_anything(&self) -> bool {
    let old = self.cell.get();
    self.cell.set(!old);
    old
}
```

So far I've seen that `set` is not `const`. The question is whether this will stay that way forever. In other words: can unsafe code rely on the fact that running the same `const` function on immutable data will always return the same result, today and in every future release of the language/library?
`const fn` doesn't mean immutable; it means it can be called at compile time.
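In other words (a small sketch, not from the thread): the same function serves both compile-time and runtime calls.

```rust
const fn add(a: u32, b: u32) -> u32 {
    a + b
}

// Evaluated during compilation because it appears in a const context.
const AT_COMPILE_TIME: u32 = add(1, 2);

fn main() {
    let n = std::env::args().count() as u32;
    // An ordinary call with a runtime value; nothing about `const fn`
    // prevents this or requires the arguments to be constant.
    let at_run_time = add(n, 2);
    println!("{} {}", AT_COMPILE_TIME, at_run_time);
}
```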
I see. I'd much appreciate it if I could somehow guarantee that a function always returns the same thing when called multiple times, without using `unsafe` traits, if that's possible somehow.
@Kixunil you want https://github.com/rust-lang/rfcs/issues/1631
@jethrogb Thanks for the link!
I noticed that `mem::size_of` is implemented as a `const fn` on nightly. Would this be possible for `mem::transmute` and others? Rust intrinsics operate internal to the compiler, and I'm not familiar enough to make the proper changes to allow this. Otherwise, I'd be happy to implement it.
Unfortunately, operating on values is a little harder than just magically creating some. `transmute` requires miri. A first step towards getting miri into the compiler is already underway: #43628
So! Any interest in const-stabilizing `*Cell::new`, `mem::{size,align}_of`, `ptr::null{,_mut}`, `Atomic*::new`, `Once::new` and `{integer}::{min,max}_value`? Shall we have FCPs in here or create individual tracking issues?
Yes.
I'm not part of any team that has decision power on this, but my personal opinion is that all of these except `mem::{size,align}_of` are trivial enough that they could be stabilized now without going through the motions of a rubber-stamp FCP.

As a user I would like to use `mem::{size,align}_of` in const expressions as soon as possible, but I've read @nikomatsakis express concerns about them being insta-const-stable when they were made `const fn`s. I don't know if there are specific concerns or just general caution, but IIRC this is why per-function feature gates were added. I imagine the concerns for these two would be similar enough that they could share an FCP. I don't know if @rustbot can handle separate FCPs in the same GitHub thread, so it's probably better to open separate issues.
@durka can you open a single tracking issue for stabilizing the constness of all of those functions? I'll propose FCP once it's up.
To follow a lead from a discussion about const fns on `alloc::Layout`: can a panic be allowed in a `const fn` and treated as a compilation error? That is similar to what is done now with constant arithmetic expressions, isn't it?
Yes that is a super trivial feature once miri is merged
Is this the right place to request additional `std` functions becoming `const`? If so, `Duration::{new, from_secs, from_millis}` should all be safe to make `const`.
@remexre The easiest way to make it happen is probably to make a PR and ask for libs team review there.
PR'd as https://github.com/rust-lang/rust/pull/47300. I also added `const` to the unstable constructors while I was at it.
Any thoughts on allowing further `std` functions to be declared `const`? Specifically, `mem::uninitialized` and `mem::zeroed`? I believe both of these are suitable candidates for additional `const` functions. The only drawback I can think of is the same drawback `mem::uninitialized` already has, where structs implementing `Drop` can be created and then overwritten without a `ptr::write`.

I can attach a PR as well if this sounds suitable.
What is the motivation for that? It seems like a useless footgun to allow making invalid bit patterns that then can't be overwritten (because they're in a const), but maybe I'm overlooking the obvious.
`mem::uninitialized` is absolutely a footgun, one that shoots through your hands as well if aimed improperly. Seriously, I cannot overstate how incredibly dangerous the use of this function can be, despite its being marked `unsafe`.

The motivation behind declaring these additional functions `const` stems from the nature of these functions: calling `mem::uninitialized::<Vec<u32>>()` will return the same result every time, with no side effects. Obviously, if left uninitialized, this is a terrible thing to do. Hence, the `unsafe` is still present.

But for a use case, consider a global timer, one that tracks the start of some function. Its internal state will be determined at a later time, but we need a way to present it as a static global struct created on execution.
```rust
use std::cell::UnsafeCell;
use std::time::Instant;

pub struct GlobalTimer {
    time: UnsafeCell<Instant>,
}

impl GlobalTimer {
    pub const fn init() -> GlobalTimer {
        GlobalTimer {
            time: UnsafeCell::new(Instant::now()),
        }
    }
}
```

This code doesn't compile, due to `Instant::now()` not being a `const` function. Replacing `Instant::now()` with `mem::uninitialized::<Instant>()` would fix this problem if `mem::uninitialized` were a `const fn`. Ideally, the developer will initialize this structure once the program starts execution. And while this code is considered un-idiomatic Rust (global state is generally very bad), this is just one of many cases where global static structures are useful.
I think this post gives a good foundation for the future of Rust code being run at compile time. Global, compile-time static structures are a feature with some important use cases (OSes also come to mind) that Rust is currently missing. Small steps can be made towards this goal by slowly adding `const` to library functions deemed suitable, such as `mem::uninitialized` and `mem::zeroed`, despite their `unsafe` markings.

Edit: forgot the `const` in the function signature of `GlobalTimer::init()`.
Hmm, that code does compile, so I am still missing the exact motivation here... if you could write code such as

```rust
const fn foo() -> Whatever {
    unsafe {
        let mut it = mem::uninitialized();
        init_whatever(&mut it);
        it
    }
}
```

But const fns are currently so restricted that you can't even write that...
I appreciate the theoretical justification, but `const` is not the same as `pure`, and I don't think we should do anything to encourage the use of these functions if it's not necessary for some compelling use case.
I think there are much lower-hanging fruit that could be stabilized first. Without miri, the uninitialized and zeroed intrinsics make little sense anyway. I would like to see them someday, though. We could even initially stabilize them and require that all constants must produce an initialized result, even if intermediate computations can be uninitialized.

That said, with unions and unsafe code you can emulate uninitialized or zeroed anyway, so there's not much point in keeping them non-`const`.
With the help of `union`s, the previous code now compiles (https://play.rust-lang.org/?gist=be075cf12f63dee3b2e2b65a12a3c854&version=nightly). It's absolutely terrifying 😅.

All good points as well. These intrinsic functions are pretty low on the use-case list, but they are still suitable candidates for eventual `const`-ness.
That is terrifyingly amazing.

So... why exactly are you advocating for constifying `mem::uninitialized`, as opposed to, say, `Instant::now`? :)

The need to have constant initializers for structs with non-constant interiors is real (see: `Mutex`). But I don't think making this malarkey easier is the right way to get that!
`Instant::now` cannot be const. What would that function return? The time of compilation?
Can someone summarize what needs to be done for stabilizing this? What decision needs to be reached? Whether to stabilize this at all?
> Integration with patterns (e.g. https://gist.github.com/d0ff1de8b6fc15ef1bb6)

I've already commented on the gist, but given that `const fn` currently cannot be matched against in a pattern, this shouldn't block stabilization, right? We could always allow it afterwards if it makes sense.
> Instant::now cannot be const. What would that function return? The time of compilation?

But there might be an `Instant::zero()` or `Instant::min_value()` which is const.
> Can someone summarize what needs to be done for stabilizing this? What decision needs to be reached? Whether to stabilize this at all?

I think the only open question is whether our const fn checks are strict enough to not accidentally allow/stabilize something that we don't want inside const fn.
Can we do integration with patterns through rust-lang/rfcs#2272 ? Patterns are already painful as they currently are, let's not make them more painful.
> I think the only open question is whether our const fn checks are strict enough to not accidentally allow/stabilize something that we don't want inside const fn.

Correct me if I'm wrong, but aren't those checks identical to the checks for what is currently allowed in the body of a `const` rvalue? I was under the impression that `const FOO: Type = { body };` and `const FOO: Type = foo(); const fn foo() -> Type { body }` are identical in what they allow for any arbitrary `body`.

@sgrif I think the concern is around arguments, which `const fn` have but `const` don't.
Also, it's not clear that in the long term we want to keep the `const fn` system of today.

Are you suggesting const generics (both ways) instead? (e.g. `<const T>` + `const C<T>`, aka `const C<const T>`?)
I would really like to have a `try_const!` macro which would try to evaluate any expression at compile time, and panic if that's not possible. This macro would be able to call non-const fns (using miri?), so we don't have to wait until every function in std has been marked const fn. However, as the name implies, it can fail at any time, so if a function is updated and can no longer be const, it will stop compiling.
@Badel2 I understand why you'd want such a feature, but I suspect widespread use of it would be really bad for the crates ecosystem, because this way your crate might end up depending on a function in another crate being compile-time evaluable, and then the crate author changes something not affecting the function signature but preventing the function from being compile-time evaluable.

If the function was marked `const fn` in the first place, then the crate author would have spotted the issue directly when trying to compile the crate, and you can rely on the annotation.
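To illustrate the contract being described (the function here is hypothetical):

```rust
// Because the author wrote `const fn`, callers may rely on it in const
// contexts; removing the `const` later is a visible, deliberate breaking change.
pub const fn default_port(requested: u16) -> u16 {
    if requested == 0 { 8080 } else { requested }
}

// In a downstream crate:
const PORT: u16 = default_port(0);
```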
If only this worked on the playground... https://play.rust-lang.org/?gist=6c0a46ee8299e36202f959908e8189e6&version=stable

This is a non-portable (indeed, so non-portable that it works on my system but not on the playground - yet they're both Linux) way of including the build time in the built program. The portable way would be to allow `SystemTime::now()` in const evaluation.

(This is an argument for const/compile-time evaluation of ANY function/expression, regardless of whether it's a `const fn` or not.)
That sounds to me like an argument for forbidding absolute paths in `include_bytes` 😃

If you allow `SystemTime::now` in const fn, `const FOO: [u8; SystemTime::now()] = [42; SystemTime::now()];` would randomly error depending on your system perf, scheduler and Jupiter's position.
Even worse: `const TIME: SystemTime = SystemTime::now();` does not mean the value of `TIME` is the same at all use sites, especially across compilations with incremental and across crates.

And even crazier is that you can screw up `foo.clone()` in very unsound ways, because you might end up selecting the clone impl from an array with length 3 but the return type might be an array of length 4.

So even if we allowed arbitrary functions to be called, we would never allow `SystemTime::now()` to successfully return, just like we would never allow true random number generators.
@SoniEx2 I guess this is a bit off topic here, but you can implement something like that already today using a cargo `build.rs` file. See Build Scripts in the Cargo Book, specifically the section on the case study of code generation.
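A minimal sketch of that approach (aimed at the build-time use case above): the build script captures the time and hands it to the crate through an environment variable, so no const evaluation of `SystemTime::now()` is needed.

```rust
// build.rs
use std::time::{SystemTime, UNIX_EPOCH};

fn main() {
    let secs = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock set before 1970")
        .as_secs();
    // Exposed to the crate as an environment variable at compile time,
    // readable with `env!("BUILD_TIME_SECS")`.
    println!("cargo:rustc-env=BUILD_TIME_SECS={}", secs);
}
```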
@oli-obk I think it's not completely the same issue, because one is about versioning API safety while the other is about the build environment; however, I do agree that they both can lead to ecosystem breakage if not applied with care.
Please do not allow getting the current time in a `const fn`; we don't need to add more/easier ways to make builds non-reproducible.
We can't allow any kind of non-determinism (like random numbers, the current time, etc.) into `const fn` - allowing that leads to type system unsoundness, since rustc assumes that constant expressions always evaluate to the same result given the same input. See here for a bit more explanation.

A future method for handling cases like the one in https://github.com/rust-lang/rust/issues/24111#issuecomment-376352844 would be to use a simple procedural macro which gets the current time and emits it as a plain number or string token. Procedural macros are more-or-less completely unrestricted code that can get the time by any of the usual portable ways non-const Rust code would use.
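A sketch of that approach (this would live in a separate proc-macro crate; the macro name is illustrative): the time is captured at macro expansion and emitted as a literal token, so nothing non-deterministic happens during const evaluation.

```rust
use proc_macro::TokenStream;
use std::time::{SystemTime, UNIX_EPOCH};

/// Expands to the expansion-time clock as a `u64` literal (seconds since the epoch).
#[proc_macro]
pub fn build_time_secs(_input: TokenStream) -> TokenStream {
    let secs = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock set before 1970")
        .as_secs();
    format!("{}u64", secs).parse().unwrap()
}
```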
@rfcbot fcp merge

I propose we merge this, because it is a somewhat sane option, is not a breaking change, and prevents accidental breaking changes (changing a function in a way that makes it not const-evaluable while other crates use the function in const contexts); the only really bad thing about it is that we have to write `const` before a bunch of function declarations.
@rfcbot fcp merge on behalf of @oli-obk - seems worth thinking about stabilisation and discussing the issues
Team member @nrc has proposed to merge this. The next step is review by the rest of the tagged teams:
Concerns:
Once a majority of reviewers approve (and none object), this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up!
See this document for info about what commands tagged team members can give me.
@rfcbot concern priority

We might want to punt on this until after the edition, since I don't think we have the bandwidth to deal with any fallout.

@rfcbot concern everything-const

We end up in a bit of a C++ world where there is an incentive to make every function you can `const`.
A summary of a short discussion with @oli-obk:

1. In the future, almost every function could be marked `const`. For example, everything on `Vec` could be `const`. In that world, it might make sense to get rid of the `const` keyword altogether: almost everything can be `const`, and one would have to go out of one's way to change a function from const to non-const, so the backwards-compatibility hazards of inferred constness probably wouldn't be terribly high.
2. However, getting rid of `const` today is not feasible. Today's miri can't interpret everything, and it is not really thoroughly tested in production.
3. It is actually backwards compatible to require `const` today, and then deprecate the keyword and switch to inferred constness in the future.

Putting 1, 2 and 3 together, it seems a nice option to stabilize the `const` keyword today, then expand the set of constant-evaluatable functions in future releases. After some time, we will have a thoroughly battle-tested honey badger constant evaluator, which can evaluate everything. At that point, we can switch to inferred const.

Wrt the fallout dangers: const fn has been widely used on nightly, especially on embedded. Also, the const fn checker is the same checker as the one used for static initializers and constants (except for some static-specific checks and function arguments).

The major disadvantage I see is that we're essentially advocating to spray `const` liberally across crates (for now; see @matklad's post for future ideas).
@rfcbot concern parallel-const-traits

It feels like stabilizing this will immediately result in a bunch of crates making a parallel trait hierarchy with `Const` at the front: `ConstDefault`, `ConstFrom`, `ConstInto`, `ConstClone`, `ConstTryFrom`, `ConstTryInto`, etc., and asking for `ConstIndex` and such. That's not terrible -- we certainly have that a bit with `Try` today, though stabilizing TryFrom will help -- but I feel like it would be nice to at least have a sketch of the plan for solving it more nicely. (Is that https://github.com/rust-lang/rfcs/pull/2237? I don't know.)
(@nrc: It looks like the bot only registered one of your concerns)
Parallel const traits have a trivial solution in the hypothetical future const-all-the-things version: they'd just work.

In the `const fn` world you would not end up with trait duplication, as long as we don't allow `const fn` trait methods (which we don't atm), just because you can't. You could of course create associated constants (on nightly), which is kind of the situation libstd was in a year ago, where we had a bunch of constants for initializing various types inside statics/constants without exposing their private fields. But that's something that could already have been happening for a while and didn't.
To be clear, `ConstDefault` is already possible today without `const fn`, and the rest of those examples (`ConstFrom`, `ConstInto`, `ConstClone`, `ConstTryFrom`, `ConstTryInto`) won't be possible even with this feature stabilized, since it doesn't add const trait methods, as @oli-obk mentioned.

(`ConstDefault` is possible by using an associated const rather than an associated const fn, but it's equivalent in power as far as I know.)
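For reference, a sketch of that associated-const formulation (the trait name is taken from the comment above; the impl is illustrative):

```rust
pub trait ConstDefault: Sized {
    const DEFAULT: Self;
}

impl ConstDefault for u32 {
    const DEFAULT: Self = 0;
}

// Usable in const contexts today, without `const fn` in the trait.
const ZERO: u32 = <u32 as ConstDefault>::DEFAULT;
```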
@scottmcm `const fn` in trait definitions is not possible today (oh, @solson already mentioned it).
@eddyb random idea: what if we made it possible to `const impl` a trait instead of adding `const fn` in trait definitions? (These two aren't mutually exclusive, either.)
@whitequark https://github.com/rust-lang/rfcs/pull/2237 covers that idea, through a combination of `const impl` expanding to `const fn` on each `fn` in the `impl`, and allowing an `impl` with all-`const` methods to satisfy a `T: const Trait` bound, without marking any of the methods `const` in the trait definition itself.
@rfcbot concern design

We've historically punted on stabilizing any one specific `const fn` system for several reasons:

- no way to have `trait`s that require `const fn` methods, or trait `impl`s that provide `const fn` methods (see https://github.com/rust-lang/rfcs/pull/2237 for some ways to do that)
- no way for a `T: Trait` bound to be `const fn` without having separate traits, and preferably only when used at compile-time (e.g. `Option::map` would work the same at runtime but require a const-callable closure in CTFE)
- having to write `const fn` everywhere (`libcore` comes to mind)

There are different design choices which would alleviate most or all of these problems (at the cost of introducing others); for example, these are a couple of the ones that have come up:

- make functions (not just `const fn`) behave like macros
- a scoped opt-in, where e.g. an entire library could be "all `const fn`"
) behave like macrosBecause this way your crate might end up depending on a function in another crate being compile time evaluable, and then the crate author changes something not affecting the function signature but preventing the function from being compile time evaluable.
@leoschwarz isn't this already a problem with auto traits? Maybe the solution to this is to integrate rust-semverver with cargo to detect this kind of unintended breakage.
That said, it's not clear to me what happens if miri has an evaluation time limit that you (as a library author) accidentally exceed, causing compilation failure downstream.
@nrc I think "everything-const" is true, but not an issue. Yes, we'll end up marking a huge swath of things `const`.
Just want to point out that I'm not sure I want everything inferred to be const. It's a decision about whether runtime or compile time is more important. Sometimes I think the compiler does quite enough computation at compile time!
> evaluation time limit

That limit is gone soon.

> Just want to point out that I'm not sure I want everything inferred to be const.

Oh no, we're not going to randomly compute things at compile time. We'd just allow random things to be computed in the bodies of statics, constants, enum variant discriminants and array lengths.
@rfcbot resolved parallel-const-traits
Thanks for the corrections, folks!
> that limit is gone soon.

Awesome. In that case, auto-const-fn (in combination with some integration of rust-semverver or similar to give information about breakage) sounds awesome, though the "add logging and cause breakage" case could be problematic. Though you can bump the version number, I guess; it's not like they're finite.
Logging and printing are "fine" side effects in my model of constants. We'd be able to figure out a solution for that if everyone agrees. We could even write to files (not really, but act as if we did and throw away everything).
I'm really concerned about silently throwing away side effects.
We can discuss that once we create an RFC around them. For now you just can't have "side effects" in constants. The topic is orthogonal to stabilizing `const fn`.
I'm a bit worried about the "just do a semver warning" approach to inferred constness. If a crate author who never thought about constness sees "warning: the change you just made makes it impossible to call foo() in const context, which was previously possible", will they just see that as a non sequitur and silence it? Clearly, people in this issue frequently think about which functions can be const. And it would be nice if more people did that (once const_fn is stable). But are out-of-the-blue warnings the right way to encourage that?
I think explicit `const fn` can be annoying and clutter many APIs; however, I think the alternative of implicitly assuming constness has way too many issues to be practicable.

I really see the biggest problem with not making it explicit being that someone can accidentally break lots of code with a single change without even being aware of it. This is especially of concern with the long dependency graphs common in the Rust ecosystem. If it requires an explicit change to the function signature, one will be aware of this being a breaking change more easily.
Maybe such a feature could be implemented as a crate-level config flag that can be added at the root of the crate, `#![infer_const_fn]` or something like that, and stay opt-in forever. If the flag is added, const fn would be inferred where possible in the crate and also reflected in the docs (and it would require that the called functions are const fn too); if a crate author adds this flag, they sort of pledge to be cautious about versioning, and maybe rust-semverver could even be forced.
What about doing it backwards? Rather than having const fn, have `side fn`. It's still explicit (you need to put `side fn` to call a `side fn`, explicitly breaking compatibility), and removes clutter. (Some) intrinsics and anything with asm would be a `side fn`.
That's not backwards compatible, though I guess it can be added in an edition?
I think the bigger problem is that it would be a real shock to beginners, since that's not what most programming languages do.
@whitequark I disagree with anything that just does that ("throw away side-effects"). I think @oli-obk was talking about a future extension, but from the discussions I was in, I know the following:

EDIT: just so the discussion doesn't derail, e.g.: throwing away side effects can lead to code behaving differently (say, write and then read a file) depending on whether it's called const or non-const. We can (probably?) all assume @oli-obk misspoke regarding throwing away side-effects like that.
> Maybe such a feature could be implemented as a crate level config flag that can be added at the root of the crate

That's a subset of the second example of past suggestions, from https://github.com/rust-lang/rust/issues/24111#issuecomment-376829588. If we have a scoped "config flag", the user should be able to choose more fine-grained scopes IMO.
> What about doing it backwards?
>
> Rather than having const fn, have side fn.
>
> It's still explicit (you need to put side fn to call side fn, explicitly breaking compatibility), and removes clutter. (Some) intrinsics and anything with asm would be a side fn.

In https://github.com/rust-lang/rust/issues/24111#issuecomment-376829588 I tried to point out that entire libraries could be "all `const fn`" or "all `side fn`". If it wasn't on function declarations, but rather scoped, it could maybe work in a future edition.

However, without "infer from body" semantics you have to design the trait interactions even for the opt-in `side fn`, so you're not gaining anything and you're introducing potentially massive friction.
Section 3.3 of Kenton Varda's "Singletons considered harmful" writeup seems relevant here (honestly, the whole thing is well worth reading):

> What about debug logging?
>
> In practice, everyone acknowledges that debug logging should be available to every piece of code. We make an exception for it. The exact theoretical basis for this exception, for those who care, can be provided in a few ways.
>
> From a security standpoint, debug logging is a benign singleton. It cannot be used as a communication channel because it is write-only. And it is clearly impossible to cause any sort of damage by writing to a debug log, since debug logging is not a factor in the program's correctness. Even if a malicious module "spams" the log, messages from that module can easily be filtered out, since debug logs normally identify exactly what module produced each message (sometimes they even provide a stack trace). Therefore, there is no problem with providing it.
>
> Analogous arguments can be made to show that debug logging does not harm readability, testability, or maintainability.
>
> Another theoretical justification for debug logging says that the debug log function is really just a no-op that happens to be observed by the debugger. When no debugger is running, the function does nothing. Debugging in general obviously breaks the entire object-capability model, but it is also obviously a privileged operation.
My statement about "we can figure out a solution [for debugging]" was indeed referring to a potential future API, that is callable from consts, but has some form of printing. Randomly implementing platform specific print operations (just so we can make existing code with print/debug statements be const) is not something that a const evaluator should do. This would be purely opt in, explicitly not having different observable behaviour (e.g. warnings in const eval and command line/file output at runtime). The exact semantics are left to future RFCs and should be considered entirely orthogonal to const fn in general
Are there any significant disadvantages to const impl Trait and T: const Trait?
Other than even more spraying around of const, only some trait methods might be const, which would require more fine-grained control. I don't know of a neat syntax to specify that though. Maybe where <T as Trait>::some_method is const (is could be a contextual keyword).
Again, that is orthogonal to const fn.
[u8; SizeOf<T>::Output]
If const and side fns are separate, we have to take actual design considerations into account. Easiest way to make them separate is to make const fns an extension to something we have today - the turing-complete type system.
Alternatively, make const fns really const: anything const fn should be evaluated as if every parameter was a const generic.
This makes them a lot easier to reason about, because I can't personally reason about const fns as they currently stand. I can reason about turing-complete types, macros, normal fns, etc, but I find it impossible to reason about const fn, since even minor details change their meaning completely.
since even minor details change their meaning completely.
Could you elaborate? Do you mean extensions like const fn
pointers, const
trait bounds, ...? Because I don't see any minor details in the bare const fn
proposal.
Alternatively, make const fns really const: anything const fn should be evaluated as if every parameter was a const generic.
That's what we're doing at compile-time. Just that at runtime the function is used like any other function.
The problem is that any minor detail can turn a const eval into a runtime eval. This may not seem like a huge deal, at first, but it can be.
Let's say the function call is really long because it's all const fns? And you wanna split it into multiple lines.
So you add some lets.
Now your program takes 20x longer to run.
@SoniEx2 The size of arrays ($N in [u8; $N]) is always evaluated at compile-time. If that expression is non-const, compilation will fail. Conversely, let x = foo() will call foo at runtime whether or not it is a const fn (modulo the optimizer’s inlining and constant propagation, but that’s entirely separate from const fn). If you want to name the result of evaluating some expression at compile-time you need a const item.
Now your program takes 20x longer to run.
That's not at all how const fn works!
If you declare a function const fn and you add a let binding inside it, your code stops compiling.
If you remove const from a const fn, it's a breaking change and will break all uses of that function inside e.g. const, static or array lengths. Your code that was runtime code and ran a const fn would never ever run at compile-time. It's just a normal runtime function call, so it doesn't get slower.
Edit: @SimonSapin beat me to it :D
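A minimal sketch of that point (my own example, not from the comment): the function below is callable both in a const item and at runtime; deleting the const keyword would only break the compile-time use, while the runtime call is unaffected either way.

const fn double(x: u32) -> u32 {
    x * 2
}

// Compile-time use: removing `const` from `double` turns this into a compile error.
const EIGHT: u32 = double(4);

fn main() {
    // Runtime use: behaves exactly like a call to a normal `fn`,
    // whether or not `double` is marked `const`.
    let n = double(EIGHT);
    assert_eq!(n, 16);
}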
Const fn get evaluated at compile-time if possible.
That is,
const fn random() -> i32 {
4
}
fn thing() -> i32 {
let i = random(); // the RHS of this binding is evaluated at compile-time, there is no call to random at runtime.
i
}
Now let's say you have a const fn that takes arguments. This would get evaluated at compile-time:
fn thing() {
let x = const_fn_with_1_arg(const_fn_returns_value());
}
This would cause const_fn_with_1_arg to be evaluated at runtime:
fn thing() {
let x = const_fn_returns_value();
let y = const_fn_with_1_arg(x); // suddenly your program takes 20x longer to run, and compiles 20x faster.
}
@eddyb I wonder if the design concern can be resolved by the observation that the "minimal const fn" is forward compatible with all potential future extensions? That is, my understanding is that we want to stabilize
marking free functions and inherent methods as const, enabling them to be called in constant contexts, with constant arguments.
This seems to be trivially fully forward compatible with any "const effects for traits" designs. It is also compatible with an "inferred const" design, because we can make const optional later.
Are there any alternative future designs which are incompatible with the current "minimal const fn" proposal?
@nrc
Note that rfcbot didn't register your everything-const concern (one concern per comment!). However, it seems to be a subset of the design concern, which is addressed by my previous comment (TL;DR: the current minimal proposal is fully compatible with everything, we could make the const keyword optional in the future).
As for the priority/fallout concern, I'd like to document what we've discussed at all hands, and what we haven't documented already:
- const fn foo(x: i32) -> i32 { body } is a relatively minor addition over const FOO: i32 = body;, so the risk of fallout is small. That is, most of the code which actually implements const fn is already working hard in the stable compiler (disclaimer: this is something I've heard from @oli-obk, I might have heard wrong).
- The embedded working group wants const fn badly :)
- Additionally, note that not stabilizing const fn leads to proliferation of suboptimal APIs in the libraries, because they have to use tricks like ATOMIC_USIZE_INIT to work around the lack of const fns.
let i = random(); // the RHS of this binding is evaluated at compile-time, there is no call to random at runtime.
No, that is not happening at all. It could be happening (and llvm probably is doing this), but you cannot expect any compiler optimizations that depend on heuristics to actually occur. If you want something to be computed at compile-time, stick it in a const, and you get that guarantee.
so const fn only get evaluated in a const, and this is basically useless otherwise?
why not have const and non-const fn strictly separate then?
see the semantics are a mess because they intentionally mix up compile-time and runtime stuff.
so const fn only get evaluated in a const, and this is basically useless otherwise?
They are not useless, they are executed at runtime like any other function otherwise. This means that you don't have to use different "sublanguages" of Rust depending on whether you are in a const evaluation or whether you are not.
why not have const and non-const fn strictly separate then?
The entire motivation for const fn is to not have this separation. Otherwise we'd need to duplicate all kinds of functions: AtomicUsize::new() + AtomicUsize::const_new(), even though both bodies are identical.
Do you really want to have to write 90% of libcore twice, once for const eval and once for runtime? The same probably goes for a lot of other crates.
I was thinking AtomicUsize::Of<value>. And yes, I'd rather have to write everything twice than to have unknown guarantees. (Plus this wouldn't behave differently based on whether something is being const eval'd or not.)
Can you declare consts in const fn to guarantee const evaluation (for recursive const fn)? Or do you need to go through const generics? Etc.
@SoniEx2 as an example of how your example should be written to take advantage of const fn and turn into a compile-time error if either function becomes non-const:
fn thing() {
const x: u32 = const_fn_returns_value();
const y: u32 = const_fn_with_1_arg(x);
}
(full running example on playground)
Slightly less ergonomic because there is no type inference, but who knows, that may change in the future.
than to have unknown guarantees.
Would you be so kind as to give some examples of where you think something is unclear?
Can you declare consts in const fn to guarantee const evaluation (for recursive const fn)? Or do you need to go through const generics? Etc.
The point of const fn is not to magically evaluate things at compile time. It's to be able to evaluate things at compile time.
Magically evaluating things at compile-time is already happening since rustc was based on llvm. So... exactly when it stopped being implemented in ocaml. I don't think anyone wants to remove constant propagation from rustc.
const fn does not influence constant propagation in any way. If you had a function that accidentally could be const propagated, and llvm did so, and you changed that function in a way that it's not const propagatable anymore, llvm would stop doing so. This is completely independent of attaching const to a function. To llvm there's no difference between a const fn and a fn.
At the same time rustc doesn't change its behaviour at all when you attach const to a fn (assuming the function is a valid const fn and thus still compiles after doing so). It only allows you to call this function in constants from now on.
I'm not thinking about LLVM, I'm thinking about rust. LLVM doesn't matter to me here.
@SoniEx2
Const fn get evaluated at compile-time if possible.
This is not correct. const fn, when called in a particular context, WILL get evaluated at compile-time. If that's not possible, there's a hard error. In all other contexts, the const part does not matter at all.
Examples of such contexts that require const fn are array lengths. You can write [i32; 15]. You can also write [i32; 3+4] because the compiler can compute the 7. You cannot write [i32; read_something_from_network()], because how would that make any sense? With this proposal, you CAN write [i32; foo(15)] if foo is const fn, which makes sure it is more like addition and less like accessing the network.
This is not at all about code that may or will run when the program runs. There is no "maybe evaluate at compile-time". There is just "have to evaluate at compile-time or abort compilation".
Please also read the RFC: https://github.com/rust-lang/rfcs/blob/master/text/0911-const-fn.md
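A compilable sketch of that array-length case (my own example; foo here is just a stand-in name for any const fn):

const fn foo(n: usize) -> usize {
    n + 1
}

fn main() {
    // The length expression must be evaluated at compile time;
    // this only works because foo is a const fn.
    let buf = [0i32; foo(15)];
    assert_eq!(buf.len(), 16);
}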
Instead of having a const fn annotation, what if it was an inferred property? It would not be explicit in the source code, but could be automatically labeled as such in the auto-generated documentation. This would allow eventual broadening of what is considered const, without library authors needing to change their code. At first, the inference could be limited to whatever const fns currently support (pure, deterministic functions without any let bindings?).
Under this approach, evaluation would occur at compile-time if the result is bound to a const variable, and at run-time otherwise. This seems more desirable, since it gives the caller (rather than the callee) control over when the function is evaluated.
This has been discussed fairly thoroughly already. The downside of that approach is that it makes it easier to accidentally go the other direction: someone might be using a library function in a const context, but then the library author might make it no longer const without even realizing it.
Hmmm, yes that is a problem...
Edit: The only solution I can think of, without going all the way to const fn, would be to have an opt-out annotation, so that the library author can reserve the right to break constness. I'm not sure that's any better than sprinkling const fn everywhere, though. The only real benefit would be more rapid adoption of a broadening definition of const.
Edit 2: I suppose that would break backwards compatibility, though, so it's a non-starter. Sorry for the side-track.
So... the discussion has died down. Let's summarize:
the rfcbot comment is https://github.com/rust-lang/rust/issues/24111#issuecomment-376649804
Current concerns:
- everything needs to be marked const, which is annoying
- the priority/fallout concern
- the design concern
I can't really talk about the priority thing + fallout concern, except that const fn has baked in nightly a long, long time.
The other two points are closely related. @eddyb's design (https://github.com/rust-lang/rust/issues/24111#issuecomment-376829588 as I understood it) is to not have const fn, but instead have a #[const] attribute that you can slap onto stuff:
#[const]
mod foo {
pub fn square(i: i32) -> i32 { i * i }
}
#[const]
fn bar(s: &str) -> &str { s }
#[const]
fn boo() -> fn(u32) -> u32 { meh }
fn meh(u: u32) -> u32 { u + 1 }
and that recursively goes into whatever is marked with it, so any function inside a #[const] module is a #[const] fn. A function declared inside a #[const] fn is also a #[const] fn.
This lowers the number of annotations needed, as some crates will just slap a #![const] in the lib.rs and be done with it.
Issues I see with that design (but those issues also often exist in const fn):
- the #[const] module tree.
- #[const] fn in argument/return type position required to be #[const] fn?
We do need to think about these things, so we don't design a system that will be incompatible with a future version where we want to be able to call functions through function pointers during a const eval.
Note that I did not propose a certain design, but merely listed some known plausible directions.
The original idea was a generalized "expose body of function" attribute, not limited to const, but there are many possible variations, and some of them might even be good.
EDIT: (don't want to forget about this) @solson was showing me how Lean has attributes like @pattern that automatically derive various things from the body of a function.
@oli-obk I think we shouldn't go with attributes, because unsafe does not use an attribute.
Also, async currently does not either. And if we introduce try fn given try { .. } blocks, then we have another thing that is not attribute based. I think we should try to stay as consistent as possible for things that are effect-like wrt. using attributes or not. #[target_feature(..)] does put a wrinkle in the overall consistency tho.
PS: You could use const mod { .. } to get the same effect as #![const] more or less. This could also apply to try mod, async mod, unsafe mod.
I will always lean towards doing things with special types.
struct SizeOf<T>;
impl<T> SizeOf<T> {
const intrinsic Result: usize;
}
it's easier to learn new types using existing syntax than learn new concepts with new syntax.
and we can later support type system at runtime.
fn sq(v: i32) -> i32 {
Square<v>::Result
}
types at compile time, const generics either at compile time or at runtime.
So... I'm suggesting to ignore the fact that there might be a possibly better design out there, because we have a design that is
and we can later support type system at runtime.
That's dependent typing, which is far off, while calling a const fn at runtime works today just fine.
@oli-obk What about traits? I do not want to stabilize const fn without some idea of what we're going to do for trait methods which are const fn in only some of the trait's impls.
@eddyb seems that I should expedite the writing of the new const bounds and methods then. :)
@Centril My point is that the attribute proposal (whether using a keyword or not) would result in a much more different approach for dealing with trait methods, and we have to compare that.
The current const fn approach might seem simple and extensible, but not when actually extended.
generic consts + const generics:
intrinsic const SizeOf<T>: usize;
const Pow<const V: usize>: usize = V*V;
@eddyb I have various solutions in mind that are fully compatible with the const fn design. I'll write it up.
Woah, I just saw it’s been two years since this started. Is there any foreseen date of stabilization? I have a crate that is almost available on stable because it waits for this extension to be stabilized. :)
@rfcbot concern runtime-pointer-addresses
On a different issue, the question of whether we want referential transparency from const fn arose, and the problem of raw pointers' addresses being used as a non-determinism oracle appeared: https://github.com/rust-lang/rust/issues/49146#issuecomment-386727325. There is a solution outlined there, but it involves making some raw pointer operations unsafe (not sure how many of them are even allowed today), before stabilization.
@eddyb Wouldn't E0018 apply to const fns as well?
The C Way is that object pointers are allowed to all be 0 unless relative (i.e. inside an object) and tracked at runtime somehow.
I'm not sure if rust supports C's aliasing rules.
@sgrif Many of the errors emitted about constants are going to go away sooner or later - miri doesn't care about the type a value is seen as; an abstract location that is in a usize value is still an abstract location (and casting it to a pointer gives you back the original pointer).
I just checked and for now, both casting pointers to integers and comparison operators between pointers are banned in constant contexts. However, this is just what we thought of, I'm still scared.
@eddyb Fair enough. However, I would expect that any concerns you have which hit const fn already hit any const blocks today.
@sgrif The difference is const (even associated consts that depend on generic type parameters) are fully evaluated at compile-time, under miri, while const fn is a non-const fn for runtime calls.
So if we really want referential transparency, we need to make sure we do not allow (in safe code, at least) things that can cause runtime non-determinism even if they're fine under miri.
Which probably means getting the bits of float is also a problem, because e.g. NaN payloads.
Things you should consider doing in miri:
All pointers are 0 unless relative. E.g.:
#[repr(C)]
struct X {
a: usize,
b: u8,
}
let x = X { a: 1, b: 2 };
let y: usize = 3;
assert_eq!(&x as *const _ as usize, 0);
assert_eq!(&x.a as *const _ as usize, 0);
assert_eq!(&x.b as *const _ as usize, 8);
assert_eq!(&y as *const _ as usize, 0);
Then you track them at miri evaluation time. Some things would be UB, like going from pointer to usize back to pointer, but those are easy to disallow (ban usize-to-pointer conversions, since you'd already be tracking pointers at runtime/evaluation time).
As for float bits, NaN normalization?
I'd think both of those would make the whole thing deterministic.
Which probably means getting the bits of float is also a problem, because e.g. NaN payloads.
I think that for full referential transparency, we'd have to make all float operations unsafe.
floating point determinism is hard
The most important point here is that LLVM's optimizer can and does change the order of float operations, as well as ~~performs~~ fuses operations it has a combined opcode for. These changes do affect the outcome of the operation, even if the actual difference is slight. This affects referential transparency because miri executes the non-llvm-optimized MIR, while the target executes the llvm-optimized, possibly reordered, and therefore possibly semantically different native code.
I would agree to a first stable const fn feature without floats for now until there is a decision on how important referential transparency is, but wouldn't want const fn to be slowed down or blocked by that discussion.
The most important point here is that LLVM's optimizer can and does change the order of float operations as well as performs fuses operations it has a combined opcode for. These changes do affect the outcome of the operation, even if the actual difference is slight.
Fusing operations (I assume you refer to mul-add) is not allowed/not done without fast-math flags precisely because it changes the rounding of results. LLVM is very careful to preserve the exact semantics of floating point operations when targeting IEEE compliant hardware, within the limits set by the "default floating point environment".
Two things that LLVM doesn't try to preserve are NaN payloads and signallingness of NaNs, because both of those can't be observed in the default fp environment -- only by inspecting the floats' bits, which we would therefore need to prohibit. (And even if LLVM was more careful about those, hardware varies in its treatment of NaN payloads as well, so you can't pin this on LLVM.)
The other major case I know of where the compiler's decision can make a difference for floating point results is the location of spills and reloads in x87 (pre-SSE) code. And this is mostly an issue because x87 by default rounds to 80 bit for intermediate results and rounds to 32 or 64 bit on stores. Properly setting the rounding mode before each FPU instruction to achieve correctly rounded results is possible, but it's not really practical and so (I believe) LLVM doesn't support it.
My view is that we should go with full referential transparency because it jives with the overall Rust message of picking safety / correctness over convenience / completeness.
Referential transparency adds a lot of reasoning benefits such as enabling equational reasoning (up to bottoms, but fast and loose reasoning is morally correct).
However, there are of course drawbacks wrt. losing out on completeness wrt. CTFE. By this I mean that a referentially transparent const fn mechanism would not be able to evaluate as much at compile time as a non-ref-transparent const fn scheme could. From those that propose not sticking with referential transparency, I'd ask that they provide as many concrete use cases as they can against this proposition so that we may evaluate the trade-offs.
Okay, seems you are right about the LLVM point, it indeed seems to avoid computationally wrong operations unless you enable fast math mode.
However, there is still a bunch of state that floating point operations depend on like internal precision. Does the CTFE float evaluator know the internal precision value at runtime?
Also, during spilling of values to memory we convert the internal value to ieee754 format and thus change precision. This can affect the result as well, and the algorithm by which compilers perform spilling is not specified, is it?
@est31 Note that I have been assuming that we don't care if compile-time and runtime behavior differ, only that repeated calls (with frozen object graphs) are consistent and global side-effect-free.
So if we really want referential transparency, we need to make sure we do not allow (in safe code, at least) things that can cause runtime non-determinism even if they're fine under miri.
So the goal here is to guarantee that a const fn is deterministic and side-effect free at run-time even if miri would error out about it during execution, at least if the const fn is entirely safe code? I never considered that to be important, TBH. It's a pretty strong requirement, and indeed we would at least have to make ptr-to-int and float-to-bits unsafe then, which will be hard to explain. OTOH, we should then also make raw pointer comparison unsafe and I'd be happy about that :D
What is the motivation for this? Seems like we're trying to re-introduce purity and an effect system, which are things Rust once had and lost, presumably because they did not carry their weight (EDIT: or because it just wasn't flexible enough to do all the different things people wanted to use it for, see https://mail.mozilla.org/pipermail/rust-dev/2013-April/003926.html).
ptr-to-int is safe if it converts from the start of an object, and you allow int-to-ptr to fail. it would be 100% deterministic. and every call would produce the same results.
ptr-to-int is safe, int-to-ptr is unsafe. ptr-to-int should be allowed to give "unexpected" results in const evaluation.
and you should really use NaN canonicalization in an interpreter/VM. (an example of this is LuaJIT, which makes all NaN results be the canonical NaN, and every other NaN is a packed NaN)
ptr-to-int is safe if it converts from the start of an object, and you allow int-to-ptr to fail. it would be 100% deterministic. and every call would produce the same results.
How do you suggest we implement this at run-time? (&mut *Box::new(...)) as usize is a non-deterministic function right now and should be all const fn. So how do you suggest we make sure that it is "referentially transparent at run-time" in the sense of always returning the same value?
Box::new should return a tracked pointer in the VM.
The conversion would result in 0, at compile-time (i.e. in a const evaluation). In non-const evaluation, it would return whatever.
A tracked pointer works like this:
You allocate a pointer, and you assign it. When you assign it, the VM sets the pointer value to 0, but takes the memory address of the pointer, and attaches it to a lookup table. When you use the pointer, you go through the lookup table. When you get the value of the pointer (i.e. the bytes that make it up), you get 0. Assuming it's the base pointer of an object.
Box::new should return a tracked pointer in the VM.
We are talking about run-time behavior here. As in, what happens when this is executed in the binary. There is no VM.
miri can already handle all of this just fine and entirely deterministically.
Also, coming back to the concern about const fn and runtime determinism: If we want that (which I am not sure we do, still waiting for someone to explain to me why we care :D ), we could just declare ptr-to-int and float-to-bits to be non-const operations. Is there any problem with that?
@RalfJung I linked https://github.com/rust-lang/rust/issues/49146#issuecomment-386727325 already in the concern comment, maybe that got lost under other messages - it includes a description of a solution @Centril proposed (make operations unsafe and whitelist a few "correct uses", e.g. offset_of for ptr-to-int and NaN-normalized float-to-bits).
@eddyb sure, there are various solutions; what I am asking for is the motivation. I haven't found it over there either.
Also, quoting myself from IRC: I think we can even rightfully argue that CTFE is deterministic, even when ptr-to-int cast is allowed. It still takes non-CTFE-code (e.g. extracting bits of a ptr cast to an int) to actually observe any non-determinism.
const fn thing() -> usize {
&*Box::new(0) as *const _ as usize
}
const X: usize = thing();
const Y: usize = thing();
is X == Y?
with my suggestion, X == Y.
@SoniEx2 that breaks foo as *const _ as usize as *const _ which is totally a noop right now
Wrt motivation:
I personally don't see an issue with runtime behaviour differing from compile time behaviour or even const fn being nondeterministic at runtime. Safety is not the issue here, so unsafe is totally the wrong keyword imo. This is just surprising behaviour that we're totally catching at compile time eval. You can already do this in normal functions, so no surprises there. Const fn is about marking existing functions as evaluable at compile time without affecting runtime. It's not about purity, at least that's the vibe I got when ppl talk about the "pure hell" (I wasn't around, so I might be misinterpreting)
@SoniEx2 Yes, X == Y always.
However, if you have:
const fn oracle() -> bool { let x = 0; let y = &x as *const _ as usize; even(y) }
fn main() {
assert_eq!(oracle(), oracle());
}
Then main may panic at runtime.
@RalfJung
or because it just wasn't flexible enough to do all the different things people wanted to use it for [..]
What were those things? It is hard to evaluate this without concretion.
I think we can even rightfully argue that CTFE is deterministic, even when ptr-to-int cast is allowed.
[...]
It still takes non-CTFE-code (e.g. extracting bits of a ptr cast to an int) to actually observe any non-determinism.
Yes, const fn is still deterministic when executed at compile time, but I don't buy that it is deterministic at runtime because there will invariably be some non-const fn (just fn) code if the result of const fn is to be useful for anything at runtime, and then that fn code will observe side effects from the executed const fn.
The whole point of separation between pure and non-pure code in Haskell is that you can make the decision "shall there be side effects" a local one where you don't have to think about possible side effects globally. That is, given:
reverse :: [a] -> [a]
reverse [] = []
reverse (x : xs) = reverse xs ++ [x]
putStrLn :: String -> IO ()
getLine :: IO String
main :: IO ()
main = do
line <- getLine
let revLine = reverse line
putStrLn revLine
You know that the result of revLine in let revLine = reverse line can only depend on the state passed into reverse, which is line. This provides reasoning benefits and clean separation.
I wonder what the effect system back then looked like... I think we need one given const fn, async fn, etc. anyways to do it cleanly and get code reuse (something like https://github.com/rust-lang/rfcs/pull/2237 but with some changes...) and parametricity seems like a good idea for that.
In the immortal words of Phil Wadler and Conor McBride:
Shall I be pure or impure?
—Philip Wadler [60]
We say ‘Yes.’: purity is a choice to make locally
https://arxiv.org/pdf/1611.09259.pdf
@oli-obk
I do not envy the person that has to document this surprising behavior that the result of execution can differ for const fn if executed at runtime or at compile time given the same arguments.
As a conservative option, I propose that we delay the decision on referential transparency / purity by stabilizing const fn as referentially transparent, making it unsafe (or non-const) to violate it, but we don't actually guarantee referential transparency, so people can't assume it either.
At a later time, when we've gained experience, we can then make the decision.
Gaining experience includes people trying to make use of the non-determinism but failing, and then reporting it and telling us about their use cases.
Yes but that's not a const evaluation.
Converting int to ptr in const doesn't seem like a huge loss. Getting field offsets seems more important, and that's what I'm trying to preserve.
I wouldn't want to silently change behaviour when adding const to a function, @SoniEx2, and that is essentially what your suggestion would do.
@Centril I agree, let's do the conservative thing now. So we don't make these unsafe, but unstable; this way the libstd can use it, but we can later decide on the details.
One problem we do have is that users can always brick any analysis we do by introducing unions and doing some fancy converting, so we'd have to make any such circumventing conversions UB so we're allowed to change it later.
it would still be the same at runtime, the const would only change things at const time, which... wouldn't be possible to even do without the const.
it's fine for const fns to be a sublanguage with different pointer aliasing rules.
@Centril @est31 I'm not sure what Chlorotrifluoroethylene has to do with anything (can we avoid using a bunch of acronyms without defining them please?)
@sgrif CTFE (compile-time function evaluation) has long been used for Rust (including in the years-old parts of this thread). It's one of those terms that should be defined somewhere central already.
@sgrif lol sorry most times I am the person who is wondering "which weird words are they using now?". I think I was talking to @eddyb in IRC when he mentioned "CTFE" and I had to ask him what it meant :p.
@eddyb ctrl+f + CTFE on this page doesn't lead to anything defining it. Either way, I'm mostly just going on the assumption that we don't want to force people to do a lot of digging to take part in discussions here. "You should already know what this means" is fairly exclusionary IMO.
@Centril @est31 thanks :heart:
So... any thoughts about
One problem we do have is that users can always brick any analysis we do by introducing unions and doing some fancy converting, so we'd have to make any such circumventing conversions UB so we're allowed to change it later.
Should union field accesses inside const fn thus be unstable, too?
ctrl+f + CTFE on this page doesn't lead to anything defining it.
Sorry, what I meant by that is, its usage in Rust's design & development predates this pre-1.0 issue.
There should be a central document for such terms, but sadly, the reference's glossary is really lacking.
HRTB and NLL are two examples of other acronyms which are newer but also not explained inline.
I don't want any discussions to be exclusionary, but I don't think it's fair to ask either @Centril or @est31 to define CTFE, since neither of them introduced the acronym in the first place.
There's a solution that sadly wouldn't necessarily work on GitHub, which I've seen in a certain subreddit, where a bot will create a top-level comment and keep it updated, with a list of expansions for all the commonly-used acronyms of that subreddit showing up in the discussion.
Also, I'm curious if I'm in a Google bubble, since the two wikipedia articles for CTFE ("Chlorotrifluoroethylene" and "Compile time function execution") are the first two results for me. Even then, the former starts with "For the compiler feature, see compile time function execution.".
(I put the CTFE wiki link at the top; now can we please get back to const fns? :P)
Could someone add CTFE to the rustc guide glossary (I'm on mobile)?
Also, I think we should stabilize the bare minimum and see what people run into a lot in practice, which seems to be the current approach with const if and match expressions.
@Centril
I do not envy the person that has to document this surprising behavior that the result of execution can differ for const fn if executed at runtime or at compile time given the same arguments.
The even you wrote above would fail if executed at CTFE time because it inspects the bits of a pointer. (Though actually we can deterministically say that this is even because of alignment, and if the evenness test is done via bit operations, "full miri" would do that properly. But let's assume you are testing the entire least significant byte, not just the last bit.)
The case we are talking about here is a function that errors with an interpreter error at compile-time, but succeeds at run-time. The difference is in whether the function even completes execution. I don't think that's hard to explain.
I agree that if CTFE succeeds with a result, then the run-time version of the function should also be guaranteed to succeed with the same result. But that's a much weaker guarantee than what we are talking about here, is it not?
@oli-obk
I agree, let's do the conservative thing now. so we don't make these unsafe, but unstable. this way the libstd can use it, but we can later decide on the details.
I lost context, what is "these" here?
Right now, luckily, CTFE miri just outright refuses to do anything with pointer values -- arithmetic, comparison, everything errors. This is a check done at CTFE time based on the values actually used in the computation, it cannot be circumvented by unions and anyway the code that would be needed to do the arithmetic/comparison just doesn't exist. Hence I am pretty sure that we satisfy the guarantee I stated above.
I could imagine problems if we had CTFE return a pointer value, but how would a compile-time-computed pointer value even make any sense anywhere? I assume we already check whatever miri computes to not contain pointer values because we have to turn it into bits?
We could carefully add operations to CTFE miri, and in fact all we need for @eddyb's offset_of [1] is pointer subtraction. That code exists in "full miri", and it only succeeds if both pointers are inside the same allocation, which is sufficient to maintain the guarantee above. What would not work is the assert that @eddyb added as a safeguard.
We could also allow bit operations on pointer values if the operations only affect the aligned part of the pointer, that's still deterministic and the code actually already exists in "full miri".
EDIT: [1] For reference, I am referring to his macro in this thread that we cannot link to because it was marked off-topic, so here's a copy:
macro_rules! offset_of {
($Struct:path, $field:ident) => ({
// Using a separate function to minimize unhygienic hazards
// (e.g. unsafety of #[repr(packed)] field borrows).
// Uncomment `const` when `const fn`s can juggle pointers.
/*const*/ fn offset() -> usize {
let u = $crate::mem::MaybeUninit::<$Struct>::uninit();
// Use pattern-matching to avoid accidentally going through Deref.
let &$Struct { $field: ref f, .. } = unsafe { &*u.as_ptr() };
let o = (f as *const _ as usize).wrapping_sub(&u as *const _ as usize);
// Triple check that we are within `u` still.
assert!((0..=$crate::mem::size_of_val(&u)).contains(&o));
o
}
offset()
})
}
EDIT2: Actually, he also posted it here.
"these" are float -> bits conversion and pointer -> usize conversions
I agree that if CTFE succeeds with a result, then the run-time version of the function should also be guaranteed to succeed with the same result. But that's a much weaker guarantee than what we are talking about here, is it not?
So a function called with arguments at run time is only guaranteed purity if it actually terminates if evaluated at compile time with the same arguments?
That would make our lives a million times easier, especially since I do not see a way to prevent the nondeterminism without either leaving loopholes or crippling const fn in breaking-change ways.
So a function called with arguments at run time is only guaranteed purity if it actually terminates if evaluated at compile time with the same arguments?
Yes, that's what I am proposing.
Thinking about it some more (and reading @oli-obk's reply that appeared while I was writing this), I get the feeling that what you want is an additional guarantee along the lines of "a safe const fn will not error at CTFE time (other than panics) when called with valid arguments". Some kind of "const safety" guarantee. Together with the guarantee I stated above about successful CTFE coinciding with run-time behavior, that would provide a guarantee that a safe const fn will be deterministic at run-time, because it matches the successful CTFE execution.
I agree that's a harder guarantee to obtain. For better or worse, Rust has various safe operations that CTFE miri cannot guarantee to always execute successfully while maintaining determinism, like the variant of @Centril's oracle that tests the least significant byte for being 0. From the perspective of CTFE in this setting, the "valid values of type usize" constitute only values that are PrimVal::Bytes, while a PrimVal::Ptr should not be allowed [1]. It's like we have a slightly different type system in const context -- I am not proposing we change what miri does, I am proposing we change what the various Rust types "mean" when attached to a const fn. Such a type system would guarantee that all the safe arithmetic and bit operations cannot go wrong in CTFE: For CTFE-valid inputs, integer subtraction can never fail in CTFE because both sides are PrimVal::Bytes. Of course, ptr-to-int has to be an unsafe operation in this setting because its return value has type usize but is not a CTFE-valid usize.
If this is a guarantee we care about, it does not seem unreasonable to me to make the type system more strict when checking CTFE functions; after all, we want to use it to do more things (checking "const safety"). I think it would make a lot of sense, then, to declare ptr-to-int casts unsafe in const context, arguing that this is needed because the const context makes additional guarantees.
Just like our normal "runtime safety" can be subverted by unsafe code, so can "const safety", and that's fine. I don't see any problems with unsafe code being able to still do ptr-to-int casts via unions -- that's unsafe code, after all. In this world, the proof obligations for unsafe code in a const context are stronger than the ones in a non-const context; if your function returns an integer you have to prove that this will always be a PrimVal::Bytes and never be a PrimVal::Ptr.
@eddyb's macro would need an unsafe block, but it would still be safe to use because it only makes use of these "const unsafe features" (ptr-to-usize and then substracting the result, or just using the pointer subtraction intrinsic directly) in a way that is guaranteed not to raise a CTFE error.
The cost of such a system would be that a safe higher-order const fn must take a const fn closure to be able to guarantee that executing the closure will not, itself, violate "const safety". Such is the price of actually obtaining proper guarantees about safe const fn.
[1] I am fully ignoring floats here as I don't know much about where the non-determinism would arise. Can someone provide an example for floating point code that would behave differently at CTFE time than at run-time? Would it be sufficient to e.g. make miri error out if, when doing a floating point operation, one of the operands is a signalling NaN (to obtain the first guarantee, the one from my previous post), and to say that the CTFE type system does not allow signalling NaNs at type f32/f64 (to obtain "const safety")?
What would not work is the assert that @eddyb added as a safeguard.
Sure, but you can rewrite assert!(condition); to [()][!condition as usize]; for now.
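A small compilable sketch of that indexing trick (my own example, not from the comment); the out-of-bounds index simply aborts constant evaluation, which is what stands in for the assert:

// If `condition` is true, `!condition as usize` is 0 and the indexing succeeds;
// if it is false, the index is 1, which is out of bounds for a 1-element array
// and therefore fails constant evaluation.
const fn const_assert(condition: bool) {
    [()][!condition as usize]
}

const _OK: () = const_assert(1 + 1 == 2);      // compiles
// const _FAIL: () = const_assert(1 + 1 == 3); // would be a compile-time error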
Sure, but you can rewrite assert!(condition); to [()][!condition as usize]; for now.
It's not the assert I was thinking of, it's the pointer equality test in your condition. Pointer equality is evil and I'd prefer if we could not allow it in CTFE.
EDIT: Never mind, I just realized the assert tests the offset. So in fact it can never fail at CTFE time because if the pointers are not in the same block when doing wrapping_sub, miri will error out.
// guess we can't have this as const fn
fn is_eq<T>(a: &T, b: &T) -> bool {
a as *const _ == b as *const _
}
as I said before, use virtual pointers in miri, rather than real pointers. it can provide const determinism at const time. if the function is written correctly, runtime and compile-time behaviour should produce the same results regardless of runtime being non-deterministic while compile-time is deterministic. you can have deterministic behaviour in non-deterministic environments if you code for it.
getting a field offset is deterministic. with virtual pointers, it stays deterministic. with real pointers, it's still deterministic.
getting the even-ness of the 14th bit of a pointer is not deterministic. with virtual pointers, it becomes deterministic. with real pointers, it's not deterministic. this is fine, because one is happening at compile-time (a deterministic environment), while the other is happening at runtime (a non-deterministic environment).
const fn should be as deterministic as the environment they're being used in.
@SoniEx2
// guess we can't have this as const fn
Indeed we cannot. I think I could live with a version of raw pointer comparison that errors out if either pointer is not currently dereferencable in the sense of being within an allocated object (but that's not what "full miri" currently implements). However, that would still make is_eq not "const safe" because if T is zero-sized, it could point one past the end of an object even if we only consider safe code.
C++ allows comparing pointers one past the end to yield an indeterminate (think: nondeterministic) result. Both C and C++ allow comparing a dangling pointer to yield an indeterminate result. It's not clear what LLVM guarantees but I'd rather not bet on guarantees that exceed what they have to guarantee for C/C++ (the weaker of the two, if they differ). This is a problem if we want to guarantee run-time determinism for everything that successfully executes in CTFE, which I think we do.
@RalfJung
The difference is in whether the function even completes execution.
Devils advocate: "returning ⊥" in one case is the same as it having different results.
I don't think that's hard to explain.
Personally I view it as surprising behavior; You can explain it, and I might understand (but I am not representative...), but it does not fit my intuition.
@oli-obk
So a function called with arguments at run time is only guaranteed purity if it actually terminates if evaluated at compile time with the same arguments?
Personally, I don't find this guarantee sufficient. I think we should first see how far we can get with purity and only when we know it is crippling in practice should we move towards weaker guarantees.
@RalfJung
that would provide a guarantee that a safe const fn will be deterministic at run-time, because it matches the successful CTFE execution.
OK; You lost me; I don't see how you arrived at this "safe const fn is deterministic" guarantee given the two premises; could you elaborate on the reasoning?
If this is a guarantee we care about, it does not seem unreasonable to me to make the type system more strict when checking CTFE functions; after all, we want to use it to do more things (checking "const safety"). I think it would make a lot of sense, then, to declare ptr-to-int casts unsafe in const context, arguing that this is needed because the const context makes additional guarantees.
Just like our normal "runtime safety" can be subverted by unsafe code, so can "const safety", and that's fine. I don't see any problems with unsafe code being able to still do ptr-to-int casts via unions -- that's unsafe code, after all. In this world, the proof obligations for unsafe code in a const context are stronger than the ones in a non-const context; if your function returns an integer you have to prove that this will always be a
PrimVal::Bytes
and never be aPrimVal::Ptr
.
These paragraphs are music to my ears! ❤️ This does seem to ensure determinism ("purity")? and is precisely the thing I had in mind earlier. I think const safety is also a terrific term!
For future reference, let me call this guarantee "CTFE soundness": If CTFE does not error, then its behavior matches the run-time -- both diverge, or both finish with the same value. (I am entirely ignoring higher-order return values here.)
@Centril
Devils advocate: "returning ⊥" in one case is the same as it having different results.
Well, that's clearly a matter of definition. I think you understood the CTFE soundness guarantee I was describing and you agree it is a guarantee we want; whether it is all we want is up for discussion :)
OK; You lost me; I don't see how you arrived at this "safe const fn is deterministic" guarantee given the two premises; could you elaborate on the reasoning?
Let's say we have some call to foo(x) where foo is a safe const function and x is a const-valid value (i.e., it is not &y as *const _ as usize). Then we know that foo(x) will execute in CTFE without raising an error, by const safety. As a consequence, by CTFE soundness, at run-time foo(x) will behave the same way it did in CTFE.
Essentially I think I decomposed your guarantee into two pieces -- one ensuring that a safe const fn will never attempt to do something that CTFE does not support (like reading from stdin, or determining whether the least significant byte of a pointer is 0), and one ensuring that whatever CTFE does support matches runtime.
These paragraphs are music to my ears! heart This does seem to ensure determinism ("purity")? and is precisely the thing I had in mind earlier. I think const safety is also a terrific term!
Glad you like it. :) This means I finally understand what we are talking about here. "purity" can mean so many different things, I often feel a little uneasy when the term is used. And determinism is not a sufficient condition for const safety, the relevant criterion is whether execution in CTFE raises an error. (One example of a deterministic non-const-safe function is my variant of your oracle multiplied by 0. This is not okay to do even using unsafe code as miri will error out when inspecting the bytes of a pointer, even if the bytes ultimately do not matter. It's like the operation that extracts the least significant byte of a pointer is "const-UB" and hence not allowed even in unsafe const code.)
yes, a pointer one past the end of an array element would point to the next array element, probably. so what? it's not really nondeterministic? deterministic at compile-time is all that matters anyway. as far as I care, runtime evaluation could segfault for stack exhaustion.
@RalfJung
The cost of such a system would be that a safe higher-order const fn must take a const fn closure to be able to guarantee that executing the closure will not, itself, violate "const safety". Such is the price of actually obtaining proper guarantees about safe const fn.
I believe this can be severely mitigated to support transforming most existing code by introducing ?const, where you can write higher order functions whose result can be bound to a const if and only if the function provided is ?const fn(T) -> U and where is_const(x : T); so you have:
?const fn twice(fun: ?const fn(u8) -> u8) { fun(fun(42)) }
fn id_impure(x: u8) -> u8 { x }
const fn id_const(x: u8) -> u8 { x }
?const fn id_maybe_const(x: u8) -> u8 { x }
fn main() {
let a = twice(id_impure); // OK!
const b = twice(id_impure); // ERR!
let c = twice(id_const); // OK!
const d = twice(id_const); // OK!
let e = twice(id_maybe_const); // OK!
const f = twice(id_maybe_const); // OK!
}
I'll write up an RFC proposing something to this effect (and more) in a week or so.
@Centril at this point you are developing an effect system with effect polymorphism. I know that was always your secret (?) agenda, just letting you know it's getting blatantly obvious.^^
@RalfJung I already revealed the secret at https://github.com/rust-lang/rfcs/pull/2237 last year but I'll have to rewrite it ;)
Pretty much public domain now ^,-
@SoniEx2
yes, a pointer one past the end of an array element would point to the next array element, probably. so what? it's not really nondeterministic? deterministic at compile-time is all that matters anyway. as far as I care, runtime evaluation could segfault for stack exhaustion.
The problem is in situations like the following (in C++):
int x[2];
int y; // let's assume y is put right after the end of x in the stack frame
if (&x[0] + 2 == &y) {
// ...
}
C compilers want to (and do!) optimize that comparison to false. After all, one pointer points into x and one into y, so it is not possible for them to ever be equal.
Except, of course, that the addresses are equal on the machine because one pointer points right at the end of x, which is the same address as (the beginning of) y! So, if you obscure the code enough such that the compiler does not see any more where the addresses come from, you can tell that the comparison evaluates to true. The C++ standard hence allows both results to occur nondeterministically, justifying both the optimization (which says false) and the compilation to assembly (which says true). The C standard does not allow this, making LLVM (and GCC) non-conforming compilers as both will perform these kinds of optimizations.
I wrote a summary of my ideas of const safety, const soundness etc. that have come up here in this thread and/or in related discussion on IRC: https://www.ralfj.de/blog/2018/07/19/const.html
This issue here has become somewhat hard to disentangle because so many things have been discussed. @oli-obk helpfully created a repo for const-eval concerns, so a good place to discuss specific sub-issues is probably the issue tracker of https://github.com/rust-rfcs/const-eval.
@Centril suggested to stabilize a minimal version that is forward compatible to any future extensions:
- no generic arguments with trait bounds
- no fn pointer or dyn Trait type
- no unions (already behind an extra feature gate) and no raw pointer derefs (generally forbidden in any constant right now); any other unsafe code needs to go through other unsafe const fns or const intrinsics, which require their own discussion wrt stabilization.
(nit: my suggestion also included a recursive check for fn pointers or dyn Trait on the return type of const fns)
no generic arguments with trait bounds
To clarify, would something like this be accepted or not?
struct Mutex<T> where T: Send { /* .. */ }
impl<T> Mutex<T> where T: Send {
pub const fn new(val: T) -> Self { /* .. */ }
}
The bound is not part of the const fn itself.
Also to clarify: is the proposal to stabilize those things and leave the rest behind the feature gate OR to stabilize those and make the rest an error altogether?
@mark-i-m the rest would stay behind a feature gate.
@oli-obk what is the problem with unsafe code? We do allow unsafe in const X : Ty = ..., which has all the same problems. I think const fn bodies should be checked exactly like const bodies.
I think we do want to remain conservative around "unconst" operations -- operations that are safe but not const-safe (basically anything on raw pointers) -- but those are already disallowed entirely in const context, right?
The bound is not part of the const fn itself.
No, bounds on the impl block would also not be allowed under that proposal
what is the problem with unsafe code?
I don't see any problems as noted in my comment. Every const unsafe feature/function needs to go through their own stabilization anyway.
@RalfJung I think the problem is "@Centril is nervous that we missed something wrt. those are already disallowed entirely in const context". ;) But we have to stabilize unsafe { .. } in const fn at some point, so if you are sure there are no problems and that we caught all unconst operations then let's do it?
Moreover, if we missed something, we're already kind of screwed as people can use it on const.
I still plan to write a PR filling in the const safety / promotion parts of the const fn RFC repo; that would be the time to check carefully if we covered everything (and have testcases).
Another thing that came up on Discord is FP operations: We cannot currently guarantee that they will match real hardware. CTFE will follow IEEE exactly, but LLVM/hardware might not.
This also applies to const items, but those will never be runtime-executed -- whereas const fn might be. So it seems prudent to not stabilize FP operations in const fn.
OTOH, we already promote results of FP operations? So there we already have that run-time/compile-time mismatch observable on stable. Would it be worth a crater run to see if we can undo that?
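For readers unfamiliar with promotion, a small sketch of what is meant (my own example, assuming float arithmetic is still a promotion candidate on current compilers): the 'static reference below requires the operation's result to live in static memory, so it is computed at compile time even though no const item is involved.

fn main() {
    // `0.1 + 0.2` here is promoted to a `'static` value, i.e. evaluated at
    // compile time, even though no `const` item is involved.
    let promoted: &'static f32 = &(0.1 + 0.2);

    // The same operation performed at runtime; the concern in this thread is
    // that the two results are not guaranteed to match on every target.
    let (a, b) = (0.1f32, 0.2f32);
    let at_runtime = a + b;
    println!("{} vs {}", promoted, at_runtime);
}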
For future reference, the following article is relevant with respect to floating points and determinism:
@RalfJung
Would it be worth a crater run to see if we can undo that?
I would be surprised if we could do this, but it is worth a try at least. :)
@RalfJung there might be interesting input further up in this thread https://github.com/rust-lang/rust/issues/24111#issuecomment-386764565
Assuming we want to keep the keyword form const fn, I think stabilizing something now, that's limited enough, is a pretty decent solution (how did I not see it before?!)
When we do these piecemeal stabilizations, I just want to register a request for a clear list of the restrictions and their justifications. It's going to be frustrating when users make a seemingly innocuous change and run into a compile error, so we can at least make the errors good. I recognize the reasoning is spread across many discussions, not all of which I've followed, but I think there ought to be a table in the docs (or the reference, or the nomicon, at least) listing each disallowed operation, the problem it could cause, and the prospects for stabilization (e.g. "never", "if RFC XYZ is implemented", "after we nail down this part of the spec").
@est31 As @rkruppe already wrote, those fuses would be illegal to perform without -ffast-math -- and I think LLVM handles that correctly.
From what I recall, basic arithmetic is not even all that problematic (except on 32bit x86 because x87...), but transcendental functions are. And those are not const fn, right? So my hope would be that in the end we can have parity between const items, promotion and const fn in this regard as well.
@RalfJung I'm still not convinced that e.g. spilling and then loading it back into the FPU registers in between some operations gives the same results as the same computation without that spilling.
From what I recall, basic arithmetic is not even all that problematic (except on 32bit x86 because x87...), but transcendental functions are.
How would transcendental functions be a problem?
IIRC, while some transcendental functions are supported by common x86 processors, those functions are slow and eschewed, and only included for completeness and compatibility with existing implementations. Thus, almost everywhere, transcendental functions are expressed in terms of combinations of basic arithmetic functions. This means that there's no difference in their referential transparency to that of arithmetic functions. If basic functions are "safe" then anything built on them is, including transcendental functions. The only source of "referential intransparency" here might be different approximations (implementations) of those transcendental functions in terms of those basic arithmetic functions. Is that the source of the problem?
@est31 While most transcendental functions are ultimately just library code composed of primitive integer and float operations, these implementations are not standardized, and in practice a Rust program can interact with maybe three different implementations throughout its lifetime, some of which also vary by host or target platform, among them compile-time evaluation (using rustc_apfloat, or interpreting a Rust implementation such as https://github.com/japaric/libm/).

If any of these disagree with each other, you can get different results depending on when an expression is evaluated.
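To make the compile-time/run-time split concrete, here is a minimal sketch (constant and function names are mine, not from the thread). Basic arithmetic like this is required to be correctly rounded, so both evaluations agree; the concern above is about operations without that guarantee:

const AT_COMPILE_TIME: f32 = 0.1 + 0.2; // evaluated by the const evaluator (rustc_apfloat)

fn at_run_time(a: f32, b: f32) -> f32 {
    a + b // evaluated by target instructions, or folded by LLVM with its own model
}

fn main() {
    // Holds because IEEE 754 addition is correctly rounded everywhere; a
    // transcendental function in the same position would offer no such guarantee.
    assert_eq!(AT_COMPILE_TIME, at_run_time(0.1, 0.2));
}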
@rfcbot cancel
We are unlikely to stabilize the full monty in the near future.
Instead, I'd like to develop consensus for a more minimal subset (as roughly outlined in https://github.com/rust-lang/rust/issues/24111#issuecomment-414310119) that we hopefully can stabilize in the near term.
This subset is tracked in #53555. Further description is available there.
@Centril proposal cancelled.
@rkruppe is there a reason for lowering the transcendental functions to llvm intrinsics? Can't we just avoid the entire problem by lowering them to well known, rust-only implementations that we control and that are the same on all platforms?
is there a reason for lowering the transcendental functions to llvm intrinsics?
Besides the simplicity of not implementing a whole cross-platform libm, the intrinsics have advantages in LLVM's optimizer and codegen that an ordinary library function won't get. Obviously constant folding (& related things like value range analysis) is a problem in this context, but it's quite useful otherwise. There are also algebraic identities (applied by the SimplifyLibCalls pass). Finally, a few functions (mostly sqrt and its reciprocal, which are not transcendental but whatever) have special code generation support to e.g. generate sqrtss on x86 with SSE.
Can't we just avoid the entire problem by lowering them to well known, rust-only implementations that we control and that are the same on all platforms?
Even ignoring all of the above, this is not a great option IMO. If the target platform has a C libm, it should be possible to use it, either because it's more optimized or to avoid bloat.
the intrinsics have advantages in LLVM's optimizer and codegen that an ordinary library function won't get.
I thought such optimizations were only enabled if fast math is turned on?
If the target platform has a C libm, it should be possible to use it, either because it's more optimized or to avoid bloat.
Sure. There should always be an option to trade this rather academic property -- referential transparency -- in favour of things that matter more, like improved speed of the compiled binary or smaller binaries, e.g. by using the platform libm or by turning on fast math mode. Or do you suggest that we should disallow fast math mode for all eternity?
IMO we shouldn't disallow f32::sin() in const contexts, at least not if we are allowing +, - etc. Such a ban will force people to create and use crates that provide const-compatible implementations.
I thought such optimizations were only enabled if fast math is turned on?
Constant evaluation of these functions and emission of sqrtss is easy to justify without -ffast-math, since it can be made correctly rounded.
Or do you suggest that we should disallow fast math mode in all eternity?
I am not suggesting anything, nor do I have an opinion (atm) on whether such a property should be guaranteed. I am simply reporting constraints.
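A small, hedged illustration of the "correctly rounded" point (nothing here is specific to LLVM's folding; it just shows why there is only one acceptable answer for sqrt):

fn main() {
    // IEEE 754 requires sqrt to be correctly rounded, so compile-time folding
    // and the hardware sqrt instruction must produce the same bits.
    let r = 2.0_f64.sqrt();
    assert!((r * r - 2.0).abs() < 1e-15);
    // sin, cos, exp, log, ... carry no such requirement, so different libm
    // implementations may legitimately differ in the last bits.
}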
Couldn't find an open issue for an ICE caused by Vec in a const fn context. Should I open a new issue for this?
@Voultapher yeah that looks like a new ICE.
Alright opened #55063.
If the compiler is able to check whether a function can be called in a compile-time constexpr when a user annotates it with const fn, why not just automatically perform the check on all functions (similar to auto-traits)? I can't think of any real adverse effects, and the clear benefit is that we don't have to depend on error-prone human judgement.
The major downside is that it becomes a public detail, so an implementation change in a function not meant to be const is now breaking.
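A hedged sketch of that hazard (names are hypothetical, and the function is written as an explicit const fn only so the snippet compiles today; in an inference world the author would not have opted in):

const fn max_len() -> usize {
    1024 // version 1: the body just happens to be const-evaluable
}

// A downstream crate starts relying on the (inferred or declared) const-ness:
const BUF_LEN: usize = max_len();
static BUFFER: [u8; BUF_LEN] = [0; BUF_LEN];

fn main() {
    assert_eq!(BUFFER.len(), 1024);
}

// If const-ness were inferred, changing the body of max_len to something
// non-const (say, reading a configuration value at run time) would break the
// downstream const/static items above, even though the author never promised
// compile-time evaluability.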
Also we don't need to rely on human judgement. We can have a clippy lint that tells us when an unannotated function could be const fn: https://github.com/rust-lang/rust-clippy/issues/2440

This is similar to how we don't infer mutability of local variables, but instead have the compiler tell us where to add or remove mut.
@remexre const fn acts as an interface specification. I'm not very acquainted with the tiny details of this feature (and maybe what follows here is already thought out), but two cases where I can see the compiler telling you that a function is incorrectly annotated as const are failing compilation if such a function takes a &mut as a parameter, or if it calls other non-const functions. So if you change the implementation of a const fn and you break those constraints, the compiler will stop you. Then you can choose to implement the non-const bits in (a) separate function(s), or break the API if that was an intended change.
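A hedged sketch of the second case (function names are hypothetical):

const fn double(x: u32) -> u32 {
    x * 2
}

// Changing the body to call a non-const function, e.g.
//
//     const fn double(x: u32) -> u32 {
//         load_scale_at_runtime() * x
//     }
//
// fails to compile with an error along the lines of "calls in constant
// functions are limited to constant functions" (E0015), so the author must
// either keep the body const-evaluable or drop the const qualifier, which is
// a breaking API change.

fn main() {
    const Y: u32 = double(21);
    assert_eq!(Y, 42);
}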
There's another middle point that I haven't seen discussed, which is the possibility of introducing an opposite of this marker and some sort of "function purity inference" when it's not explicitly set. Then the docs would show the actual marker, but with some sort of warning about not guaranteeing the stability of that marker if it's a const. The problem is that this might encourage being lazy and doing this almost every time, which is not its purpose.
should a const fn be able to produce output? why should &mut be disallowed?
@aledomu My comment was directed at @AGaussman; I'm talking about the case where a library author exposes a function that's not "meant to be" const (in that the const-ness is not intended to be part of the API); if const were to be inferred, it would be a breaking change to make said function non-const.
@SoniEx2 const fn is a function that can be evaluated at compile time, which happens to be the case only for pure functions.
@remexre If it's not meant to be a stable part of the API, just don't mark it.
As for the inference bit I commented on, that's why I mentioned the need for some warning in the crate docs.
what's the difference? absolutely none!
const fn add_1(x: &mut i32) { *x += 1; }
let mut x = 0;
add_1(&mut x);
assert_eq!(x, 1);
x = 0;
add_1(&mut x);
assert_eq!(x, 1);
const fn added_1(x: i32) -> i32 { x + 1 }
let mut x = 0;
x = added_1(x);
assert_eq!(x, 1);
x = 0;
x = added_1(x);
assert_eq!(x, 1);
I've filed targeted issues for:
The following targeted issues already exist:

- usize casts: https://github.com/rust-lang/rust/issues/51910
- &mut T references and borrows: https://github.com/rust-lang/rust/issues/57349

If there are other areas, not already tracked by other issues, that need to be discussed wrt. const eval and const fn, I suggest that people make new issues (and cc me + @oli-obk in them).
This concludes the usefulness of this issue, which is hereby closed.
I don’t have all the specifics in mind, but isn’t there a lot more that’s supported by miri but not enabled yet in min_const_fn? For example raw pointers.
@SimonSapin Yeah, good catch. There are some more existing issues for that. I've updated the comment + issue description. If there's something not covered that you happen upon, make new issues please.
I think it’s not appropriate to close a meta tracking issue when it’s not at all clear that what it covers is exhaustively covered by more specific issues.
When I remove #![feature(const_fn)] in Servo, the error messages are:

trait bounds other than `Sized` on const fn parameters are unstable
function pointers in const fn are unstable

(These const fns are all trivial constructors for types with private fields. The former message is on the constructor of struct Guard<T: Clone + Copy>, even though Clone is not used in the constructor. The latter is for initializing Option<fn()> (simplified) to either None or Some(name_of_a_function_item).)
However, neither traits nor function pointer types are mentioned in this issue’s description.
I don’t mean we should have just two more specific issues for the above. I mean we should reopen this one until we somehow ensure that everything behind the const_fn feature gate (which still points here in error messages) has a tracking issue. Or until const_fn is fully stabilized.
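For reference, a hedged reconstruction of the two patterns described above (type and method names are hypothetical; on compilers of that era both bodies needed the const_fn feature gate and produced the quoted errors without it, while later stable releases accept them):

struct Guard<T: Clone + Copy> {
    value: T,
}

impl<T: Clone + Copy> Guard<T> {
    // Triggered "trait bounds other than `Sized` on const fn parameters are
    // unstable", even though neither Clone nor Copy is used in the body.
    const fn new(value: T) -> Self {
        Guard { value }
    }
}

struct Hooks {
    callback: Option<fn()>,
}

impl Hooks {
    // Triggered "function pointers in const fn are unstable".
    const fn new() -> Self {
        Hooks { callback: None }
    }
}

fn main() {
    let _guard = Guard::new(0u8);
    let _hooks = Hooks::new();
}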
@SimonSapin
I think it’s not appropriate to close a meta tracking issue when it’s not at all clear that what it covers is exhaustively covered by more specific issues.
This issue has the flavor of https://github.com/rust-lang/rust/issues/34511 which is one of the biggest messes there are as far as tracking issues go. This issue has also been a free-for-all for some time so it doesn't act as a meta-issue right now. For such free-for-alls, please use http://internals.rust-lang.org/ instead.
However, neither traits nor function pointer types are mentioned in this issue’s description.
I don’t mean we should have just two more specific issues for the above.
That's exactly what I think should be done. From a T-Lang triage perspective, it is favorable to have targeted and actionable issues.
I mean we should reopen this one until we somehow ensure that everything behind the const_fn feature gate (which still points here in error messages) has a tracking issue. Or until const_fn is fully stabilized.
It's not even clear to me what the const_fn feature gate even constitutes, or that it will all be stabilized at some point. Everything besides bounds and function pointers from the original RFC has open issues, and then some.
It's not even clear to me what the const_fn feature gate even constitutes
That’s exactly why we shouldn’t close it until we figure that out, IMO.
Everything
Is it really everything, though?
Does someone know what happened to the const_string_new feature? Is there a tracking issue for it? The unstable book just links here.
@phansch That's because all rustc_const_unstable attributes point here. (cc @oli-obk can we fix that?)
The issue should be open then. It's just insulting as a user to be pointed to a closed issue.
@durka: There's always going to be a possible window where something is closed in nightly and the resolution still hasn't landed in stable. How is that insulting?
I've been resisting commenting here, and maybe we should move this conversation to a thread on internals (is there one already?) but...
The decision to close this makes no sense to me. It's a tracking issue, because it shows up in error messages from the compiler, and it's not alone, see this post for some more examples: https://internals.rust-lang.org/t/psa-tracking-for-gated-language-features/2887. Closing this issue to me implies stability, which is obviously not yet the case.
I frankly can't see an argument for closing this... I'm glad more targeted issues now exist, so implementation can move forward, with hopefully new discussion and focus, but I don't see a clear way to associate the compiler messages with those.
Again, if this needs (or already has) a thread on internals maybe let's move this conversation there?
EDIT: Or is the issue just that the book is outdated? Trying the example from the RFC (it's missing a couple #[derive(...)]s) seems to work without errors on rustc 1.31.1. Are there still compiler error messages pointing here? It would be nice to have a place to link errors like:
error: only int, `bool` and `char` operations are stable in const fn
If we want to have them linked to the specific issues that would be an improvement possibly.
Ok, so here should be some strong evidence for this issue remaining open. As far as I can tell, this is the only active feature that points to a closed issue.
In an ideal world I believe these kinds of discussions should really be automated, since as we've discovered, people have varying opinions and ideas about how things should work. But that's really not a conversation for this thread...
If we want to have them linked to the specific issues that would be an improvement possibly.
Yes, this is the correct solution, and what @Centril already suggested.
The initial comment has also been edited to redirect people who arrive here in the "window" that @ErichDonGubler mentions to the specific issues.
https://github.com/rust-lang/rust/issues/57563 has now been opened to track the remaining unstable const features.
Someone could edit the issue body here to prominently link to #57563 then?
@glaebhoerl done :)
Hi, I got here because I got error[E0658]: const fn is unstable (see issue #24111) when compiling ncurses-rs. What should I do? Upgrade Rust? I've got
$ cargo version
cargo 1.27.0
$ rustc --version
rustc 1.27.2
EDIT: did brew uninstall rust and followed the rustup install instructions; now rustc --version is rustc 1.33.0 (2aa4c46cf 2019-02-28) and that error went away.
Yes, in order to be able to use const fn on stable, you'll need to update your compiler.
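For anyone else landing here from that error: once the compiler is new enough (the minimal const fn subset shipped in stable Rust 1.31), something like this works without any feature gate:

const fn square(x: u32) -> u32 {
    x * x
}

const N: u32 = square(4); // evaluated at compile time

fn main() {
    assert_eq!(N, 16);
}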
Most helpful comment
Please ignore this if I am completely off point.
The problem I see with this RFC is that, as a user, you have to mark as many functions as const fn as possible, because that will probably be the best practice. The same thing is currently happening in C++ with constexpr. I think this is just unnecessary verbosity.

D doesn't have const fn, but it allows any function to be called at compile time (with some exceptions), for example
Note, I am not really a Rust user and I have only read the RFC a few minutes ago, so it is possible that I might have misunderstood something.