Rust: Tracking issue for `Reflect` stabilization

Created on 12 Aug 2015 · 48 comments · Source: rust-lang/rust

The Reflect trait is currently unstable. It was intended to narrow the use of things like Any for runtime introspection, to help retain parametricity properties. However, with impl specialization these properties would go away anyhow, so it's not clear how useful the trait would be.

Thus, stabilization should wait at least until specialization is settled.

cc @reem @nikomatsakis @pnkfelix

B-unstable T-lang T-libs final-comment-period

All 48 comments

I am in favor of it going away, I've had no use for it (and have heard of none) other than as necessary boilerplate whenever I want to do downcasting. I know of absolutely no types either in std or in the ecosystem which opt out of Reflect (and I can't really think of a good reason to do so).

I know of absolutely no types either in std or in the ecosystem which opt out of Reflect (and I can't really think of a good reason to do so).

This is intentional. The point is precisely to prevent downcasting of opaque type parameters.

I agree that the outcome of specialization is the key question.

Nominating for 1.6 discussion

@alexcrichton I do not understand, isn't the situation still:

Thus, stabilization should wait at least until specialization is settled.

(And, to my knowledge, specialization is not settled?)

That was part of the discussion I'd like to have :)

With nonparametric dropck I wanted to check in on this and see if we could deprecate/remove, but if it still has to do with specialization then I think we'd just keep punting.

@alexcrichton Yeah, it's tied to specialization. The change to dropck was a conservative one that will make room for specialization but doesn't imply that we give up on parametricity.

Alex has been taught the error of his ways, so this issue is not moving into FCP this cycle.

We discussed this in the @rust-lang/lang meeting. We would like to move this issue to final-comment period with the intention to deprecate and remove the Reflect trait. This represents a decision to move away from parametricity; we started down this road when accepting the specialization RFC (where there was much discussion). This is also relevant to mem::discriminant (rust-lang/rfcs#1696).

Note: This final-comment-period lasts for (roughly) the current release cycle, which began on Aug 18, 2016.

On the general topic of parametricity, there have been numerous comments. In an effort to summarize the conversation, I wanted to link to some of the most salient exchanges:

  • The Reflect bound was being used for the discriminant_value intrinsic (#24263) and was removed for the proposed mem::discriminant alternative (https://github.com/rust-lang/rfcs/pull/1696):

    • the bound existed because "discriminant-value" allowed one to extract information (the index of the variant) not normally available from a value of type T (where T is a type parameter), as explained by @glaebhoerl here

    • also in that same comment, @glaebhoerl explains that because specialization is not yet stable, it would perhaps be "improper" to consider parametricity removed. (I was sympathetic to this POV, and hence the decision to initiate a formal decision to remove Reflect.)

  • The specialization RFC was where most discussion about parametricity took place.

Reviewing this thread did reveal one interesting thing to me: the opt-in proposal still relied on the Reflect trait being discussed here, but it made the trait into an implicitly present bound, much like Sized, as opposed to the current bound (i.e., one would have to write T: ?Reflect to avoid a T: Reflect bound). The intention was that it could be introduced backwards compatibly.

Also, @eddyb has pointed out a few times that the current Reflect trait has the interesting property that individual types can, in principle, "opt-out" of reflection, though that has no effect on specialization.

@nikomatsakis Do you mean specialization can "remove" the requirement? Because that's unsound (or rather, it would be for Send and Sync).

@eddyb

Do you mean specialization can "remove" the requirement? Because that's unsound (or rather, it would be for Send and Sync).

What I meant is that the specialization RFC does not mention the Reflect trait -- hence one can "downcast" during monomorphization without regard for whether code has opted in or out of Reflect.

I am not sure of what relationship you see between Reflect and Send and Sync though, can you elaborate? Am I forgetting about something?

@nikomatsakis Ah, sure, I was talking about a Reflect bound, e.g. on Any - specialization shouldn't be able to work around the fact that I _denied it_ for my type.
Now the usefulness of that feature is orthogonal, but it is there.

This represents a decision to move away from parametricity; we started down this road when accepting the specialization RFC (where there was much discussion).

I'm curious what this means in the larger picture, not specific to this tracking issue.

@withoutboats In a nutshell, it means that Rust will not attempt to ensure traditional parametricity for type arguments (i.e. that generic code behaves "equivalently" when instantiated with different types). This is already formally the case due to the ability to get the size of a type parameter, but we're opening the door to observable, behavioral differences under different type arguments.
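The existing loophole mentioned above can be demonstrated on stable Rust: even without specialization, a fully generic function can observe a property of its type argument, namely its size. A minimal sketch (the function name is illustrative, not from the thread):

```rust
use std::mem::size_of;

// Even a fully generic function can observe something about T:
// its size. So "traditional" parametricity already does not hold.
fn report_size<T>() -> usize {
    size_of::<T>()
}

fn main() {
    // Different type arguments yield observably different results.
    println!("{} {}", report_size::<u8>(), report_size::<u64>());
}
```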

That means we will permit language features that let you write a function like:

fn print<T>(t: &T) { ... }

which e.g. prints its argument using Display if available, otherwise Debug if available, otherwise prints a generic message. Whereas in today's Rust, you might assume that such a function must behave more "uniformly" for all T (and in fact that it couldn't interact meaningfully with its argument).

The tradeoff is:

  • Benefit: additional expressiveness, of course. In particular, in the specialization thread I and others argued that this kind of "static reflection" is essential for reaching true zero-cost abstraction. A possible counter-argument: it could be treated as an unsafe feature, where marking as safe requires ensuring that there's no "observable" difference in behavior, just better performance. But of course there are use cases like print above where different behavior is the whole point. See the specialization thread for more.
  • Downside: the signature of a function tells you less about that function's behavior. That is, you can make certain assumptions about a signature like print in Rust today, e.g. that it cannot access anything "inside" its argument -- and that reasoning would be invalidated. Counter-argument: in almost all cases you need a contract for a function that goes beyond what the types say anyway; if this kind of lack of reflection is important, just make it an explicit part of the function's contract. In particular, the vast majority of generic functions will still behave parametrically, and those that don't are likely to be fairly obvious or otherwise marked.

Note that all of the above is specific to _type_ parameters. _Lifetime_ parameters, on the other hand, are intended to remain parametric -- e.g. you can't change the behavior of your function depending on whether a given lifetime is 'static or not. That's important for functions like Ref::map.
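For context, Ref::map is the kind of API that depends on lifetime parametricity: the closure can only project a reference derived from its argument; it cannot branch on, or do anything special with, the borrow's lifetime. A small stable-Rust sketch (the helper function is illustrative):

```rust
use std::cell::{Ref, RefCell};

// Ref::map projects a dynamic borrow down to one component while
// keeping the RefCell marked as borrowed. Soundness relies on the
// closure being parametric over the borrow's lifetime.
fn second_field(cell: &RefCell<(u32, String)>) -> String {
    let s: Ref<String> = Ref::map(cell.borrow(), |pair| &pair.1);
    s.clone()
}

fn main() {
    let cell = RefCell::new((1, String::from("hello")));
    println!("{}", second_field(&cell));
}
```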

That means we will permit language features that let you write a function ... which e.g. prints its argument using Display if available, otherwise Debug if available, otherwise prints a generic message.

In my opinion, strict parametricity is not so important, but explicit, algorithmic reflection is almost always a band-aid over a wrong abstraction, which results in increasingly unmaintainable spaghetti. Having non-parametricity determined automatically during monomorphization (i.e., specialization) is fine, in my opinion, but making explicit downcasting a normal and commonly used part of the language would be a very negative change. The fact that it's so hard to do this is one of the language's key selling points to me.

That is, I'm comfortable with this:

default fn print<T>(t: &T) { ... }
default fn print<T: Debug>(t: &T) { ... }
default fn print<T: Display>(t: &T) { ... }
fn print<T: Debug + Display>(t: &T) { ... }

But I'm uncomfortable with something like this:

fn print<T>(t: &T) {
    if T::implements::<Display>() { ... }
    else if T::implements::<Debug>() { ... }
    else { ... }
} 

Partly this is an intuitive judgment, but some obvious advantages of the first:

  1. The compiler can automatically check your specialization hierarchy - you have to think explicitly about "what if T: Debug + Display?" and "what if T: !Debug + !Display?" or your code won't compile.
  2. No conditional thickets.

making explicit downcasting a normal and commonly used part of the language would be a very negative change

I entirely agree. (I think there are some occasional uses for it, but it should be a very niche part of the Rust programmer's toolkit.)

But that's not what's being debated here. The question is whether we retain what you call "strict parametricity".

But that's not what's being debated here. The question is whether we retain what you call "strict parametricity".

Cool. I guess since this trait doesn't impact specialization, I thought Niko's comment referred to some other plans.

On further, ahem, reflection, I think I understand your concern, @withoutboats.

Reflect is currently mostly tied to our one downcasting facility, and so it might seem to act as a speedbump for doing downcasting. While that's true, the _intent_ was actually to retain some form of "strict" parametricity, by forcing you to signal when even "static reflection" was occurring.

So we've been talking about this in terms of strict parametricity, but we should perhaps think about its implications for encouraging downcasting.

Given the way the setup currently works, I don't actually think the Reflect trait provides much of a speedbump for using downcasting. Basically all concrete types are automatically Reflect anyway. So this just acts as a signal that you're doing something connected with reflection (static or dynamic) and generic type parameters. In practice, people end up bounding by Any anyway.
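For reference, this is what the Any-based dynamic downcasting mentioned above looks like on stable Rust. The function and its cases are an illustrative sketch, not code from the thread:

```rust
use std::any::Any;

// Dynamic downcasting via Any: each downcast_ref is a runtime
// TypeId check, and unhandled types silently fall through to the
// catch-all arm -- the "fragile code" concern raised above.
fn describe(value: &dyn Any) -> String {
    if let Some(n) = value.downcast_ref::<i32>() {
        format!("an i32: {}", n)
    } else if let Some(s) = value.downcast_ref::<String>() {
        format!("a String: {}", s)
    } else {
        "something opaque".to_string()
    }
}

fn main() {
    println!("{}", describe(&42i32));
    println!("{}", describe(&String::from("hi")));
    println!("{}", describe(&3.5f64));
}
```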

I agree, and I think deprecating Reflect is probably the right path. I think the main thing Reflect does is make downcasting more confusing, because it's another trait whose role you have to understand. One of the big problems with downcasting is that it can be confusing, so if anything this exacerbates the problem rather than acting as a speedbump.

I was just asking about whether or not Niko's comment indicated there were other plans that would encourage the use of Any reflection.

To be clear, AFAIK you can't quite implement @aturon's print because the impls for T: Debug and T: Display conflict (as @withoutboats said, "what if T: Debug + Display?"). So it's not throwing out all checks.

@durka Yep, you need something like "lattice impls" to make it work.

I came up with this.

@durka There's a proposed extension to specialization that would make it work, what @aturon referred to as 'lattice impls,' which would allow you to have diamond specializations like this as long as you've completely accounted for the area of overlap with a _more specialized_ impl.

That extension seems fine to me - as long as you are forced to make an explicit decision about every possible case, specialization should be as flexible as possible. The problem with something like downcasting is that you aren't required to think about the case, so if you don't think about it the answer depends on the order you wrote the conditionals in.

That is, I don't like that these two implementations behave differently. I think that polymorphism should be code order independent. (existing Any downcasting doesn't have this problem because Rust doesn't have subtyping between 'static types).

fn print<T>(t: &T) {
    if T::implements::<Display>() { ... }
    else if T::implements::<Debug>() { ... }
    else { ... }
} 
fn print<T>(t: &T) {
    if T::implements::<Debug>() { ... }
    else if T::implements::<Display>() { ... }
    else { ... }
} 

@withoutboats I did not mean to imply that we had particular plans to encourage downcasting and I share your distaste for it. Or, more explicitly, I tend to prefer downcasting when it is used with other features that encourage exhaustive handling (e.g., a match statement downcasts an enum, but it requires you to write a _ case; same with specialization, which requires you to write what happens for all T). Ultimately this doesn't stop people from writing fragile code of course, but I find it a helpful reminder (also during code reviews). (I'm not keen on the overuse of if let for similar reasons, since I view that as effectively a non-exhaustive downcast.)

The libs team discussed this at triage and we approve deprecation.

The lang team has likewise settled on deprecation. Not much new has emerged since FCP was announced; you can read a summary of the thinking in @nikomatsakis's comment (and the one below it) and a bit more detail in my comment.

I admit I have no horse in this race since I haven't gotten to use Rust for any projects, but I do follow the theoretical side of Rust. This discussion of introducing typecase puzzles me, particularly because Rust already supports ad-hoc overloading via traits, so why introduce yet another way to overload, and via a mechanism that is more limited to boot, i.e., typecase is closed where traits are open?

With impl specialization seemingly merged, zero-cost open overloading already seems available, so what's the real use for typecase, aside from reducing a little typing by avoiding trait and impls?
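The "open overloading via traits" alternative mentioned here can be sketched on stable Rust. Names here are illustrative; the point is that each type opts in with its own impl, dispatch is resolved statically at monomorphization, and the set of participating types stays open:

```rust
// Open overloading via a trait: any crate can add an impl for its
// own type, and there are no runtime type checks.
trait Describe {
    fn describe(&self) -> String;
}

impl Describe for i32 {
    fn describe(&self) -> String { format!("the integer {}", self) }
}

impl Describe for &str {
    fn describe(&self) -> String { format!("the string {:?}", self) }
}

// Statically dispatched: each instantiation calls its own impl.
fn print_it<T: Describe>(t: &T) -> String {
    t.describe()
}

fn main() {
    println!("{}", print_it(&5));
    println!("{}", print_it(&"hi"));
}
```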

Any is dynamic, while impl specialization is fully statically dispatched.

You can already use TypeId for compile time ad hoc special casing depending on type.
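A sketch of what this TypeId-based special casing looks like (the function is hypothetical). Each comparison is against a compile-time constant for a given monomorphization, though note this pattern has exactly the closed, order-dependent shape criticized earlier in the thread:

```rust
use std::any::TypeId;

// Ad hoc special casing on the concrete type via TypeId. Requires
// T: 'static, and unhandled types fall through to the last arm.
fn kind_of<T: 'static>() -> &'static str {
    if TypeId::of::<T>() == TypeId::of::<i32>() {
        "i32"
    } else if TypeId::of::<T>() == TypeId::of::<String>() {
        "String"
    } else {
        "unknown"
    }
}

fn main() {
    println!("{}", kind_of::<i32>());
    println!("{}", kind_of::<String>());
    println!("{}", kind_of::<f64>());
}
```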

@eddyb, if that was a reply to me, then:

  1. dynamic type checks seem to violate the desired zero-cost abstraction principle
  2. dynamic type class dispatch is possible in Haskell via existentials, so I can't imagine Rust isn't able to express something similar

Rust supporting dynamic multidispatch and specialization at the language level (2) would cause 1.
At this moment Any is a library construct based on a _static_ TypeId primitive.
There is no dynamic type checking in the language, only barebones trait objects (existentials over Self).
You shouldn't use something like Any in Rust as anything other than a last resort.

Yes (2) implies (1). An if-chain discriminating on TypeId also implies (1), but at least a) you didn't introduce a new means of overloading, b) the fact that the behaviour is overloaded is now visible in the trait constraint, c) the specialization is now extensible to other types besides the "blessed" closed family of types.

What am I missing?

I don't understand, are you arguing _against_ specialization as a whole?
You can use specialization to hide _any_ trait constraint you want.
Including, uhm, Reflect itself, so someone can make their own unconstrained Any.

Will it affect rustc-serialize?

@e-oz Can you elaborate on the question? Reflection in Rust allows some dynamic type checks, but it does not provide any structural information, such as enumerating struct fields, which I imagine could be what you mean?

@bluss I didn't know it and thought rustc-serialize does use reflection. Thanks for clarification :)

Ah the deprecation here has hit stable, so this issue is resolved!

I'm a bit confused. feature_gate.rs still lists reflect as "active", aka, non removed. Also, it lists this issue as its tracking issue, but this tracking issue claims that Reflect got removed. What to do?

I'm asking because of issue #39059.

@est31 the trait itself was deprecated but not removed. As it was never stable, we _can_ remove it in Rust 1.x. And it was deprecated in 1.14, so maybe we can consider removing it in 1.16.

But I think the feature gate stays "active" until then, as you can still mention Reflect if you put #![feature(reflect_marker)] #![allow(deprecated)] in your crate. Someone correct me if I'm wrong.

If we're going to remove Reflect and the feature entirely (I support doing that) we should do a crater run to see what impact it has.

@withoutboats I've created a PR to do such a crater run on: #39075

(I don't have any opinion on whether to remove Reflect or not).

I started a crater run on @est31's PR.

crater report by @brson on the PR above.

I too am confused on why this issue is closed. I'm going to open it. =) I think we can remove Reflect altogether.
