Tracking issue for https://github.com/rust-lang/rfcs/pull/1561.
Roadmap: https://github.com/rust-lang/rust/issues/35896#issuecomment-277870744.
cc @nrc @jseyfried
One question I don't think was ever answered on the RFC thread is whether providing a way for current macros to opt in is feasible.
Just throwing this out there to make sure it doesn't get forgotten. My vote is obviously for yes we can, but I'm also obviously not on a team, or even an active contributor beyond these discussions.
@camlorn this was actually addressed in my final edit to the RFC before merging. To summarise, it might be possible but we need implementation experience to be sure and to work out exactly how it might work. So, basically punting on the issue for now. It is certainly an area where I think we need to tread extremely carefully.
c.f. https://github.com/rust-lang/rust/pull/37732#issue-188905131
- [x] $crate with inter-crate re-exports (PR #37463).
- [x] #[no_link] crates (PR #37247).
- [x] Invoking macros by path (path::to::mac!();) (PR #36662).
- [x] Groundwork in syntax, resolve, and metadata (PRs #36438, #36525, #36573, #36601, #36616, #37036, #37213, #37292, and #37542).
- [x] #![feature(use_extern_macros)] (PRs #37732 and #38082).
- [ ] Allow use to shadow macros from the prelude; currently an ambiguity error (PR #40501).
- [ ] rustdoc, issue #39436 (PR #40814).

@jseyfried What's the status of this? Seems like the two items without checks have had their PRs merged. AFAIK the RFC was for this to be in prep for the next iteration of declarative macros, but some of the PRs seem like we could enable it for macro_rules! macros. Is that so? What's the migration story (potential breakages?) etc?
@jseyfried @nikomatsakis I'm also interested in the status of this. Can it be activated for 1.19?
I don't have a good feeling for current status. @jseyfried?
@withoutboats
some of the PRs seem like we could enable it for macro_rules! macros. Is that so?
We can enable it for #[macro_export] macro_rules! in extern crates (and derive procedural macros in extern crates) but not crate-local macro_rules! (since crate-local macro_rules!'s scopes are too different from items' scopes).
This works today with #![feature(use_extern_macros)], which is implied by #![feature(proc_macro)] and #![feature(decl_macro)] (the latter has not yet landed).
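To make the distinction concrete, a minimal sketch under those assumptions (a nightly compiler with the feature gate; the crate name `upstream` and the macro names are hypothetical):

```rust
#![feature(use_extern_macros)]
extern crate upstream;

// Works: `exported_macro` is a #[macro_export] macro_rules! macro defined in `upstream`.
use upstream::exported_macro;

macro_rules! local_macro { () => {} }
// Not supported: crate-local macro_rules! macros don't get item-like scopes,
// so they can't be imported with `use`.
// use self::local_macro;

fn main() {
    exported_macro!();
    local_macro!(); // still usable through the legacy textual scoping
}
```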
@brson @nikomatsakis
IIUC, we reached consensus on the current behavior of #![feature(use_extern_macros)] in the February design sprint.
That being said, supporting use for macros from extern crates but not for crate-local macro_rules! might be confusing for end users, especially since there's no way to define a crate-local macro that can be imported with use until declarative macros 2.0 is stable.
I don't have a strong opinion on whether we should stabilize this now, wait for more experience with declarative macros 2.0 and then stabilize, or only stabilize when we stabilize declarative macros 2.0.
We can enable it for #[macro_export] macro_rules! in extern crates (and derive procedural macros in extern crates) but not crate-local macro_rules! (since crate-local macro_rules!'s scopes are too different from items' scopes).
Is it impossible to make crate local macro_rules! macros work with this? Probably we'd need some kind of syntactic change to avoid breakages? (Maybe just a visibility modifier?)
I think declarative macros 2.0 is pretty far off, because there are going to be many design decisions we'll be able to reconsider with the new syntax, and that's going to take time to work through. It'd be nice to support this feature sooner than that.
Is it impossible to make crate local macro_rules! macros work with this? Probably we'd need some kind of syntactic change to avoid breakages? (Maybe just a visibility modifier?)
It is possible with a visibility modifier (e.g. pub macro_rules! m { () => {} } could have item scope), but then:
- macro_rules! m { () => {} } (without the visibility modifier) would keep the legacy scoping, so the two forms would behave quite differently,
- pub custom_macro! { ... } (a visibility modifier on an arbitrary macro invocation) would raise similar questions,
- in general, pub macro_rules! would be a strange special-case,
- it would risk making macro_rules! "good enough", reducing macros 2.0 adoption.

I'm personally not concerned about making macro_rules "good enough" - I think there will be enough improvements over the current system in macro macros, and I'm not against just straight up issuing deprecation warnings if you write a macro_rules macro someday. And I'm also alright with the special case.
However, I want the story about when macros use normal namespacing and when they don't to be easy to understand. I think a solution could be to introduce a new attribute which turns this scoping on, and require it (for both local and macro_export'd macros).
Later on we could introduce an attribute to turn old-school macro scoping on, and if we do the epoch shift, we could change which is the default.
@withoutboats Yeah, that makes sense, but I'm not sure it's worth the complexity of another dimension due to making namespacing orthogonal to macros 1.0 vs macros 2.0.
In general I'm opposed to blurring the lines between current macros and new macros. The 'good enough' argument is part of this - I fear that at some point macros 1.0 gets good enough that there is push back against further changes (I don't believe we can simply deprecate if the community doesn't want it). To some extent I think the macros 2.0 changes are a mixture of sugar and medicine - if we add all the nice changes to macros 1.0 (naming, syntax) then it makes it harder to sell the necessary changes which are left (e.g., hygiene).
A bigger worry for me is confusion for users - there is already confusion between old decl macros, new decl macros, procedural macros, etc. I worry that adding more layers to this - old macros with feature X, old macros with feature Y will make this even more confusing. On a technical level, supporting multiple versions of macros with different features opens the gates to many more bugs in the combinations of features which might be less tested.
I don't understand all the parts at play here, but I want to be able to re-export macro_rules macros from other crates in stdx.
@brson The conclusion of our lang team discussion was that we probably aren't going to migrate to this until macros 2.0.
So right now using the macros of a different crate works by doing:
#[macro_use] extern crate foo;
macro_from_foo!(...);
However, with RFC 2126, extern crate is being brought onto the path of deprecation, meaning that there will be lints for it and maybe even hard errors in a future epoch.
Now the epochs RFC has made one thing very clear: inter-epoch interop is always guaranteed, basically allowing a newer crate to use the older crate's functionality. For macros I've heard assurances that there will be "epoch hygiene" or something.
But how will this apply to #[macro_use]? I see the following options:
1. Keep allowing extern crate foo if there is a #[macro_use] next to it. I don't think this is a good idea however, as it would be the legacy syntax that one wants to get rid of, and it would look weird to have a bunch of #[macro_use] extern crate foo; statements inside your lib.rs
2. #[macro_use] use cratename::module;
3. or macro_use! {cratename::macro_name}.

Regarding points 2 and 3, @withoutboats has expressed that there is a desire to make idiomatic code of the future epoch legal in the current epoch, so we need to do those backwards compatibly inside the current epoch.
cc @nrc @aturon
@est31
- Implement the macro naming and modularisation system for macros 1.0 as well.
This is already implemented across crate boundaries. #[macro_export] macros from an upstream crate appear in the upstream crate's root. For example,
// crate foo
#[macro_export]
macro_rules! macro_from_foo { () => {} }
// crate bar
#![feature(use_extern_macros)]
// ^ implied by `#![feature(proc_macro)]`, `#![feature(decl_macro)]`
extern crate foo; // no #[macro_use]
use foo::macro_from_foo; // `pub use` also works, subsumes `#[macro_reexport]`
macro_from_foo!();
In other words, generally speaking you can replace
#[macro_use]
extern crate foo;
with
#![feature(use_extern_macros)]
extern crate foo;
use foo::{macro_rules_macro_1, macro_rules_macro_2, ...};
We talked about this somewhere before @jseyfried, but I'll remind us again that macro reexporting is stable when doing it with a glob use. Crate frunk already relies on it.
I don't know if it is the right place to mention this, but here's what I would like to achieve:
I'd like to have the possibility to put macro definitions in an impl to be able to do Struct::macro!().
My use case (in my crate tql) for that is to generate macros in a custom derive (i.e. SqlTable) and use these generated macros from another macro (sql!()).
Without this feature, that would require the users to use macros that are not in code they wrote.
Is it planned to add the ability to declare macros in an impl block, or is there any alternative for this use case?
Thank you.
@antoyo
I'd like to have the possibility to put macro definitions in an impl to be able to do Struct::macro!().

Implementing this would require dramatically restructuring the compiler, and isn't planned in the foreseeable future. Also, I believe it is an unresolved question whether we can do sound type checking while macros are still being expanded (e.g. issues with coherence checks).
My use case (in my crate tql) for that is to generate macros in a custom derive (i.e. SqlTable) and use these generated macros from another macro (sql!()).
This is why we have hygiene :)
Without this feature, that would require the users to use macros that are not in code they wrote.
The point of hygiene is that names at the macro definition resolve at the macro definition, independently of where the macro is invoked. Similarly, names passed in to the macro as arguments resolve at the call site, e.g. they can't be accidentally shadowed by a name at the macro definition. For example,
#![feature(decl_macro)]
mod foo {
pub fn f() {} // (1)
pub macro m($arg:expr) {
f(); // This resolves to (1)
mod bar {
fn f() { $arg }
}
}
}
fn main() {
fn f() {} // (2) (note -- no conflict error with (1))
foo::m!(f()); // The `f` argument resolves to (2) even though $arg is in a weird place
}
Roughly speaking, hygiene causes a macro m($arg:expr) { ... } to resolve like a corresponding fn m(arg: T) { ... } would.
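For readers who want that analogy spelled out, here is a minimal sketch using plain functions (ordinary stable Rust, nothing hypothetical): names in the body resolve where the fn is defined, while the argument expression is resolved at the call site before being passed in.

```rust
mod foo {
    pub fn f() {}              // (1)

    // Roughly corresponds to `macro m($arg:expr)`: the body sees (1),
    // while `arg` is whatever the caller's expression already meant.
    pub fn m(arg: ()) {
        f();                   // resolves to (1), the definition site
        let _ = arg;
    }
}

fn main() {
    fn f() {}                  // (2)
    foo::m(f());               // this `f` resolves to (2)
}
```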
This also applies to procedural macros. For example,
#[proc_macro_derive(Foo)]
fn foo(_input: TokenStream) -> TokenStream {
quote! {
// Due to hygiene, these names never cause conflict errors.
extern crate sql_macros;
use sql_macros::sql;
... sql!(Struct) ...
}
}
Annoyingly, we can't declare the extern crate sql_macros; use sql_macros::sql; in the proc-macro crate, since items in the proc-macro crate are compiled for the host platform. The target platform can only see the resulting procedural macros, not the underlying functions (vice versa for the host).
Once the Cargo.toml for procedural macro crates supports declaring target dependencies in addition to today's host dependencies, the target dependencies will automatically be in scope inside quote!.
This was discussed in the recent work week with an eye towards how we might be able to stabilize slices of macros 2.0 in the Rust 2018 edition release. When we discussed this in a group it was concluded that as far as we knew this feature was working as intended.
While there are known "oddities" with respect to how macro_rules! works locally within a crate, it was decided that this is an important enough feature that it should be ok to stabilize with such behavior.
@rust-lang/lang, could I convince one of y'all to enter into FCP on this issue? In terms of timeline I'd like to ensure that a chunk of macros 2.0 (not all of it) stabilizes all at once, so if this passes through FCP and the other pieces fall through then I think we won't stabilize this issue, but if other pieces come through as well I think we'll stabilize this.
@rfcbot fcp merge
Let me clarify first what feature we're stabilizing, as I understand it:
extern crate serde_derive as serde;
#[derive(serde::Serialize)]
struct Foo { }
extern crate serde_derive;
use serde_derive::Serialize;
#[derive(Serialize)]
struct Foo { }
macro_rules! macros from other crates with the #[macro_export] attribute can be imported using normal imports. I do not know if they are at the location they are declared at or in the root of the crate (someone clarify).

extern crate foo;
use foo::bar;
bar! { ... }

As a result of this, the #[macro_use] system can become a legacy syntax, replaced by using normal path imports for macros from other crates. This makes macros from other crates act more like any other item, and it is useful for making extern crate a legacy syntax as well.
Team member @withoutboats has proposed to merge this. The next step is review by the rest of the tagged teams:
No concerns currently listed.
Once a majority of reviewers approve (and none object), this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up!
See this document for info about what commands tagged team members can give me.
@withoutboats indeed! That is my understanding as to what we're stabilizing as well :)
@withoutboats
macro_rules! macros from other crates with the #[macro_export] attribute can be imported using normal imports. I do not know if they are at the location they are declared at or in the root of the crate (someone clarify).
They show up at top level of the crate:
// crate foo
extern crate bar;
use bar::baz; // works
use bar::bazmod::baz; // errors
fn main() { baz!() }
// crate bar
pub mod bazmod {
#[macro_export]
macro_rules! baz { () => { println!("hello world!"); } }
}
Sounds good, that's what I expected since it's where they show up in docs.
I tried this feature now; generally it's very nice with simple macros, but it seems to be unusable with macros that expand to other macros, because one needs to manually import those as well, even though they can be considered implementation details. So macros don't, at the moment, "lexically close" over references to other macros, right? I.e. they expand unhygienically. Is this an expected thing?
Do macros that use the new use syntax "close over" the macros they expand into?
Yes, macro_rules! is known to be unhygienic. Fixing this is one of the goals of declarative macros 2.0: https://github.com/rust-lang/rust/issues/39412
As extern crate will be removed from the language in the 2018 epoch (I think it will??) we need a replacement for #[macro_use]. Stabilizing this is one way to do this. :+1:
So is the following true: macro_rules! macros always expand without hygiene, so it would be possible to have foo! { "yahoo" } that expands into bar! { "yahoo" }, where bar is not imported by, or even a dependency of, the crate defining foo!, so that what bar! expands into is decided at the call site, depending on what the caller has imported?
I was thinking whether it would have been possible to backward-compatibly enable a limited version of "macro import hygiene" in the cases where the new use syntax is used. However, if the expansion is done completely in the context of the caller, it seems hard, because the macro authors may have written their macros with the expectation that their macro expands to some other macro that is not even in scope at the definition site, so it can't be "closed over".
However, in the vast majority of the cases, macros that are implementation details are defined in the same crate as the user-facing macros, so they are in scope at the definition site AND at the call site. The use syntax causes them to no longer be in scope at the call site, which is the whole point of the feature, but it would improve ergonomics if there were a hygienic "closed over reference" fallback for the old-style macros imported in the new way, in case the macros they expand into are not in scope at the call site. That would enable people to actually use the new import syntax.
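A minimal sketch of the failure mode being described, with hypothetical crate and macro names (the downstream half is shown in comments because it lives in a separate crate):

```rust
// crate `upstream`
#[macro_export]
macro_rules! bar { ($msg:expr) => { println!("{}", $msg) } }

#[macro_export]
macro_rules! foo { ($msg:expr) => { bar!($msg) } } // `bar!` is resolved at the call site

// crate `downstream`
// #![feature(use_extern_macros)]
// extern crate upstream;
// use upstream::foo;        // only `foo` is imported
//
// fn main() {
//     foo!("yahoo");        // error: cannot find macro `bar!` in this scope
// }
```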
@golddranks unhygienic expansion is a feature of macros 1.0 at this point, and so this feature, for macros 1.0, would basically be inheriting that
Thanks; I see. So, likewise, the expansion failing in the absence of the macros in the call-site scope can be considered a feature, as odd as it sounds?
Indeed yes, it's not a feature we'd choose to have but it was a tradeoff for 1.0 stabilization
:bell: This is now entering its final comment period, as per the review above. :bell:
How do macro! and macro_rules! interact? I fear that since there's already macro_rules! everywhere, this won't get anywhere unless either:
(P.S. Where is the rfcbot icon from? That looks really cool!)
We’re not doing Rust 2.0, macro_rules! is here to stay. It’s ok if a lot of existing code doesn’t migrate. The point is providing a better alternative for new code (or code that has someone interested in refactoring/rewriting it).
@golddranks' issue sounds to me like a good case for not being able to use macro_rules! macros, as it puts them in an entirely new context that the author could not have anticipated. (but the new proc_macros are okay, since they have to be referred to by path, and the author can clearly see from day one that their generated invocations of other macros defined in the same crate do not work without adjustment)
Edit: Turns out #[macro_use(foo)] is a thing, and therefore this problem is already faced by macro_rules macros today.
So, how macro_export works.
If we have two crates - library and main - then:
- The library crate is traversed in unspecified order and all legacy macro items (macro_rules) marked with macro_export, and macros reexported with macro_reexport, are collected into a vector (so the vector can contain duplicated names). The order is generally determined by depth-first search, but I suspect things can become more complex with macro expansion.
- These collected macros then appear in library's root module from main's point of view.
's point of view.What I think we should do before stabilizing use
of legacy macros from other crates:
macro_reexport
, it's subsumed by use
since Nov 2016 (https://github.com/rust-lang/rust/pull/37732). PR is submitted: https://github.com/rust-lang/rust/pull/49982.macro_export
ing two macros with the same name from a crate (code doing this must be a mistake because one of the exports is lost in process). Duplicates macro
/use
vs #[macro_export] macro_rules
are already prohibited.Stabilization of use_extern_macros
will also collaterally stabilize paths with >1 segment in attributes (#[a::b::c]
).
Resolution for such paths is total mess and the rules may change with https://github.com/rust-lang/rust/issues/44690, so I'd prefer to keep them unstable for now (the workaround is to use use a::b::c; #[c]
).
Thanks for taking a look @petrochenkov! All of removing macro_reexport, prohibiting duplicate macro_export, and only allowing one-segment paths in attributes sounds great to me.
Sounds good to me too. Hopefully we can get multi-segment paths in attributes working soon, but I don't think we should either block this on that or stabilize it before it's ready.
The final comment period is now complete.
Ok great! I'm going to hold off on any actual stabilization here until the rest of Macros 1.2 is finished in FCP for stable.
Can #[doc(hidden)] be made to work for macros reexported as in https://github.com/rust-lang/rust/pull/37732? Or is this a limitation of current macros and rustdoc?
@spearman
That's a bug, I've seen it while removing macro_reexport but hadn't reported it until now - https://github.com/rust-lang/rust/issues/50647.
Import regression with use_extern_macros: https://github.com/rust-lang/rust/issues/50725.
Needs to be fixed before it's enabled by default.
Will #[macro_use] eventually be phased out? Trying to figure out a sane way to re-export macros that are required for using a public macro. I suppose the idea is to eventually use paths like $crate::path::to::reexported_macro! so that a user never has to explicitly import any reexported macros, but if those macros don't themselves use $crate paths in macro invocations, then further macros need to be imported at the root level, which is where #[macro_use] comes in handy. The alternative would be to put the macro definition together with any reexports into a single module so it can be glob imported. Again if I understand correctly, if all macros correctly use $crate paths to refer to reexports then this won't be necessary and importing a macro by itself is sufficient to use it.
Perhaps it is a good idea to make such non-$crate imports within macro definitions illegal?
In other words: when using use in a macro definition, it would always start with the crate name as a path root.
That way the problem is avoided in the first place. And it's not a theoretical problem either: I've run into issues like this where it's only after trying to use the crate that I discovered that the macro was exported but unusable without extra imports.
@spearman #[macro_use] on cross-crate macro usages is being phased out in favor of use crate_name::macro with this issue, although within a crate you'll still use #[macro_use].
@jjpe perhaps yeah! Although something like that will likely require an RFC.
PSA: if you, like me, are wondering why use will only work for extern macros and not crate-local macros, see https://github.com/rust-lang/rust/issues/35896#issuecomment-299575710.
Seems like I should have raised this last month but sorry I just found this thread: I would be strongly opposed to stabilizing use imports of extern macro_rules macros until we have a solution for private helper macros.
extern crate log;
use log::warn;
fn main() {
// error: cannot find macro `log!` in this scope
warn!("Warning!");
}
extern crate serde_json;
use serde_json::json;
fn main() {
// error: cannot find macro `json_internal!` in this scope
let j = json!({ "k": "v" });
}
Our whole system mostly works for now because of the way #[macro_use] brings in all the macros. Moving toward importing macros individually like items will make everything seem more broken and fragile and will be immensely frustrating for macro users and macro authors compared to the current way.
An additional constraint is that ideally we would solve this in a way that a crate could support macro_use and use at the same time. For example something like telling log::warn! to invoke $crate::log! would not work because that leaves them no way to support old compilers as well as the new use log::warn.
I just found the thread so here is an approach with barely more than zero thought behind it, but just to illustrate one possibility that fits my criteria:
#[macro_export]
#[bikeshed_also_export(__log)]
macro_rules! log {...} // forward to __log
// `use log::warn` brings both `warn!` and `__log!` in scope.
#[macro_export]
#[bikeshed_also_export(__log)]
macro_rules! warn {...} // call __log
#[doc(hidden)]
#[macro_export]
macro_rules! __log {...}
Here as long as bikeshed_also_export is somehow cfg'd away or ignored by old compilers then the same code works equally well both for #[macro_use] extern crate log on old compilers and use log::warn on new compilers.
Tagging @rust-lang/libs because this stabilization would seem to put macro libraries in a tough situation.
Here as long as bikeshed_also_export is somehow cfg'd away or ignored by old compilers then the same code works equally well both for #[macro_use] extern crate log on old compilers and use log::warn on new compilers.
It's not though, is it? Unknown attributes are a hard error. You could just as easily cfg the whole macro definition based on whether you intend to be imported with #[macro_use] or not as you could cfg the attribute you're suggesting, and $crate::log! is much cleaner.
You could just as easily cfg the whole macro definition
I would not say that keeping two parallel macro definitions (one that calls private_helper! and one that calls $crate::private_helper!) is just as easy as a cfg_attr on one line.
Unknown attributes are a hard error.
I am not concerned about this because it seems easy to find a way around. Just for example the following compiles back to rustc 1.0.0:
#[macro_export(also_export(private_helper))]
macro_rules! ...
@dtolnay I share your concern about the new use imports breaking stuff. What do you think about this proposal: https://internals.rust-lang.org/t/help-stabilize-a-subset-of-macros-2-0/7252/18 ?
@dtolnay
I would be strongly opposed to stabilizing use imports of extern macro_rules macros until we have a solution for private helper macros.
I don't see how permitting use for macros is blocked by the private helpers problem.
We are not forcing use and not deprecating #[macro_use] or anything, it still can be used when needed.
(Yes, some adventurous people work on breaking lints like unnecessary_extern_crates, but thankfully they are allow by default.)
At the same time use_extern_macros is a prerequisite for stabilizing proc_macros/proc_macro_attributes that can't be imported with #[macro_use].
#[macro_use] (at least on extern crate) works like a glob in use, but a glob targeting only one namespace - the macro namespace.
Perhaps a more general feature - allowing use items to import only in a selected namespace - will help here as well.
// The syntax is exposition only
use a::b in value; // imports `fn` b, but not `type` b
use a::b::* in macro; // imports all macros from `a::b`
The private helper problem is rather a blocker for recommending use over #[macro_use] in documentation.
@petrochenkov
I don't see how permitting use for macros is blocked by the private helpers problem.
Without a provision for helper macros, I would actively discourage users from using use for macro_rules macros: PSA do not use this feature, do not get in the habit of using this feature, you will be confused, you will be sad, your code will break, authors of the macros you use will be sad, stay away, etc. The fact that I would discourage people from using this feature so strongly is the blocker.
On top of that I would make no effort to support use import of macros that I write because it would be an unreasonable maintenance burden. This is just a continuation of how I already make no attempt to support macro_use(...).
I hear you that we are not forcing use and not deprecating macro_use but if we are going to be saying do it the old way, please please never use the new way then we should not stabilize the new way until the recommendation is different.
@petrochenkov
At the same time use_extern_macros is a prerequisite for stabilizing proc_macros/proc_macro_attributes that can't be imported with #[macro_use].
My objection:
I would be strongly opposed to stabilizing use imports of extern macro_rules macros until we have a solution for private helper macros.
I am concerned only about use of macro_rules macros. Proc macros can generally be factored in a way that does not require further expansion of helper macros.
The private helper problem is rather a blocker for recommending use over #[macro_use] in documentation.

Is this saying we should stabilize use but hope nobody finds out about it...?
@golddranks if that can be implemented, and if we can isolate the behavior to macro_rules macros, then that would be terrific and solves the problem. If I understand correctly, your idea is that just before a "cannot find macro `m!` in this scope" error, and only if the token m originated within the definition of a macro_rules macro, then it should make a last ditch effort to resolve m! within the crate that originated the m token. :+1:
@dtolnay
I am concerned only about use of macro_rules macros.

I think it's possible to stabilize/enable the general macro importing mechanism while keeping imports of macro_rules macros sort of "not working" without a feature gate.
It's not entirely trivial though, e.g. you can't just gate an import if it points to macro_rules, then e.g. use std::panic; would stop working.
#[bikeshed_also_export(__log)]

What do you think about this proposal: https://internals.rust-lang.org/t/help-stabilize-a-subset-of-macros-2-0/7252/18 ?

To clarify my priorities, I'd like to avoid:
- Implicitly bringing in use my_crate::__log; on use my_crate::log; - this makes import resolution more complex.
- Breaking the rule that use a::b; can import only items named b, which would no longer hold if b is a macro with "linked" helpers.
- Having macro_rules fall back to def-site hygiene. (macro_use on crates behaves sorta kinda like an additional prelude, affects only "relative" names (scope-based resolution), i.e. X, but not a::X or ::X, so it doesn't participate in either import resolution or hygiene.)

So, my recommendation would be to:
- Stabilize use_extern_macros, solve the private helper problem with $crate::__log, recommend use over macro_use.
- While older compiler versions need to be supported, #[macro_use] is the only way; on the library author side cfgs are somehow employed to generate both __log and $crate::__log macro paths depending on the compiler (sorry :( ). Or use is discouraged for a few compiler versions, then the switch from __log to $crate::__log happens.

In my opinion your recommendation does not meet the bar for the cross-edition interoperability story that people envisioned. To make a library that works equally well on 2015 and 2018 (as Serde and many other libraries would want to do for some reasonable transition period) we would be telling people:
1. Add a build dependency on [version_check]. The dependency adds 0.7 seconds to compile time.
2. Add build = "build.rs". The build script adds 0.6 seconds to compile time.
3. Use version_check to determine whether the compiler version is sufficiently new to use $crate::private_helper!. The third-party version_check crate implements this by using the std::process API to shell out to rustc --version, then parsing the numbers out of a version string that looks like rustc 1.27.0-beta.5 (84b5a46f8 2018-05-15).
4. If the compiler is new enough, have the build script print "cargo:rustc-cfg=crate-macros".
5. Add a second copy of the macro that calls $crate::private_helper! and tag it with #[cfg(crate_macros)].
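For concreteness, steps 3 and 4 would look roughly like the build script below. This is only a sketch: `crate_macros` is the hypothetical cfg name from the list above, and the exact `version_check` API shown here is an assumption rather than something confirmed in this thread.

```rust
// build.rs (sketch)
extern crate version_check;

fn main() {
    // version_check shells out to `rustc --version` and parses the output;
    // the name and return shape of `is_min_version` are assumed here.
    if version_check::is_min_version("1.29.0") == Some(true) {
        // Enables #[cfg(crate_macros)] on the macro definition that
        // invokes its helper through `$crate::private_helper!`.
        println!("cargo:rustc-cfg=crate_macros");
    }
}
```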
From the library authors' perspective this seems like a tough sell. For comparison, the other approaches in this thread, as tricky as they would be to implement, look like this. Some sort of bikeshed_also_export way:
You make a one-line change
- #[macro_export]
+ #[macro_export(...)]
and your macro works flawlessly through #[macro_use] on all Rust compilers back to 1.0.0 and flawlessly through use on all sufficiently new compilers.
And @golddranks' hygiene way:
You change nothing, and your macro works flawlessly through #[macro_use] on all Rust compilers back to 1.0.0 and flawlessly through use on all sufficiently new compilers.
AFAIK this is a stabilization in both editions; we're talking about people who don't upgrade to the compiler version 1.29.0, not people who don't upgrade to the 2018 language edition. While libraries often support people on old compiler editions, it is a very different thing to require people to upgrade their compiler than to require that they update their code.
The way I mean this is: for some time Serde will want to work equally well on rustc <1.29 (which only supports 2015-style macro_use) and rustc >=1.29 (on which people expect to be able to use 2018-style nice new features like use). I agree that upgrading your compiler and changing your code are different things but telling a library to follow unappealing steps 1/2/3/4/5/6 or else release a breaking change for no other reason than we couldn't figure out macro imports -- seems not great.
@dtolnay it's important to be clear here: with the $crate:: solution, you can continue to use #[macro_use] as long as your compiler is a recent enough stable (1.29 or whatever). What you call the "2015-style" system will still work, because macros can be compatible with both "styles." When you say "a breaking change" I am surprised - I did not know that serde adopted the position that increasing the minimum Rust version it required was a breaking change. Is that the case?
The minimum required compiler version from Serde 1.0.0 through today has always been rustc 1.13. So far we have had no trouble catering to users of new compilers using only 1.13's feature set. This would be the first time in 15 rustc releases that it becomes complicated to support users of new compilers -- which I guess is driving my concern here.
I am nominating for the libs team to discuss how we imagine the library situation playing out over the rest of the year. Around 1.29 or whatever are we expecting most libraries to drop support for compilers older than 1.29, whether through a massive round of breaking changes or by patch versions that aggressively push people to upgrade compilers?
@rust-lang/core may be interested as well: this affects peoples' perception of the stability of the language which may already be a sensitive topic around publicity of the edition.
@dtolnay
If something like https://github.com/rust-lang-nursery/api-guidelines/issues/123#issuecomment-390456962 works, it would be great.
But I wouldn't personally mind a documentation-only solution "my macro library doesn't support importing with non-glob use and macro_use(named)" + possibly internal future-proofing with dummy internal helpers, until the minimal compiler version is bumped to 1.28-1.29 naturally in the next year or two.
Now, let's assume that the combination "supporting use + simultaneously supporting older compiler versions + using same code for both" is absolutely critical and we need a language solution for it, then:
- The solution is restricted to either syntax like macro_export(a, b, c), because it's accidentally accepted by older compiler versions, or no new syntax at all, so we can't solve the problem by some new more generally useful mechanism, like hygiene opt-in for macro_rules (the opposite of https://github.com/rust-lang/rust/pull/47992).

I think the most simple and local solution would be:
- Take a macro invocation my_helper!(...) for which name resolution is determined to fail.
- Look at the hygiene data of the my_helper identifier, from which we can figure out the definition of the macro in which it was originally written, in particular the crate and kind of that macro.
- If that macro is a macro_rules from another crate, we either 1) try to resolve my_helper as $crate::my_helper with that crate, i.e. automatically doing the library author's job from my previous recommendation (simpler), or 2) try to resolve my_helper at the def-site of that macro (may be more complex for macro_rules (as opposed to macro), I'm not sure the necessary infrastructure is in place).
- Possibly restrict this fallback to helpers listed in macro_export(a, b, c). Increases complexity, not strictly necessary.

ping @jseyfried who implemented a couple of similar hacks to support legacy behavior of macro_rules (in case he's still reading messages from github).
Thanks for bringing these points up @dtolnay, always good to know about them regardless of where we are on the stabilization timeline!
To make sure I understand the issue, the point you're bringing up @dtolnay is basically that this code doesn't work today?
#![feature(use_extern_macros)]
extern crate log;
use log::info;
fn main() {
info!("test");
}
If that's the case, that does indeed seem worrisome! I'm not sure, though, that it necessarily implies we should delay this or add more features to macros 1.0. For example we know that all code will continue compiling as-is (as it uses #[macro_use]). The only question is how we actually signal this transition.
So far we've been saying that you should replace #[macro_use] with use krate::macro_name;, but what if we instead suggested the true replacement for #[macro_use], namely use krate::*;? That's actually (modulo namespaces) what #[macro_use] is literally doing today (whether it looks like that or not). While I agree that a glob import is indeed unsightly it's also why we're developing a new macro system!
This to me seems like it leaves us with two downsides:
- A glob import isn't a pretty replacement for #[macro_use], but use log::*; isn't great either. (Maybe that's ok because you can also invoke macros by path, e.g. log::info!(), though?)
- Macro authors still need to update their macro definitions so that an internal log!() is replaced with $crate::log!().
It seems to me that the #[macro_export] solution you're thinking about still has the problem of "all current macro authors must go back and maybe edit their code", right? The main difference, I believe, is that the backwards-compatible solution is much nicer in that you can just list dependent macros.
In other words the delta over where we are today (if we stabilize) is that macro authors who both go back and take a look at their macro-exporting crates while also considering backwards-compatibility don't have to duplicate their definitions. I think though the cost of stabilizing this is still the same?
To me that seems like a worthwhile tradeoff to make. The primary use case for this feature is macros 1.2 which is targeted to be stable at the same time as this feature. It's sort of secondary that we expect macro_rules! to transition to this as well, but I think it's totally fine to basically just delay our messaging here. I think we can "fix" this in a backwards-compatible way in the sense of making authoring macros a bit nicer, but I don't think we should halt or delay the stabilization of 1.2 because macro_rules! isn't so great (as it can't possibly be worse than today, which we're already "happy with").
@dtolnay and I had a chat about this on IRC, and I'll try to summarize here.
Let's say we stabilize this feature in 1.29. There exist popular crates which will maintain compatibility with pre-1.29 compilers, for example log and bitflags. We cannot as-is reasonably transition users of these two crates to using the module system instead of #[macro_use]. One of two possible options seems like a way to avoid this pain:
- Don't stabilize this feature for macro_rules macros. This means that you'd use this feature to import procedural macros, custom attributes, etc. You would not, however, use it to import macro_rules macros. This means that #[macro_use] extern crate foo; is still a thing, an obvious downside.
- Figure out a way for this transition in crates like log and bitflags to get implemented in a "reasonable fashion". This would be along the lines of solutions like those proposed by @dtolnay and @petrochenkov.

Alternatively we could stabilize this feature and simply not announce it. Instead we could wait until a sufficient mass of "popular crates" transition to requiring 1.29 or future compilers, in which case everything is right as rain and we can sound the trumpets at that point.
@alexcrichton So the proposal for "last resort defsite hygiene" isn't considered feasible here? @petrochenkov already said that he considers it to make hygiene too complex (https://github.com/rust-lang/rust/issues/35896#issuecomment-390446792) but I'd like to hear your opinion too.
@golddranks
So the proposal for "last resort defsite hygiene" isn't considered feasible here?

That's item 2 from @alexcrichton's list.
"Last resort to $crate::my_helper" (https://github.com/rust-lang/rust/issues/35896#issuecomment-390477706) should be simpler than "last resort to defsite hygiene" due to subtle differences like this:
// Def-site resolution for `public_macro` would refer to this private non-exported macro.
// This kind of legacy interaction is not currently supported even for `macro` items.
macro_rules! my_helper { ... }
#[macro_export]
macro_rules! public_macro { ... my_helper!() ... }
// Macros with same name can shadow each other in a module.
// `$crate::my_helper` refers to this exported macro.
#[macro_export]
macro_rules! my_helper { ... }
That said, I'd still prefer not doing this.
Ah, I see. Pardon my confusion. Indeed, that is simpler.
Should this currently be working for macros exported by std? Testing the below code gives error[E0432]: unresolved import `std::assert`:
use std::assert as std_assert;
std_assert!(2 == 3);
@Nemo157
https://github.com/rust-lang/rust/pull/48813 made assert built into the language so it's no longer defined by libstd.
@petrochenkov interesting, follow-up question then. Should built-in macros somehow act as if they were exported from libstd/libcore for this feature? It seems that stabilizing this feature would make it impossible to transition any other macros from being real macros into compiler builtins, otherwise any uses like above would stop compiling.
One way might be to keep the macros as real macros that expand into the compiler builtins, something like macro_rules! assert { ($($t:tt)*) => { __builtin_assert!($($t)*) } }. That way the macros themselves would still be able to participate in the normal naming/modularisation scheme.
macro_rules! assert { ($($t:tt)*) => { __builtin_assert!($($t)*) } }
Yes, that's probably something we should do when moving a macro from the library to the language.
Is it correct (forgive me if I am repeating) that if we supported $crate::bar!() as a way to invoke macros, then you could write macros that use private helpers without requiring users to manually import them? (Users would of course have to be using a new enough compiler to support that.)
If so, I definitely feel that just supporting that syntax is a viable solution. If a crate X wants to retain compatibility with older compilers, it means that its consumers just have to use #[macro_use] extern crate the_crate. Suboptimal but not a total disaster.
(I think we should make some effort at promoting community wide "compatibility ranges" when it comes to rustc versions, as well, but that is perhaps a discussion best had elsewhere.)
That said, @nrc and I were talking on discord and I had an idea that I kind of like which I wanted to write down. It's a variation on a proposal that @nrc raised.
TL;DR: We try to make the ability to import a macro via use a new feature which some macro-defining crates will have opted into and others will not have yet done. We do not attempt to "retrofit" existing crates into this model without some opt-in.
The first part is to make it "opt in" to have macros usable via use — existing macros that are not changed would still require #[macro_use] extern crate the_crate; to be used. This means then that crates can choose when to enable this feature based on whether they use helper macros and whether they are ok with requiring that their consumers have a newer compiler (e.g., one that can support the new edition).
Now, how does that opt in look? I'm actually going to spin two variants of this proposal, one more aggressive and one less aggressive, because I don't quite know how hard each one would be to do.
The more aggressive variant
We allow macro_rules! macros to be declared pub. This would be the only kind of macro that is allowed to be declared as pub. Ideally, you would be able to declare it as pub anywhere. This would also mean the macro does not use the default macro_rules! mechanism but instead opts in to the more lexical mechanism -- also within the crate. In other words, it works just like pub macro was meant to work.
(Maybe, in the new edition, we can also make macro_rules! foo without pub work using the lexically scoped rules? We might not be able to write a migration lint for that, though, but the only real problem is in the case of shadowed macros, I guess, and that seems like a corner case that we could detect and require manual intervention? Maybe?)
Now, for compatibility, we can still permit #[macro_export] on public macros. This means that a crate can do something like this:
pub mod macros {
#[macro_export]
pub macro_rules! my_macro { ... }
}
and now folks with a new compiler can do use the_crate::macros::foo or use the_crate::macros::*, but folks with an older compiler can still do #[macro_use] extern crate the_crate;.
This does not solve hygiene. If you have private "hidden" macros like _foo!, they are "observable" to your users, who must either import them manually or use a glob import. I think though that having the option to do use the_crate::macros::* is less .. unsightly than requiring use the_crate::*.
As a bonus, macro-rules within a crate work like all other items, just like we always wanted. Huzzah. (Naturally #[macro_use] on modules would have to be .. deprecated? The interactions here may just be too complex.)
The less aggressive variant
Instead of pub macro_rules!, we could do #[macro_export(pub_use)] to signal that you want this macro to be imported via use and not via #[macro_use]. This would be a breaking change. Perhaps there is some other way to say you want both: #[macro_export(macro_use, pub_use)]. Intra-crate, nothing changes (as today).
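To make the shape of this variant concrete, here is a sketch of how a library might use it. This is the proposed syntax from the paragraph above, so it compiles on no existing compiler; `pub_use` and `macro_use` are the proposal's placeholder attribute arguments.

```rust
// Importable only via `use the_crate::my_macro;` on new compilers:
#[macro_export(pub_use)]
macro_rules! my_macro { () => {} }

// Importable both ways, for libraries that still support older compilers:
#[macro_export(macro_use, pub_use)]
macro_rules! my_other_macro { () => {} }

// Downstream, new compilers:   use the_crate::my_other_macro;
// Downstream, older compilers: #[macro_use] extern crate the_crate;
```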
Observations:
These proposals basically add a new feature: macro imports using use. We consider this the "recommended" way to use macros, but the old way continues. Naturally this means one can support older compilers just by not adopting the new feature (or by adopting both).
There are some problems that are not "fully solved" in some sense -- in particular, if you want to have private helper macros and retain compatibility with older compiler versions, you still have to force your consumers to use #[macro_use] extern crate the_crate;. But that feels ok: it often happens that crates stick to old idioms for some time, and this is just an instance of that (as stated in the TL;DR above).
@nikomatsakis
Is it correct (forgive me if I am repeating) that if we supported $crate::bar!() as a way to invoke macros, then you could write macros that use private helpers without requiring users to manually import them?
$crate::bar!() is already supported with #![feature(use_extern_macros)], you just can't use macros expanding into it in the same crate, but that shouldn't be a big problem for libraries that produce such macros rather than consume them.
EDIT: ... and I'm pretty sure that we can provide support for $crate::my_helper!(...) in the same crate with exactly the same semantics as $crate::my_helper!(...) from other crates without any other changes, if its lack causes too much pain to library authors.
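For reference, the library-side pattern this enables is the one discussed throughout the thread: the public macro reaches its helper by path through `$crate::`, so users only ever import the public name. A minimal sketch with hypothetical macro names (expanding it downstream requires a compiler with use_extern_macros support):

```rust
#[doc(hidden)]
#[macro_export]
macro_rules! __log { ($($args:tt)*) => { println!($($args)*) } }

#[macro_export]
macro_rules! warn {
    // The helper is invoked by path, so a downstream crate only needs
    // `use the_crate::warn;` and never has to import `__log` itself.
    ($($args:tt)*) => { $crate::__log!($($args)*) }
}
```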
I'm not quite sure about the motivation for the proposal in https://github.com/rust-lang/rust/issues/35896#issuecomment-391856772.
It adds some features and edition-breaks macro_rules!, but doesn't actually change the situation with @dtolnay's concern - #[macro_use] still has to be used for some time and use still can't be used everywhere for some time - this is already true without any changes.
Ok I had a bit more discussion with @nikomatsakis on discord about his previous proposal and some things I wanted to write down...
The crux of the problem here is pre-1.29 compatibility. AFAIK all features of use_extern_macros work great, and the problem only arises when a crate wants to support using its macros via use krate::foo and also work with #[macro_use] extern crate krate; for pre-1.29 compilers.
Now this aspect of crates is actually more far reaching than just pre-1.29 compatibility. For example the whole feature here is using macros from other crates, ones that you're possibly not writing yourself. In that situation the upstream crate may have a different development policy than you, supporting different versions of the compiler. This can run the risk of having a network effect where popular crates like bitflags and log may be laggard in enabling usage of their macros via the module system and specific paths (use log::info; vs use log::*;).
The first question then is whether it's a goal of this issue to support this pre-1.29 compatibility use case. It's not an easy one to achieve, but the cons of not supporting this are:
- Crates that want pre-1.29 compatibility can't simply switch their helper invocations to $crate::foo!() but must use compiler version detection at build time to select one of two definitions of the macro. This duplication can also be a maintenance burden. All in all, this con is that it's likely pretty few crates which want pre-1.29 compatibility are likely to be usable with macros and the module system.
- If popular crates (like log, bitflags, lazy_static, etc) do not support the module system then this feature may feel "incomplete". For example you'll have to remember which crates to use the module system with and which crates to use #[macro_use] with, and we'll risk being in a weird transition period for a few cycles (maybe longer?)
which will work very close to what #[macro_use]
does today.
So ok, let's say we do want to consider pre-1.29 compatibility an option. So far it sounds like there's two plausible (if not-so-fun-to-implement) solutions:
- Something like #[macro_export ( ... )] to add annotations necessary to get use of the macro working
- A resolution fallback along the lines of @petrochenkov's proposal above, where a helper invocation that fails to resolve at the call site is retried as $crate::helper! in the crate defining the macro
system for 1.29, but we'll probably want to continue to stabilize procedural macros and attributes.
And finally, one last thing worth mentioning. No matter what we do it's likely that when we stabilize macros and the module system it won't be usable with the large majority of macros already in existence in the ecosystem. In other words most macro definitions will need source level changes (in one way or another) to work with the new system (like using $crate::__helper!()
). In that sense we may want to consider some possible tweaks (like @petrochenkov's idea) to reduce the impact here and make more macros usable-by-default
Just to be clear: is the idea to fully deprecate macro_use attributes with crate-local macros too, eventually? Having two different syntaxes depending on where the macro is defined seems like a really bad state to be in for any appreciable time. Certainly not something I would think should be stabilised.
@alexreg eventually macro macros (macros 2.0) are the way of the future, which fully integrates with the module system; there are currently no interim plans to fully deprecate #[macro_use] with macro_rules! within a crate.
@alexcrichton Fair enough; thanks for clarifying.
I made a rough implementation of the "macro_rules! helper fallback" as described in https://github.com/rust-lang/rust/issues/35896#issuecomment-390477706.
As it turns out, it's not "just a fallback", it's "fallback in the middle of fixed-point resolution" again.
To truly rely on the fallback we need to expand everything before doing it, but expansion cannot progress without performing the fallback.
The workaround is similar to other situations with fallback in resolution - optimistically perform the fallback even if the resolution is undetermined (i.e. there are macros that can potentially expand to #[macro_use]), then detect "time travel" post-factum and report an error if new #[macro_use] imports appear that would be preferred to the fallback.
I think the helper_fallbacks.contains(name) check on global_macros.insert(name) should catch all the cases of "time travel", but I'm not 100% sure.
EDIT: Unfortunately it doesn't, see the second commit.
It's certainly a trade-off, but now my feeling that supporting this would harm the language long-term in favor of short-term version-migration benefits is stronger than before.
Status update: https://github.com/rust-lang/rust/pull/51145 addresses the last known regression from enabling use_extern_macros.
I propose stabilizing use_extern_macros without providing a language solution (like https://github.com/rust-lang/rust/issues/35896#issuecomment-392393413) for the macro helper problem discussed above.
I think we should do it now, so $crate::my_macro becomes available on stable as soon as possible.
We could also do the backport of the stabilization PR (https://github.com/rust-lang/rust/pull/50911) and bugfix PRs (https://github.com/rust-lang/rust/pull/50355, https://github.com/rust-lang/rust/pull/50760, https://github.com/rust-lang/rust/pull/50908, https://github.com/rust-lang/rust/pull/51145) to beta, then it will be available on stable starting with 1.27.
cc @rust-lang/lang on this last comment; nominating for meeting as well.
Here is a hopefully clearer and less shouty writeup of my perspective in anticipation of the lang team discussion.
Enabling use of individual macro_rules macros from another crate:
use log::warn;
We are all used to new language features and have seen some fantastic ones recently. From the point of view of a particular library, new features generally break down into one or multiple of:
Features that make it possible to solve some problem that was impossible to solve before in Rust. For example union allows a sys crate to expose a signature ABI-compatible with some C function that passes unions. Before Rust 1.19 it just couldn't have provided a binding for such functions. This is great and expands the world of problems that Rust is suitable for.
Features that prompt users to redesign the API of a library that was previously designed and working. For example Rust 1.20 added associated constants, prompting a redesign and major version bump of the bitflags crate. Bitflags existed and worked decently well before 1.20, but the new feature provided a materially better way to solve the problem bitflags intended to solve. The authors took into account the benefit of the improved API as a tradeoff against the cost of rolling out a change to the API. This is healthy and it is great when language features are adopted because of the API design improvements they make possible.
Ergonomics improvements that make Rust code easier to write, read, and maintain without much affecting API design. An example of this is default binding modes in match. These are great and a quality of life improvement for beginners as well as experienced users.
A defining characteristic of a foundational crate like Serde is that only type-2 features are relevant to us.
The library addresses a particular problem domain, so unless we are expanding the problem domain, type-1 features geared toward things that couldn't be done before are not relevant. The library does a thing, so the thing it does is not one of the things that cannot be done prior to the new feature.
We pay close attention to type-2 features. For example if some future version of generic associated types makes it possible to do what Serde does using a radically nicer API, we would redesign the API and release the improvement as a breaking change.
We don't pick up type-3 features until such time as we bump the required compiler version for type-2 reasons. This has to do with how much weight is on one side of the tradeoff that exists between benefit of ergonomic improvements to development within the Serde codebase, versus cost of pushing a compiler upgrade to users. Regardless of how big an ergonomic improvement may be, there exists some threshold of number of downstream users beyond which their upgrade friction outweighs our internal ergonomic benefit.
The thing that is unprecedented about use imports of macro_rules is that, while it does not break existing library APIs (a non-starter), it does break users' expectations of existing library APIs.
That is, compiler developers would categorize the change as type-2 in the sense that libraries today expose a working API that looks like #[macro_use] extern crate log, and the new feature gives them a way to expose a nicer API that behaves more like imports elsewhere in the language, use log::warn, if they choose. It is easy to see this as no different from a canonical type-2 new feature like associated constants. The authors of bitflags would have the duty of deciding whether the API improvements afforded by the new feature outweigh the cost of rolling out a change to the API.
The distinction is important because it deprives library authors of the choice of following their ordinary type-2 decision process. As an author of a crate that exports macros I cannot weigh the two choices and decide whether to stick with #[macro_use] for now or jump to use. Outside of my control, and regardless of anything I may write in documentation, users will expect to be able to use my macros because that is how they understand macros are imported in Rust.
When they write a use and receive the following error:
cannot find macro `__some_internal_helper!` in this scope
then depending on their personal experience the user will either blame Rust ("macro imports sometimes work and sometimes don't work, what am I doing wrong?") or blame the library ("I tried to use your library and your code does not compile"). Either way the ecosystem feels flaky and perpetually broken in a way that it doesn't today.
The whole thing is only a problem during a brief 2018 transition period right?
This is true. The transition period only lasts until the macro-exporting crates people use have bumped their minimum supported compiler version and moved to invoking helpers through $crate::helper! syntax.
It is hard to say how long that would be because library authors have differing opinions about how to do this correctly and each library will have hard decisions to make: Do we force the ecosystem through a serde 2.0 upgrade? Do we prolong the transition period during which our API feels flaky and broken when people try to use it in the ways they expect? Do we aggressively force a compiler upgrade on users by breaking their builds? Regardless of which way you would decide, notice how all three of these result in an ecosystem that feels unstable.
It is possible that our attention to stability has over-indexed on one aspect of stability: the "lifespan" of code, meaning how long before a compiler change breaks the code and it no longer compiles (which we promise is never, with some well-reasoned exceptions). This thread brings up "healthspan" as a different aspect of stability: how long before a compiler change breaks users' expectations around code by dropping it into a "transition period" that requires code changes to escape out of.
My experience in some large codebases leads me to value lifespan and healthspan as equally important. If it is expected that compiler upgrades are going to require ongoing maintenance investment in the form of periodic source-level changes to escape out of "transition periods", the value of never being actually broken by a compiler upgrade is greatly diminished.
In a large codebase we require the ability to write a library, finish it, and trust that it will age well until a type-2 redesign of its API.
Also the larger the codebase, the longer it takes to adopt compiler versions. Suppose that through perfectly legitimate inference breakage or soundness fixes we break X% of source lines of code every release (where X is a number much less than 1%). Those take increasingly long to work through. Also large codebases are increasingly likely to hit blocking perf regressions, again taking time to resolve. All of this means that for a large codebase there is value in foundational libraries supporting a generous range of old compilers.
The notion of healthspan is why I was particularly excited about the approach in @petrochenkov's prototype https://github.com/rust-lang/rust/issues/35896#issuecomment-392393413 which entirely avoids breaking developers' expectations of existing APIs. That is, we would change their expectations, but we would not break them because their new expectations of being able to use log::warn
would work seamlessly. I would love to see something like this adopted. I am grateful that you took the time to develop the implementation and I think it was an important thing to try. Thanks also to @golddranks for the idea and internals forum discussion in https://github.com/rust-lang/rust/issues/35896#issuecomment-390386189.
That said, if the compiler team believes that the solution there is not tenable then obviously we can't ship it.
Long-term I expect compiler versions understood by Cargo will tip the scales heavily in favor of rapid adoption of new language features including by foundational libraries. I know Josh has been working on this and I am very excited about progress there.
OK, we had a long discussion. I think there was general consensus that the current state of affairs is indeed a cause for concern and that we would rather not stabilize the features "as implemented". We did not reach any proposal that had a clear consensus. So I'm going to present a few possible routes. You will find a Dropbox Paper with our notes at this link.
One of the things that we realized is that there are two features at play here:
- Extern crate elision: the ability to use macros from another crate without writing any extern crate at all. Note that there would then be no other reason to write extern crate besides bringing in macros.
- Selective import: the ability to import macros individually through use, the way you can for other items today. (One question that was raised was: how hard would this be to do anyway?)

For some people, one of these goals may be more important than the other, which influences the shape of a satisfactory solution.
There are various concerns to be balanced:
- If use log::debug gives an error, that's suboptimal. Furthermore, we should ideally be able to guide users relatively clearly on how to import any given macro.

extern crate 4eva. One point of view is that this whole conundrum is evidence that trying to "blur the line" between "macro-rules" and "macros 2.0" was a mistake. We should back off from supporting this feature at all and instead just continue to have people use #[macro_use] extern crate foo; as they ever did. Procedural macros would still be imported with use. The main point here is that any attempt to bridge this gap will result in technical debt and language complexity that we can't get rid of. Obviously taking this route solves neither of the two goals.
Macro glob. We considered @petrochenkov's proposal of having some kind of macro glob form (some syntax proposals below). This could well address the "extern crate elision" goal but does not address selective import. It could be coupled with pub macro_rules! as well to address selective import. Some syntax proposals:
- #[macro_use] use foo; -- presumably this would imply the old "no scope" behavior though?
- use foo::macro::*;
- use foo::macro *;
- use foo::*!;

One thing to consider here is other namespaces. For example, we might (in the future) want to support use foo::type * or use foo::impl *.
Public macro-rules. The idea here was to have macro-rules macros opt-in to use
import as a way to signal that this is how users should use them. This aims to address the selective import goal primarily while trying to avoid user confusion -- that is, it effectively defines a new set of macros, kind of "macros 1.5", which are macro-rules macros that can be brought in through use
. Said macros should use $crate
to invoke helpers and so forth. This allows us to give relatively clear errors: for example, trying to import an "old style" macro can result in a message like "older-style macro-rules macros can only be imported with an extern crate
" (or perhaps a macro glob, if we offered that).
recurse-within-crate. We did have one fresh idea for how we might try to "have our cake and eat it too". We were thinking that we could potentially add an annotation like so (obviously the precise name is TBD):
#[macro_export(recurse_within_crate)]
macro_rules! foo { .. }
The effect of this annotation would be that any !
that appears within the definition of foo
(to be determined by span information) is always resolved against the source crate (macros that appear in the arguments of foo
would expand as normal, of course). This is different from @petrochenkov's proposal in a subtle, but important way: it is not a fallback mechanism. Rather, we would resolve only against the source crate, as this is usually what crates need anyway (the only exception would be macros that are using higher-order macros or something).
This allows crates like log
to upgrade while still working with older versions of rustc, but new users can import crates without pain. It doesn't seem like that much technical debt to bear (my rule of thumb, at least, for name resolution is that fallback is bad, but hard choices like "always resolve from here" are fine).
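For concreteness, a minimal sketch of the kind of crate this targets, with made-up names (the attribute argument below is only the proposal described above, not something any compiler implements):

// In a hypothetical crate `mylib`: a hidden helper macro used by the public one.
#[macro_export]
#[doc(hidden)]
macro_rules! __mylib_helper {
    ($msg:expr) => { println!("helper saw: {}", $msg) };
}

// Today the bare `__mylib_helper!(...)` call below is resolved at the call
// site, so a downstream crate that only does `use mylib::emit;` fails with
// "cannot find macro `__mylib_helper!` in this scope". Under the proposal,
// the annotation would force that call to resolve against this crate, so
// `use mylib::emit;` followed by `emit!("hi");` would just work.
#[macro_export(recurse_within_crate)] // proposed syntax, not implemented
macro_rules! emit {
    ($msg:expr) => { __mylib_helper!($msg) };
}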
I don't really have a conclusion, but it seemed like the tendency in the meeting was to want to push towards one extreme or the other:
@petrochenkov
I'd very much appreciate your feedback on this "recurse-within-crate" idea. I'll "quote" it from my previous comment here for ease of reading. =)
recurse-within-crate. We did have one fresh idea for how we might try to "have our cake and eat it too". We were thinking that we could potentially add an annotation like so (obviously the precise name is TBD):
#[macro_export(recurse_within_crate)]
macro_rules! foo { .. }
The effect of this annotation would be that any !
that appears within the definition of foo
(to be determined by span information) is always resolved against the source crate (macros that appear in the arguments of foo
would expand as normal, of course). This is different from @petrochenkov's proposal in a subtle, but important way: it is not a fallback mechanism. Rather, we would resolve only against the source crate, as this is usually what crates need anyway (the only exception would be macros that are using higher-order macros or something).
This allows crates like log
to upgrade while still working with older versions of rustc, but new users can import crates without pain. It doesn't seem like that much technical debt to bear (my rule of thumb, at least, for name resolution is that fallback is bad, but hard choices like "always resolve from here" are fine).
PS, if this idea was raised before, then I missed it, and I apologize.
I was pondering @dtolnay's breakdown of features and I wanted to relate it to the various things contained in this summary comment. @dtolnay classified features as "type 2 or 3" (I'm ignoring type 1):
- [Type 2:] Features that prompt users to redesign the API of a library that was previously designed and working.
- [Type 3:] Ergonomics improvements that make Rust code easier to write, read, and maintain without much affecting API design.
I think this is an insightful way of breaking things down, and I think it is useful to look at the proposals in those terms.
(As @dtolnay said, the current implementation of this feature doesn't really fit this breakdown. It's not a feature that crates choose to use or not to use. All exported macros are opted into it, for the convenience of their consumers, but many crates are not made to be used that way.)
The "pub macro-rules" proposal, which aims to make the new import style "opt-in", I think is an attempt to repackage this feature as a "type 2" feature: something that may be worth overhauling your crate in order to support, because users expect it (but not something you must support).
The "recurse_within_crate" proposal aims to exempt the feature from this breakdown. That is, this remains a feature that crates must use, but we provide a way for the vast majority of them to do so seamlessly (presuming they update their source if it is necessary).
It's not clear to me whether we should be concerned about crates that are never updated, which will have their macros "exposed" but which may not work -- it depends how many such crates there are and whether they use helper macros. I suspect this will be a minor problem in practice. If we were truly paranoid, though, we could say that plain #[macro_export]
means that the macro can only be used the old way, but that one can opt-in to import, e.g., by choosing between #[macro_export(recurse_at_invocation)]
(today's default behavior) or #[macro_export(recurse_at_definition)]
(the new behavior, renamed).
(Actually, I think I like this way of declaring that a macro is "use"-able better than writing pub
, since it does not suggest that it will work intracrate.)
@nikomatsakis:
The effect of this annotation would be that any ! that appears within the definition of foo (to be determined by span information) is always resolved against the source crate (macros that appear in the arguments of foo would expand as normal, of course).
How would that play with 'dynamic' macro calls, like nom
uses heavily?
/// Wraps a parser in a closure
#[macro_export]
macro_rules! closure (
    ($ty:ty, $submac:ident!( $($args:tt)* )) => (
        |i: $ty| { $submac!(i, $($args)*) }
    );
    ($submac:ident!( $($args:tt)* )) => (
        |i| { $submac!(i, $($args)*) }
    );
);
It's parsed/broken up in argument position, not expanded - then expansion happens in the body, from an ident given by the caller, but with different arguments.
@eternaleye
How would that play with 'dynamic' macro calls, like nom uses heavily?
Indeed, we discussed nom as an example in the meeting. My assumption would be that nom
would not opt into that "recursive-call" feature. The question is whether nom also uses "hidden" helper macros that users aren't supposed to know about. Or -- at least -- I thought that was the important question. Thinking about it now, I think maybe it doesn't matter so much if the helpers are hidden or not. If we're going to support selective import, you'd still like to be able to only import the things you directly reference.
So at the end of the day, the question is just whether the crate has both macros that invoke one another and the need to invoke macros from the user's crate (likely via indirection). If so, the mechanism is going to be approximating hygiene to some degree, and that seems (to me) to be approaching a line of "too much complexity".
That argues I think against permitting selective import of macros, at least without some explicit opt-in.
That argues I think against permitting selective import of macros, at least without some explicit opt-in.
That is, to clarify: if there were some opt-in mechanism, then nom
could either elect to do nothing, or rewrite to use $crate::
and opt-in to permitting selective import.
If the "extern crate 4eva" approach is taken, that means that only macro_rules!
macros are forever bound to being imported by #[macro_use]
, correct? When we have Macros 2.0 declarative macros (i.e. the macro
keyword), that would share normal use
-based imports with Macros 2.0 procedural macros, correct? If I'm understanding all that right, then I'm in favor of sticking with extern crate/macro_use. It'd be nice to have a clean split between the old system and the new system. Shipping use
-based imports only with Macros 2.0 (in both its forms) would be a nice carrot on a stick to encourage people to port their macros to the new system. And of course it avoids the confusion of having multiple ways to import macros from the old system.
@jimmycuadra
If the "extern crate 4eva" approach is taken, that means that only macro_rules! macros are forever bound to being imported by #[macro_use], correct?
Correct.
When we have Macros 2.0 declarative macros (i.e. the macro keyword), that would share normal use-based imports with Macros 2.0 procedural macros, correct?
Correct.
If I'm understanding all that right, then I'm in favor of sticking with extern crate/macro_use. It'd be nice to have a clean split between the old system and the new system.
I'm starting to lean that way myself, after having thought it over since yesterday. It's a tough call, but I feel that having #[macro_use] extern crate foo;
be the (somewhat verbose) syntax for using macros from an external crate is reasonable (and analogous to #[macro_use] mod bar
being the syntax for getting macros from a module). The extern crate
form also has this side-effect that foo
becomes a member of that module, which isn't really needed anymore, but that's ok.
I see removing extern crate
from idiomatic Rust as a key goal of the 2018 namespacing changes, and I don't see introducing a new syntax which is semantically equivalent as "blurring the lines" at all (i.e. you still can't import 1.0 macros individually). I don't have strong opinions about what the replacement syntax should be except that it should be connected to use
statements.
While from the perspective of someone who already understands extern crate
, this may seem like just two ways to do the same thing, from the perspective of someone who comes into a Rust where extern crate is not normal, the current syntax will be extremely weird. It seems much more natural to tell them that you can import 1.0 style macros using use log::macro*;
or whatever that's connected syntactically to other imports.
I don't see introducing a new syntax which is semantically equivalent as "blurring the lines" at all
Introducing _new_ syntax to permanently support a feature we intend to deprecate seems less than ideal, IMHO.
I would rather just force people to write the whole extern crate
thing, even if it's a bit weird. The rationale would be that popular macros would get updated to the new system soon anyway, so the cases where someone ends up using the old syntax would be very limited.
@mark-i-m I'm talking about a scenario where there is no alternative to be updated to in the present term: we don't support individually imported macros (which depend on hygiene to work well) until we have hygienic macros, which use the whole new macro
based syntax instead of macro_rules!
. That feature is not coming in the near term.
@withoutboats I don't think anyone is under the impression that macros 2.0 are right around the corner. Nonetheless, I don't think we should introduce a new construct solely to change the import syntax for macros 1.0.
@nikomatsakis
I'd very much appreciate your feedback on this "recurse-within-crate" idea.
Looks like a good alternative.
Flipping ident!(...)
into $crate::ident!()
where ctxt($crate) = ctxt(ident)
if ident
comes from a macro marked with #[macro_export(recurse_within_crate)]
should be a local change without far-reaching implications like the fall-back scheme, and we already have all the necessary infrastructure in place for similar #[allow_internal_unstable]
/#[allow_internal_unsafe]
.
I'll try to implement this and report back.
If I'm understanding all that right, then I'm in favor of sticking with extern crate/macro_use. It'd be nice to have a clean split between the old system and the new system.
Note that if the split remains, we'd have to resurrect macro_reexport
as well and somehow stabilize it, because pub use
wouldn't work with macro_rules!
.
For this reason alone I'd really prefer to keep macro_rules! use-able, even if the answer to the macro helper issue turns out to be "do nothing" after all.
I've submitted a PR with implementation for the "recurse-within-crate" idea - https://github.com/rust-lang/rust/pull/51496.
Un-nominating because the discussion happened and things seems to be making progress.
Do re-nominate if another discussion is needed.
Adjusting labels per https://github.com/rust-lang/rust/issues/35896#issuecomment-382012838.
I just tried out this feature on nightly. I'm surprised that the !
is not part of the macro's name in the use
statement:
use foo::bar!;
There are others who commented as much on Rust internals:
Since macros live in a different namespace, it would be handy to be able to import them independently.
Has this been considered?
@MajorBreakfast not all macros are invoked with !
. Derives and proc macro attributes are not. It would be strange to import derives and proc macro attributes with a !
, and it would be strange to import only functionlike macros with a !
when your concern is to distinguish macros being in a different namespace than other items.
Macros are not the only items in a separate namespace, there are 4 or 5 different namespaces, and the current system allows for punning between them (for example, rustc uses both a type and a function called P
). This punning will be pretty important for derives: by re-exporting the derive in the same place you define the trait, the derive import will, once this feature is enabled, "piggyback" the trait name, rather than requiring a separate import to derive that trait.
It's interesting that several users expect quite strongly the !
to be part of the name, but it seems inconsistent with our name resolution's other practices.
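As a sketch of that pattern with hypothetical crate names: a facade crate defines a trait and re-exports a derive macro of the same name from its companion proc-macro crate; the two names pun because they occupy different namespaces.

// shape/src/lib.rs (hypothetical facade crate)
pub trait Shape {
    fn area(&self) -> f64;
}

// Re-export the derive of the same name from the companion proc-macro crate
// (here assumed to be called `shape_derive`). The trait occupies the type
// namespace and the derive the macro namespace, so a single downstream
// `use shape::Shape;` brings in both.
pub use shape_derive::Shape;

// Downstream, once `use` imports of macros are enabled:
//
//     use shape::Shape;
//
//     #[derive(Shape)]
//     struct Circle { radius: f64 }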
@dtolnay @withoutboats Thanks for your explanations!
It's interesting that several users expect quite strongly the ! to be part of the name
@withoutboats I and, I guess, the other users as well didn't think about these use cases in which macros are invoked without the bang. I'm sure that everyone will accept this explanation. The piggybacking also sounds like really useful behavior.
I wonder if these users are biased by how macro imports worked before. I suspect that if this was the syntax from the beginning then there wouldn't be such complaints.
I have a major reservation about the interaction of this feature and proc_macro_derive
's attributes
. The attributes declared by attributes
ignore scoping rules: they are imported automatically along with the parent derive macro, and they can be shadowed by other macros. Consider the following crate macros
:
#![feature(proc_macro)]
extern crate proc_macro;
#[macro_use]
extern crate quote;
use proc_macro::TokenStream;
#[proc_macro_derive(Hello, attributes(Bye))]
pub fn Hello(_: TokenStream) -> TokenStream {
    (quote! { fn hello() { println!("hello") }}).into()
}
#[proc_macro_attribute]
pub fn Bye(_: TokenStream, _: TokenStream) -> TokenStream {
    (quote!{}).into()
}
What happens for a crate that does the following?
#[Bye]
#[derive(Hello)]
struct T();
It turns out, it depends if macros::Bye
is in scope: if it is, then macros::Bye
is called and erases the definition. If it is not in scope, however, then the derive runs as expected: Bye is treated as an inert helper attribute belonging to Hello, the struct is kept, and hello() is generated.
It is easy to see how this works at a compiler level: the attributes
functionality in proc_macro_derive
does not actually declare anything but, rather, tells the compiler to delay handling the unknown attribute until after the macro is run. This is the same reason that the compiler will give an error if an attribute declared in attributes
is used elsewhere. #46390 seems related, but I can't pinpoint what exactly is up there.
That said, the behaviour is ferociously unintuitive, and stabilizing it now risks creating a nasty corner case which couldn't be easily fixed. Instead, I would propose the following:
- proc_macro_derive declares for export all the child attributes named in attributes, exactly as if they were declared with proc_macro_attribute. It is thus an error to export an attribute-like or function-like macro with the same name. It is not, however, an error to export the same child attribute for multiple parents; these are considered to be the same attribute.

This would make the behaviour in #46390 and similar likely sources of confusion well-defined, if not perfect.
EDIT: I forgot that derive macros cannot change the declaration, so the child attributes are fine to persist; they just are meaningless.
@alercah
TLDR: I agree that landing ~this PR~ https://github.com/rust-lang/rust/pull/50911 and https://github.com/rust-lang/rust/pull/52081 should be blocked on some future proofing for single-segment attribute resolution, especially after seeing what https://github.com/rust-lang/rust/pull/52081 had to do with attributes.
(Multi-segment attributes are already guarded by a separate feature gate, proc_macro_path_invoc
or something, and are not stabilized right now.)
Unfortunately, name resolution for attributes is a complete mess, a part of which is already stable due to pre-existing builtin attributes and macros 1.1.
I hoped to keep whatever rules exist now, and maybe clean them up slightly after macros 1.2 stabilization, possibly with minimal breakage.
But perhaps it makes sense to do it now, perhaps with a large hammer (i.e. prohibit any ambiguities between builtin, user-defined and derive-defined attributes if possible).
Thanks for bringing this up @alercah!
@petrochenkov do you have a possible solution in mind? It seems reasonable to me to prohibit proc-macro
crates from exporting both an attribute and a custom derive that recognize the same attribute name. For resolving attributes as well, it seems like it'd be reasonable to prohibit a derive with a custom attribute and a proc macro attribute of the same name being in scope at the same time.
Would that be enough to fix the issue here? (pending better diagnostics in the long run of course)
Question again regarding importing macros.
Which one will be imported if I write: use core::panic
? The unstable panic
module, the stable panic!
macro, or both of them? Which one should be the correct behavior for Rust 2018?
@matematikaadit
Both (independently of editions).
@alexcrichton Oh my, I thought that the following was disallowed, but apparently it's not:
#[proc_macro]
pub fn bad(_ : TokenStream) -> TokenStream { "".parse().unwrap() }
#[proc_macro_derive(bad)]
pub fn deriver(_ : TokenStream) -> TokenStream { "".parse().unwrap() }
This is accepted by the compiler and only produces an error when you try to import bad
from this crate. It should definitely be disallowed. I don't think this technically needs to block stabilization, as you can't actually make any working code that depends on the behaviour, but see my comments below the line on why it probably should be fixed beforehand anyway.
I do not think that your suggestion is enough to resolve the issue. In my example above, if Hello
and Bye
were in different crates, your rule that focuses only on exports would miss the ambiguity that importing them both creates. Trying to enforce on import or use would help, but then we get warts on the macro system because use macros::Hello;
still pollutes the macro namespace with Bye
.
Unfortunately, we can't fix this purely from a naming perspective, because macros have no ability to look up names. Two derives with the same child attribute will conflict, because the only thing the macro has to go on is the token stream. I had thought that this wasn't a related issue but I was wrong, so I'm going to try to write up some thoughts about what the ideal fix might look like, and what would have to be done now to address it versus what could potentially be stabilized subsequently.
// crate1/src/lib.rs
#[proc_macro]
pub fn conflict(_ : TokenStream) -> TokenStream { "".parse().unwrap() }
// crate2/src/lib.rs
use crate1::conflict;
#[proc_macro]
pub fn conflict(_ : TokenStream) -> TokenStream { "".parse().unwrap() }
This is accepted because conflict
is not declared in crate2
's macro namespace by the function declaration. Given that we need to fix the above case, this case should probably be fixed incidentally to it, but this one should block stabilization if we're worried about future-proofing. If this code were accepted, then it could cause issues with local macro declarations because, in a world where crate2
can refer to its own macros, conflict!
becoming ambiguous would be a breaking change.
There's an issue in another direction: this is currently fine because of the limitations on procedural macro declarations (must be in root of crate, crate cannot export anything other than procedural macros), but poses another obstacle to using locally-defined macros: there is no way to declare a macro with a different name from the underlying function except for proc_macro_derive
, which means that if a crate that currently exports only macro names changed to also export the underlying functions, that would be a breaking ambiguity change.
We could work around the second concern by separating the namespaces more, as @dtolnay suggested above, but I think the rationale for keeping them as they are is compelling. In a future with local proc macros, it might make sense to change syntax to avoid the ambiguity in declarations. For instance, we could imagine macro foo fn()...
declaring a procedural macro named foo
, or macro foo fn bar(...)...
declaring a procedural macro named foo
but with function bar
. Alternatively, we could just say that the proc-macro
crate type can only export proc macros, even after local proc macros are allowed, and it's a breaking change to convert a crate away from proc-macro
.
@petrochenkov so, If I only want to import the macro without importing the module, I should use the #[macro_use]
syntax, right?
@matematikaadit
I only want to import the macro without importing the module
Why? In practice this shouldn't matter most of the time.
Technically, you can use #[macro_use]
to achieve that, yes (more or less). use
doesn't support importing from a single namespace right now; perhaps it will some day, but that's not a high priority.
@alercah
use crate1::conflict;
#[proc_macro]
pub fn conflict(_ : TokenStream) -> TokenStream { "".parse().unwrap() }
This case didn't bother me previously, because conflict
s are in different namespaces (macro and value), but perhaps we need to future proof this as well and make #[proc_macro]
generate a dummy macro, so it takes a macro slot and generates conflicts with other macro items/imports with the same name.
The module/item structure of a crate looking consistent from the outside and from the inside is a nice property.
use
of #[macro_export] macro_rules
available only from other crates already breaks this symmetry, but the fix making it available locally is in progress.
@petrochenkov yes, exactly.
@alercah and @petrochenkov I've opened https://github.com/rust-lang/rust/issues/52226 to track specifically the ambiguity between procedural macro attributes and custom derive attributes.
I'm a bit concerned about what seems to be a lack of a proper migration plan here.
When we implemented the other edition lints significant thought went into the migration plan, making sure that this was something that would smoothly upgrade with minimal annoyances. This took _way_ longer than we initially thought (if we had started now, we would not have made it in time, which is why I'm concerned).
Basically, while this is philosophically not a breaking change, it still bears many of the annoying hallmarks of breaking changes which we should help smooth over.
The current workflow on nightly is that you're just asked to remove #[macro_use]
crates and use imports instead. If you remove them, you have to manually introduce macro imports (already a problem). But even then, macro crates with nested macros break, and it's totally non-obvious what's going on.
I did this upgrade on clippy -- out of the four macro crates it uses (one of them is rustc), three broke. It seems like macros on crates.io are inevitably of the complex kind, because simple macros can just be put in the code itself. One of these crates (lazy_static
) had a fix published thanks to @dtolnay, but the only reason I found out that the problem was in the upstream crate was that I happened to be in the same room as him and thought I'd ask. I had a nice half hour of barking down the wrong rabbit holes before that. The other two crates needed to be fixed, but at least I knew what needed to be fixed thanks to David.
We can, of course, say that these are all one-time things -- macro authors need to make this change once, and it's fixed for all their consumers. But it ... isn't quite. You still need to know that your macro crate needs upgrading, and people don't cargo update
willy-nilly. Usually I've seen folks bump individual crates, and macro crates change so rarely that they never get updated. Case in point: Servo still uses lazy_static
1.0.0 from last November, and a matches
from November _2016_ (both crates have had releases since then). But we don't even need project-specific anecdotes for this, Crates.io neatly shows that the majority of the downloads for lazy_static
are still 1.0.0 or earlier. Bitflags has a similar story, with the majority being from November 2017's release or earlier.
And again, it's not at all clear that the problem is from the upstream macro crate in the first place -- this seems rather different from how I'd expect an edition upgrade to go (it's supposed to be very local!), so there's no reason for me to expect the problem comes from there. The one hint I got that this may be a crate versioning issue was when I tried building a minimal example to report a bug, and it worked fine since cargo pulled in a new lazy_static
. (I didn't interpret this hint the correct way since I thought there was a discrepancy between the way my test crate and clippy were built)
This becomes even worse when it's not your crate exhibiting the problem; I (a binary) may upgrade my cool-library
dependency to 1.1 which happens to use the new edition, and be met with compile errors that the authors of cool-library
did not have to deal with because their CI pulled in the latest lazy_static
. However my binary crate uses lockfiles so stuff is weirdly broken and I have no idea why. This feels close to breaking the interoperability guarantee of the edition. We have tons of _theoretical_ examples of how the edition interop guarantee can be broken with macros, but this is a practical one (albeit it's not _exactly_ breaking the guarantee).
This is all to say that right now I expect this to cause a pretty confusing and annoying migration. I think we should come up with something concrete that solves this, and if we can't make it in time, exclude this from the edition idiom lint system and make it a normal (but edition-gated) lint later.
A rough proposal is:
1. A future-compat-style lint for macro crates that still need the local_inner_macros thing. I feel that while the crate will continue to compile, things like this still belong under future compat.
2. cargo fix should do _something_ about these macro crates; even hardcoding a list of semi-popular macro crates that need bumping should be okay. We can likely detect such crates via a special crater run that uses the lint from 1., get authors to publish updates if they haven't already, and hardcode this list.
3. Docs on local_inner_macros, linked to from the main lint. I suspect some of this is already planned.

Given the amount of time it took for us to figure out and implement the migration plans for the other edition things I'm really skeptical we can make it, though.
(For future editions I'm considering RFCing for explicit migration plans and compiler team input on the feasibility of lints on any RFC proposing edition changes. I spent a _lot_ of time on this this year, most of which was just getting everyone on the same page on what the plan was supposed to be.)
Perhaps edition lints shouldn't push people to immediately migrate to the newest and shiniest idioms so aggressively; that's the single reason this is even an issue.
This doesn't pose a problem when new features are implemented and then gradually adopted by the ecosystem in a non-forced way.
@petrochenkov
Perhaps edition lints shouldn't push people to immediately migrate to the newest and shiniest idioms so aggressively; that's the single reason this is even an issue.
This doesn't pose a problem when new features are implemented and then gradually adopted by the ecosystem in a non-forced way.
FWIW, what you're suggesting here is the "plan of record" as far as I know: the idiom lints are not expected to be on by default when the Edition ships, and we plan to take more time to get them fully polished before changing that.
We're still asking people to voluntarily enable the lints just after transition, which means imperfect idiom lints at the time of the transition are still problematic.
We can ship this as a disabled-by-default lint that is _not_ an idiom lint, which later _becomes_ an on-by-default-on-2018 one. The mechanism allowing us to make lint levels edition-dependent is wholly separate from the mechanism marking lints as idiom lints.
But as it stands even if they're not on by default, we are asking people to use them post-transition, so we should either stop asking people to do so or ensure they're polished by the transition.
I'll chime in about nested macros being a problem. This is what I ended up with after enabling the idiom lints:
use error_chain::{error_chain, error_chain_processing, error_chain_processed, impl_extract_backtrace, quick_error};
use lazy_static::{lazy_static, __lazy_static_internal, __lazy_static_create};
I constructed these lists by wading through "macro not found" errors (which were interspersed with other errors caused by the unresolvable macro invocations being summarily discarded). It's not a great experience or result, IMO. Of course, one solution is use error_chain::*;
but that pulls in non-macro items as well.
A lot of crates use this "auxiliary macro" pattern. There are solutions available, like putting macros in a module so you could write use some_crate::macros::*;
, or using @
-rules to avoid auxiliary macros, but that's relying on crate maintainers.
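For readers unfamiliar with "@-rules": the trick is to fold the helpers into internal arms of the single exported macro, so there is nothing extra to import. A sketch with made-up names:

#[macro_export]
macro_rules! greet {
    // Public entry point.
    ($name:expr) => {
        greet!(@internal "Hello", $name)
    };
    // Internal arm, selected by the `@internal` marker token; no separate
    // helper macro needs to be exported or imported.
    (@internal $greeting:expr, $name:expr) => {
        println!("{}, {}!", $greeting, $name)
    };
}

Because the only macro invoked from the expansion is the one the caller already imported, a plain use some_crate::greet; is enough.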
To bring you up to date: the fix for that is to update lazy_static and to apply #[macro_export(local_inner_macros)]
to any other crates, not to import macros. All this has been discussed earlier.
This is still a requirement on macro crate maintainers, though.
@Manishearth I probably missed the discussion, but could #[macro_export(local_inner_macros)]
be applied automatically for crates that are compiled under the 2015 edition?
I think this is the "fallback" option discussed previously which had some implementation issues I haven't attempted to understand.
Theoretically we could, and we could nudge macro crates to do this when they upgrade. Practically iirc there are issues. Maybe this specific proposal is fine though.
@Manishearth that's cool advice for people maintaining crates, but doesn't help end users at all.
Hence the "This is still a requirement on macro crate maintainers, though.", I don't disagree with that.
You're arguing with the wrong person here :smile: , I was just bringing you up to date on the situation.
At the very least, we should have a section in the Edition Guide about "What to do when a crate's docs recommend #[macro_use] extern crate foo;
" and maybe a lint that specifically collects the "extra" macros you need to use
.
Note that the preferred migration path is not #[macro_export(local_inner_macros)]
but using e.g. $crate::__lazy_static_internal!(...)
to invoke local helper macros. The local_inner_macros
attribute is a backward compatibility provision for foundational crates like lazy_static and serde that are unwilling to drop support for old[1] compilers at this point.
[1] where "old" comically means anything older than rustc 1.30.0, which isn't even yet the nightly today.
So the "preferred" path is for crates to drop compatibility with 2015 in
order to support 2018? That's no good, local_inner_macros seems much better
then. Don't split the ecosystem.
It is not possible for a crate to "drop compatibility with 2015." Editions are not the same as rustc versions; 2015 still exists in all future compilers after 1.30.0.
Maybe I misunderstood. So if you use $crate::
in your macros, you'll be compatible with both editions, but only in future compiler versions? I still feel that we should be "recommending" the most backwards-compatible solution we have.
You have three options as a macro author right now:
- Do nothing: Works on old compilers, works on the old edition. Fixing the warning causes errors.
- Use $crate::inner: No longer works on old compilers, works on the old edition. Fixing the warning does not cause errors.
- Use local_inner_macros: Works on old compilers, works on the old edition. Fixing the warning does not cause errors.

It sort of depends what you mean by compatible. It is possible to make a crate "not compatible with 2015" in the sense that removing edition = "2018" in its Cargo.toml would make it not compile anymore. But it is still compatible in the sense that other crates in the dependency graph can be on a different edition, so there is no Python-3-like ecosystem split.
- Use $crate::inner: No longer works on old compilers, works on the old edition. Fixing the warning does not cause errors.
- Use local_inner_macros: Works on old compilers, works on the old edition. Fixing the warning does not cause errors.
Does "old compilers" here mean 1.30
while "new" is 1.31
? Why is local_inner_macros
more portable than $crate::inner
?
EDIT: I didn't realize that #[macro_export(local_inner_macros)]
(or #[macro_export(foo)]
) already works on stable, even if it has no effect.
- A1, A2: unqualified helper macro calls, imported through macro_use (A1) or through the module system (A2)
- B1, B2: local_inner_macros helper calls, imported through macro_use (B1) or through the module system (B2)
- C1: $crate:: qualified macro calls imported through macro_use
- C2: $crate:: qualified macro calls imported through module system (2018 vision)

Note that your dependencies select A/B/C and your crate selects 1/2.
| | 2015 edition | 2018 edition |
|---|---|---|
| **rustc <1.30** | Supports A1, B1; error on A2, B2, C1, C2 | (not supported) |
| **rustc >=1.30** | Supports A1, B1, B2, C1, C2; error on A2 | Supports B2, C2; warning on A1, B1, C1; error on A2 |
A few things you can see from this table:
- The warnings in the 2018 column are the idiom lints that push against keeping #[macro_use] extern crate alive.

Very helpful table! I still feel that we should be recommending B (for all macro crates -- not just "foundational", which seems like a too-fuzzy line) until "old" compilers are sufficiently old. And changing cargo fix
to suggest A1 -> B2, I guess.
Local_inner_macros can be more complicated to use than $crate::
in the case that one macro needs to invoke both helper macros and also macros from the standard library, because it declares that every macro invocation be resolved as a local helper macro. This can require a substantial amount of refactoring that is not needed when using $crate::
.
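As a sketch of that difference, with illustrative names: scheme C only qualifies the helper call (path-qualified macro calls need rustc 1.30+), while scheme B needs a forwarding shim for each standard-library macro the expansion uses.

// Scheme C: qualify only the helper; `vec!` still resolves to the prelude.
#[macro_export]
macro_rules! pairs_c {
    ($($x:expr),*) => {
        $crate::__pairs_helper!(vec![$(($x, $x)),*])
    };
}

// Scheme B: local_inner_macros redirects every bare macro call in the
// expansion to this crate, including `vec!`, hence the shim below.
#[macro_export(local_inner_macros)]
macro_rules! pairs_b {
    ($($x:expr),*) => {
        __pairs_helper!(__rt_vec![$(($x, $x)),*])
    };
}

#[macro_export]
#[doc(hidden)]
macro_rules! __rt_vec {
    ($($tt:tt)*) => { vec![$($tt)*] };
}

#[macro_export]
#[doc(hidden)]
macro_rules! __pairs_helper {
    ($v:expr) => { $v.len() };
}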
It would be unusual for cargo fix
to suggest A1 -> B2 because this requires code changes outside of the crate being fix'd.
Another problem with local_inner_macros: #52726
I guess since cargo-fix can't assume anything beyond the current crate, the only things it could do that would actually work are (a) nothing, or (b) somehow crawl the macro expansion and see which macros you really need to import.
When cargo-fixing a macro-exporting crate, there's a question of whether to suggest A->B or A->C.
It sort of depends what you mean by compatible. It is possible to make a crate "not compatible with 2015" in the sense that removing edition = "2018" in its Cargo.toml would make it not compile anymore. But it is still compatible in the sense that other crates in the dependency graph can be on a different edition.
Yes, I would say it this way: a library itself uses only one edition, but is always compatible with all editions - that is, you can depend on it whatever edition you are in. When we say that multiple editions can be compiled together, I think we are not expressing this compatibility forcefully enough: it is not possible to write a library that can only be depended on by crates on one edition or another. Library authors don't even have to worry about compatibility with multiple editions; they simply are compatible always.
(Also unless something's changed recently (and I don't think so, based on @dtolnay's post), the only thing editions change related to this issue is turning on some lints. Macro imports don't behave differently between the 2015 and 2018 editions.)
@dtolnay I want to clarify one thing based on your chart: you suggest that the "1" column options (using #[macro_use]
) will be linted against in 2018. However, based on @aturon's most recent comment, these lints will be allow by default until the ecosystem transitions more.
@Manishearth makes this comment that I don't understand:
But as it stands even if they're not on by default, we are asking people to use them post-transition, so we should either stop asking people to do so or ensure they're polished by the transition.
I don't know what it means to "ask people to use them;" I haven't heard about this plan and it seems dubious. If we think people should use these warnings, we should have them turned on. As a rule, I do not believe we should ever have allow by default lints we recommend that you turn on; this is use strict;
and to me it's a sign of something gone quite wrong.
From my perspective, it seems like we should evaluate each 2018 idiom lint for disruptiveness and turn it on as soon as it seems like a net benefit. This will probably leave the macro-related ones off for some time until the ecosystem has moved off the "A" system onto the "B" and "C" systems. It's unfortunate, but it's the long-term cost we're paying for having stabilized the "A" system for 1.0.
As a rule, I do not believe we should ever have allow by default lints we recommend that you turn on;
It's not quite this, it's not "use strict";
. We recommend you _temporarily_ turn them on, like we do for the migration lints.
The original plan was that we recommend a two step edition migration process. In the first step you turn on the migration lints and fix those, then you upgrade the edition in Cargo, and do the same again with the idiom lints. Ideally, this would be managed by cargo fix. The idiom lints aren't ones you keep enabled in perpetuity; you flip them on when you use cargo fix (or cargo fix flips them on for you), and when you're done with the upgrade you flip them off. We can make them on by default on 2018 a couple months into the edition if we wish, with the hope that everyone has cargo fix
ed them already.
The reason they're not just on by default is because these lints are super noisy and really need to be run with cargo fix.
This is all being discussed in https://github.com/rust-lang/rust/issues/52679
Given that many of the idiom lints are rather broken the current plan may just be to not recommend this for a while, and have a gradual rollout.
@Manishearth thanks for the clarification! I think really when you say we recommend that you turn them on, the issue is what behavior cargo fix
demonstrates in light of this: the lint is really just a part of the API between rustc and cargo here, mostly an implementation detail (ideally, cargo fix flips the lint on and off for you, so that cargo fix
can be a single atomic step).
I'll reply more about the general problem on the issue you linked.
So I think the open issue here can be scoped down to this: what will cargo fix
do about macro imports, given that the upstream crate needs to be compatible first? And how?
If cargo fix
can convert macro authors to the C
scheme ($crate::
) for their macros, that seems ideal. There's no reason I see to convert them to B
since by depending on 2018, they inherently require the 1.30+ compiler.
Assuming cargo fix
can upgrade a user from 1 (#[macro_use]
) to 2 (normal imports), I think there are three reasonable options:
1. Be pessimistic: don't convert #[macro_use] to normal imports, because we assume too many of the upstream crates won't have upgraded yet.
2. Be optimistic: convert #[macro_use] to normal imports, assuming the upstream crates have (or soon will have) upgraded.
3. Be discerning: we use some method (a whitelist, a heuristic?) to upgrade the user to 2018.

It's possible that our behavior should change over time.
It also occurs to me that if we do either 2 or 3, the user is likely to have a lock file that locks them to a version of the package that is incompatible. If we do any of these fixes, we should probably cargo update
the macro-exporting package as a part of cargo fix
so that users will be more likely to get the fixed code.
It also occurs to me that if we do either 2 or 3, the user is likely to have a lock file that locks them to a version of the package that is incompatible. If we do any of these fixes, we should probably
cargo update
the macro-exporting package as a part ofcargo fix
so that users will be more likely to get the fixed code.
Since I was just reading about --minimal-versions
, this should maybe be a cargo upgrade
as otherwise the user's Cargo.toml
will be claiming to work with a version that it might not actually work with.
Probably a bad idea, but we could add another hack: #[macro_export(trust_me_its_2018_compatible)]
, for macro authors to use once they've updated to B or C. Seeing this attribute, cargo fix
could recommend C2.
What I still don't get is the business about "foundational crates". Sure, crates that are managed by the core team are special and we trust y'all, but how is Joe the Macro Author supposed to know whether their crate is special enough to ignore the cargo fix
advice? Should there be an official guideline, like, "you can apply this fix when you are ready to drop compatibility with rustc v1.XY"?
@durka Since the version needed to compile a project with rust = "2018"
in the Cargo.toml is greater than or equal to the version needed to support option C, anyone running cargo fix --prepare-for 2018
has opted into dropping compatibility with rustc versions that don't support option C. For now, each crate author decides if they want to go onto 2018 or if they want to continue to support rustc versions from before the 2018 release.
But lazy_static will be threading the needle by upgrading to B.
Be discerning: we use some method (a whitelist, a heuristic?) to upgrade the user to 2018.
Since options B2 and C2 work, the only thing you have to watch out for is a dependency still using A. I assume detecting that a dependency is using option B is possible since that presumably is reflected in the metadata, so that should be easy to upgrade. Option C is likely undecidable, but the heuristic could just be "if the dependency is Rust 2018 then they should be using option C" and upgrade the user (this will break if a crate has updated to Rust 2018 but not transitioned to option C).
@withoutboats this plan seems pretty good! I think changing behavior over time is definitely something we should also try for; be pessimistic at first for macro _users_ , and over time start suggesting fixes. There's also a lot of interesting stuff here that can be done by teaching cargo fix about good and bad macro crate versions.
Building on @Nemo157's comment, if we can get this metadata for each of your dependencies:
- Which edition is it on?
- Does it use local_inner_macros?

That could be a good heuristic for whether or not we should upgrade their macro invocations. But I'm not sure how well the current setup allows rustfix to operate differently depending on the dependency metadata like this.
We could consider disallowing option C on 2015, so that anyone who wants to stay on 2015 will switch to B, which we can detect, and anyone who wants to switch to C will move to 2018, which we can also detect.
Well, we have crater, and we can use that to obtain this metadata ourselves and hardcode it (and hardcode what versions fix the problem)
I don't expect there to be _that many_ macro crates affected, especially if we only consider popular ones. The problem is that people _depend_ on these.
Wait, why is this still open? Everything on the roadmap is checked off. What's blocking this other than working out the kinks of a potential edition lint?
Tracking issues are closed when the feature is stabilized.
I have a question. Why do we need to import macros to use them? For example:
extern crate crossbeam;
use crossbeam::channel;
// Doesn't compile unless we uncomment this line:
// use self::channel::select;
fn main() {
    channel::select! {
        default => {}
    }
}
It is surprising to me that this code doesn't compile unless we import the macro. Is this intentional behavior or a bug?
@stjepang maybe channel::select!
expands to an invocation of plain select!
? That select!
invocation would be the one that is not finding a resolution in scope.
The expansion should use $crate::channel::select!
instead (or macro_export(local_inner_macros)
if you need to support compilers older than 1.30.0).
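A simplified sketch of that shape, with made-up names (the real crossbeam macros are more involved):

// The public macro forwards to a hidden implementation macro. Expanding to a
// bare `__select_impl!(...)` would be resolved in the caller's scope, which
// is why the extra `use` appeared to be required. Qualifying the call with
// `$crate::` keeps the lookup inside the defining crate (rustc 1.30+).
#[macro_export]
macro_rules! select {
    ($($tokens:tt)*) => {
        $crate::__select_impl!($($tokens)*)
    };
}

#[macro_export]
#[doc(hidden)]
macro_rules! __select_impl {
    (default => $body:block) => { $body };
}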
@dtolnay Oh right, that was indeed the issue. Thank you! :)
Could I please request some transparency on what exactly is implemented by this "tracking issue" and what continued plans there are related to RFC 1561?
My understanding of the process is that a tracking issue is not a place for discussion of how new features work (other than internal details); however, I see a lot of discussion here of what exactly the new macro modularisation rules are. Perhaps part of the problem is that RFC 1561 is vague and far too broad.
For example, 1561 declares the following which does not appear to be covered here (and does not appear to be possible on the latest nightly under either edition):
If a macro
baz
(by example or procedural) is defined in a modulebar
which is nested infoo
, then it may be used anywhere in the crate using an absolute path:::foo::bar::baz!(...)
. It can be used via relative paths in the usual way, e.g., insidefoo
asbar::baz!()
.
All documentation I can find is either hopelessly out of date or refers back to this issue.
@dhardy unfortunately the transparency here is all written down, but it takes some effort to sift through it. What's stabilized here is described online and further tracking issues track remaining work items for known unstable items in the compiler. Work that hasn't ever been implemented from the original RFC doesn't currently have tracking issues, but they can definitely be created!