It seems like a good idea to capture somewhere the feedback we receive on extension resolution.
Is there any information available on why file extension resolution was disabled?
@AlexanderOMara See
My feedback is that:
- If I import `./foobar` then of course I want to import the js file, not mjs, cjs, wasm, css, json, or other. This is also "what users are familiar with", as you noticed. If you want to introduce explicit extensions then please introduce them for all file extensions other than js.
- I don't want to import `https://unpkg.com/[email protected]/src/Three.js` in Node. It's just a silly thing in many aspects, like reliability and security.
- Node already customized `import` so it supports resolving bare paths from node_modules. Why wouldn't you make good defaults for file extensions as well (i.e. resolve to js by default)?
- I don't want `mjs`. Specifying in my package.json that js files in my module use ES6 modules (or specifying the entrypoint to an ES6 modules file) should be enough. Most node module authors already do it this way, and node is late to the party to suggest some other way, especially one totally not backward compatible with the status quo that basically requires module authors to rename every file in their repositories.
- Making the `js` extension the default when no extension is provided also doesn't make performance worse.

So for the record, I think forcing yet another pair of `.mjs` and `.cjs` extensions on the community was a mistake to begin with, and it complicates an already complicated landscape. Per-package configuration with `package.type` is already how it works, so why on earth require confirming this additionally by renaming all files in the repository? It's not like module authors will have half `.mjs` files and half `.cjs` files, because even the official node docs now say we can't and shouldn't "mix them". Either all files in a project will be in ES6 format or in CJS format, so there's no need for "file extension" granularity.
UPDATE: I think this is actually the biggest problem: https://github.com/nodejs/modules/issues/323#issuecomment-511268264 , https://github.com/nodejs/modules/issues/352
The below, while not super pleasant, could be worked around with smarter transpilers relatively easily.
Before, it was possible to write ES modules (or TypeScript modules), and publish both CommonJS and ES modules with a simple module transformation via babel (I've actually started doing this already, so my modules can be treeshaken).
Now we can't write:
```js
import {foo} from './bar';
console.log(foo);
```
We have to write:
```js
import {foo} from './bar.mjs';
console.log(foo);
```
Which means that now any transpiler also needs to rewrite the actual import path. I don't think any currently have this functionality, because until now this wasn't an issue.
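To make the rewriting step concrete, the transform a transpiler would need is roughly a specifier rewrite like the following. This is a hypothetical sketch; the helper name and the choice of `.mjs` as the target extension are illustrative assumptions, not an existing Babel plugin's API:

```javascript
// Hypothetical sketch of the specifier rewrite a transpiler would need:
// append an extension to relative, extension-less import paths.
// The helper name and the '.mjs' default are illustrative assumptions.
function addExtension(specifier, ext = '.mjs') {
  const isRelative = specifier.startsWith('./') || specifier.startsWith('../');
  const hasExtension = /\.[A-Za-z0-9]+$/.test(specifier);
  // Bare specifiers ('lodash') and already-extensioned paths pass through.
  return isRelative && !hasExtension ? specifier + ext : specifier;
}
```

A Babel plugin would apply something like this to every import declaration's source string during compilation; the pain point the comment describes is that every tool in the chain would need to grow this capability.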
This is very different from how the ecosystem has worked so far. For example, TypeScript and Webpack both work just fine with the extension-less module resolution. I would guess rollup does too.
I thought the previous design made much more sense.
When I import something, naturally I expect to get an ES module, preferring the .mjs extension, probably falling back on a .js CommonJS module exported as an ES module.
If I require something, naturally I expect to get a CommonJS module, preferring the .js extension which can't really be changed for backwards compatibility, probably falling back on loading a .mjs ES module exported as a CommonJS module.
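The preference order described above can be sketched as a small lookup helper. This models the commenter's expectation of the earlier design, not current Node behavior, and the names are illustrative:

```javascript
// Sketch of the described dual preference order: `import` prefers .mjs and
// falls back to .js, while `require` prefers .js and falls back to .mjs.
// Models the commenter's expectation, not what Node actually ships.
function resolvePreferred(base, mode, exists) {
  const order = mode === 'import' ? ['.mjs', '.js'] : ['.js', '.mjs'];
  for (const ext of order) {
    if (exists(base + ext)) return base + ext;
  }
  return null; // not found under either extension
}
```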
I get that some people don't like the new file extension, but I don't think it makes sense to create new problems and add ambiguity just to keep it.
Some feedback in favor of not having extension searching on by default
Twitter poll about compat: 65% of 1295 people favor Browser compat
@jhnns on twitter: https://twitter.com/Jhnnns/status/1003201464716726272
@mhart on twitter: https://twitter.com/hichaelmart/status/1039529625100185600
@brianleroux on twitter: https://twitter.com/brianleroux/status/1039653429272952832
Announcement tweet for PR had 150k impressions and no negative feedback
Obviously none of this is scientific... but thought I could offer some balance to the feedback
Again tho, enabling extension resolution does not in any way prevent “browser compat” - this is purely a question of whether you want the preferences of one group (browser compat folks) to oppress another (back compat folks) by making the feature off by default - because on by default causes no damage, but off by default does.
Is browser compatibility really even possible? As soon as you import another module, isn't browser compatibility lost?
```js
import {something} from 'some-module';
console.log(something);
```
In node, that would have to resolve to something in node_modules right? You're not expected to write import {something} from 'node_modules/some-module/index.mjs'; right?
Am I missing something here?
@AlexanderOMara Some browsers have shipped support for import maps which do make the code above work, assuming the appropriate import map is provided that tells the browser where some-module lives.
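For illustration, an import map like the following would let a browser resolve the bare specifier above. The file path is hypothetical, and browser support for import maps was experimental at the time:

```html
<!-- Hypothetical import map: tells the browser where the bare specifier
     "some-module" lives. The path is illustrative. -->
<script type="importmap">
{
  "imports": {
    "some-module": "/node_modules/some-module/index.mjs"
  }
}
</script>
<script type="module">
  import {something} from 'some-module';
  console.log(something);
</script>
```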
@ljharb I disagree with your claim that turning it on by default has no undesirable outcomes. I also think we can avoid boxing people into categories and creating an us vs them narrative. We all care about our ecosystem and making a great developer experience.
@AlexanderOMara See the import map proposal:
Chrome platform status: https://www.chromestatus.com/feature/5315286962012160
Tracking issue: https://bugs.chromium.org/p/chromium/issues/detail?id=848607
I think it would help a lot if the proposed changes to the ecosystem were better fleshed out and documented.
I don't think the changes are just about browser-support vs backwards-compatibility. As I understand it, there would also be a loss of certain functionality, but potentially also a gain of new functionality.
At this point, I'm honestly not sure which system I would prefer once all the pieces are in place, and currently I think it's a lot easier to see the negative implications of this change.
Let’s be clear: we _shipped_ automatic extension resolution in ESM. It’s --es-module-specifier-resolution=node. It’s just not on by default. The people asking for extension searching could simply be told to use that flag to turn it on. It’s an inconvenience, sure, but fairly minor in my opinion. The question is whether that inconvenience is justified by what’s gained from making searching disabled by default.
As far as I can tell, to @AlexanderOMara’s point, the biggest consequence of having searching disabled by default is that public package authors can’t assume its availability—which is a big benefit, in the eyes of many in this group, as that encourages packages published to public registries to be cross-compatible with browsers by default (at least as far as resolution is concerned). That’s what would be lost if the default is flipped. To me, that benefit outweighs the cost of users needing to use a flag if they want this enabled in their projects.
From a package author’s perspective, what are the implications?
They need to publish their packages with import specifiers that include file extensions. This will annoy some authors. However, such packages will work in browsers without extra transpilation or building, assuming they aren’t otherwise incompatible (like by importing native modules or Node APIs that can’t run in the browser). Even if a particular package author is exclusively targeting Node, all the package authors who _are_ targeting browsers (and/or Node) benefit from a broader ecosystem of more cross-compatible packages, as that expands the number of potential dependencies that a browser-targeting package can import.
From a transpiler’s or to-JS compiler’s perspective, what new features are needed?
None. Transpilers can add the ability to add extensions at compile time, so a specifier like './file' can be rewritten to './file.js' during compilation along with whatever else is getting converted by the transpiler. This would provide similar UX without needing the flag, if the user didn’t want to either just type the extension or use the flag.
From a user’s perspective, what are the advantages and disadvantages of this change?
See above. They need to use a flag if they like this behavior, so that’s potentially a disadvantage. The advantage is that if they use any publicly published packages in a browser context, those packages are easier to work with in that they don’t require a build process or a specially configured server.
They're already not cross-compatible with browsers, because they use bare imports. Import maps work the same way with bare imports as with extensions.
In other words, unless we ban bare specifiers altogether, "work in browsers without extra transpilation or building" either is a) identically true with or without default extension resolution, or b) is identically false with or without it.
From a user’s perspective, what are the advantages and disadvantages of this change?
See above. They need to use a flag if they like this behavior, so that’s potentially a disadvantage. The advantage is that if they use any publicly published packages in a browser context, those packages are easier to work with in that they don’t require a build process or a specially configured server.
Suppose I'm making a module that's explicitly node-only. What do I have to do to load a native module? Do users have to add a flag just to use it?
Suppose I'm making a module that has a WASM component. Does that mean using node-only functionality, or is there a way to do that in a browser-compatible way?
Suppose I'm making a module that's explicitly node-only. What do I have to do to load a native module? Do users have to add a flag just to use it?
In general the ability to interpret different formats is not affected by resolution, so that's out of scope of this particular issue. However, the list of supported module formats could differ between Node and the browser, and both have ways to intercept requests and translate to a different supported format (though I doubt the cost of doing so in the browser is worth it).
Suppose I'm making a module that has a WASM component. Does that mean using node-only functionality, or is there a way to do that in a browser-compatible way?
WASM requires all APIs to be passed in via imports, so I don't understand the question. Both browsers and Node are looking at supporting loading WASM format resources. This also seems out of scope of this issue.
Re import maps, see https://github.com/WICG/import-maps#extension-less-imports:
It is also common in the Node.js ecosystem to import files without including the extension. We do not have the luxury of trying multiple file extensions until we find a good match. However, we can emulate something similar by using an import map. For example,
```json
{
  "imports": {
    "lodash": "/node_modules/lodash-es/lodash.js",
    "lodash/": "/node_modules/lodash-es/",
    "lodash/fp": "/node_modules/lodash-es/fp.js"
  }
}
```

would allow not only `import fp from "lodash/fp.js"`, but also allow `import fp from "lodash/fp"`.

Although this example shows how it is _possible_ to allow extension-less imports with import maps, it's not necessarily _desirable_. Doing so bloats the import map, and makes the package's interface less simple—both for humans and for tooling.

This bloat is especially problematic if you need to allow extension-less imports within a package. In that case you will need an import map entry for every file in the package, not just the top-level entry points. For example, to allow `import "./fp"` from within the `/node_modules/lodash-es/lodash.js` file, you would need an import entry mapping `/node_modules/lodash-es/fp` to `/node_modules/lodash-es/fp.js`. Now imagine repeating this for every file referenced without an extension.

As such, we recommend caution when employing patterns like this in your import maps, or writing modules. It will be simpler for the ecosystem if we don't rely on import maps to patch up file-extension related mismatches.
Suppose I'm making a module that's explicitly node-only. What do I have to do to load a native module? Do users have to add a flag just to use it?
In general the ability to interpret different formats is not affected by resolution, so that's out of scope of this particular issue.
I had thought the ability to import things that aren't JS modules was part of this issue. Maybe I was mistaken.
@GeoffreyBooth sure. and i think it's a mistake that the import maps proposal isn't yet providing a more flexible means to handle that use case - but that's not something that should make node constrain itself.
@ljharb I do not see this as a constraint, but rather a useful step in iterating so that we can ensure the end result of our process is what we desire step by step. I think going conservative with our ecosystem forwards compatibility is better than diverging for backwards compatibility. This is an issue about feedback but saying we are being constrained makes it sound like feedback about why extension resolution is useful wouldn't be accepted. We can iterate and improve our modules implementation over time :).
@AlexanderOMara importing things that are not js is not the issue. We have experimental support for JSON and a PR is open for WASM. We are also doing work with browser vendors and the wasm spec authors to standardize the same modules for browsers 🎉.
The bigger issue is that browsers will never do multiple network calls to resolve a file extension, so having specifiers throughout source text without file extensions creates a universe of "node only" code. It could be transpiled, but why should it have to be? Yes, people can always choose to write code with a subset if they wish to, but my feeling on the matter is that we will over time have a more stable cross-platform ecosystem if we minimize the places where things diverge.
If you choose to use Node.js APIs, you will find yourself in a place where you diverge... but it is possible to polyfill those specific APIs (see browserify).
@bmeck because of extensionless files and type module, I’m not convinced that it won’t be a breaking change to add default extension resolution later.
The module system is not like other things; it can’t really be iterated on over time. Just like cjs, whatever we initially ship may be effectively the entirety of the system for the foreseeable future.
@ljharb
@bmeck because of extensionless files and type module, I’m not convinced that it won’t be a breaking change to add default extension resolution later.
If this is considered breaking, it just means people could opt-in via a flag, like package.json or w/e.
However, we can be more concrete here with an example. For the addition of an extension-less file to collide with anything, the specifier would have to be one that is currently invalid. Given:

/foo
/foo.js

import '/foo' would still find /foo before /foo.js with the current algorithm for extension searching. Meaning the confusing case would actually be import '/foo' with only:

/foo.js

Which would error without extension searching. Moving from error to non-error seems OK to me, and often is not considered breaking when adding features (particularly when some feature is missing, like an API).
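A simplified sketch of the extension-searching order under discussion, modeled loosely on the CommonJS LOAD_AS_FILE pseudocode in the Node docs (the real algorithm handles more cases; the extension list here is abbreviated):

```javascript
// Simplified model of CommonJS-style LOAD_AS_FILE extension searching:
// the exact specifier is checked first, then candidate extensions in order.
function loadAsFile(path, exists) {
  if (exists(path)) return path; // exact match wins: '/foo' beats '/foo.js'
  for (const ext of ['.js', '.json', '.node']) {
    if (exists(path + ext)) return path + ext;
  }
  return null; // no match: the case that errors today without searching
}
```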
The module system is not like other things; it can’t really be iterated on over time. Just like cjs, whatever we initially ship may be effectively the entirety of the system for the foreseeable future.
I disagree, CJS has evolved (slowly) over time. ESM can do the same. Whatever initially ships might want to setup forward compatibility paths if desirable, but has no clear reason it is unable to match CJS' evolution over time. Part of the problem with CJS is the outstanding number of features and dynamic behavior that it exposes publicly, and we are not looking to duplicate that to my knowledge in the ESM implementation. If there are clear paths we want to reserve, we can do so; however, without clear reasons conservative iteration seems a good path forward.
Why would an extensionless file be selected over an extensioned one, if we have extension resolution?
@ljharb because that's how existing extension resolution works (see LOAD_AS_FILE); this had the historical reasoning that it should check the exact specifier first, since it could be a real file on disk and the code did explicitly request that specific string, so it should be checked before doing any magic.
Subjective feedback from someone who has used the latest ESM implementation in a few experiments recently: I really like how it works. I appreciate its simplicity and find it easy to understand and to explain (easier than CJS). I think having fewer automatisms (“magic”) is a benefit.
@ljharb please elaborate on what you meant here:
import maps work the same way with bare imports as with extensions
Specifically, how you see importmaps not working for "extensionless mappings" to "locations having the actual extension" — or whatever I might have misread.
Note: I'm thinking here of the symmetry of export maps for the same when they move ahead.
@SMotaal you can map from lodash to path/to/lodash.js the same way you can map from lodash/foo to path/to/lodash-foo.js. The only tradeoff is that potentially the size of the import map could be much larger if two things happen: 1) default extension resolution and 2) no corresponding import map feature ships to adjust to this legitimate use case.
I see, so I think it makes sense for authors of importmaps to adhere to one style over the other to avoid extra moving parts and complexity.
I’m not sure what that means - tools, not people, will likely be authoring most import maps, and the style can vary as long as there’s validation tooling (which there will be)
@ljharb When it comes to import map entries for extension resolution, I don't think the style matters much (I think you're right that it's unlikely anyone will write a map of any substantive complexity by hand). The real concern is size: the bytes shipped in the import map (at least that's what I remember hearing @MylesBorins say). But I think that, too, is a red herring, since if you're counting bytes shipped you'll be inlining and minifying all your paths with tooling anyway, and won't be using an import map (or likely any human-readable paths).
In addition i think there’s a strong case for enhancing import maps to handle automatic extension resolution somehow, once we decide to go in the direction of automatic resolution by default.
As someone working on webpack, I agree with @rauschma: Less resolution magic is better. IDEs should support developers with imports. No one has to write these paths by hand. And if there's less magic involved, IDE support will get better.
The only resolution magic I see some benefit in is bare import specifiers. It's a good thing that we don't need to rely on a certain file layout in third-party packages.
Not everyone uses an IDE nor should they be forced to - everything is written by hand by someone.
@ljharb I agree with you here — but I think there are two extremes for things being complicated enough to need tooling:
Good complexity is redundancy: you need tools that spare you work of almost zero cognitive dissonance, i.e. running the tool in your head leads to comparable mental overhead with far less fatigue 😄.
Bad complexity is automagic: you need tools to work right in order to know if they don't, and by that time you have not run such complicated things in your head for way too long to even know you are making the right assumptions.
Extension resolution by default… depends on what’s on disk, the extensions supported by the loader, the flags that make things like symbolic links work… that’s just on the surface, ie assuming everyone did their job right today including me.
Opting in to a specific automagic resolver is one way you can always tell whether it is what broke… the flag is that opt-in here.
Sounds fair?
Depending on these things is actually very important to the variety of use cases out there - that’s the way coffeescript, typescript, jsx, etc actually found purchase, because all users had to do was configure an extension instead of updating all their import/require/script tag paths.
In other words, the magic is important and good and imo necessary.
Speaking for CoffeeScript, at least, few people use require('coffeescript/register') (to register the extension) today. That was from an earlier era before build pipelines were well established. It’s discouraged nowadays because of the performance cost and because you can’t string together multiple transformations (like to transpile with CoffeeScript and then with Babel, for example). Modern build tools are far more powerful and versatile.
One could say the same for automatic extension resolution. There’s no reason it can’t also live on today as a plugin that people enable somehow, similar to coffeescript/register; but to be more in keeping with modern trends I would expect the resolution to happen at build time, rather than runtime, for transpiled languages.
Not every use case has a build step - notably server-only ones, which is where extension resolution is needed. It’s already the case that every bundler does the resolution at build time - having the algorithm automatically resolve extensions doesn’t and hasn’t stopped that from happening.
The worst thing is when tools can't agree on the resolution logic. I've spent hours configuring Webpack, TypeScript, ESLint, you name it, to find the right modules.
Not everyone uses an IDE nor should they be forced to - everything is written by hand by someone.
I think it's even the other way round. When we have this auto resolution logic, we will need full-blown IDEs that are able to understand these tools and their configurations. When there is no extra logic involved, it's just a simple lookup on disk which is also doable for simpler editors.
@jhnns we’ve had this resolution logic for all of node’s existence, and plenty of people don’t need or use IDEs.
@ljharb
@rauschma i agree we should optimize for how code is read, and that the existing resolution isn’t simple. However i think that leveraging resolution makes code simpler to read and understand. Separately, node ESM code already will unavoidably diverge from browser ESM code because they’re different environments; and if using a build process, the specifiers won’t diverge anyways (just like they don’t now, via Babel)
@ljharb but we're in a new era of modules for the JavaScript ecosystem at large. Extensions matter; people choose the ones they need to indicate intent (give me the .js vs. the .json). So having more than one file with the same name and different extensions can be read as intending for people to pick the one they actually mean to use… would that not be a completely different premise for a more timely argument?
@SMotaal you should not need to know the format of the module you’re importing from the import side; the only actual choice of extension is on the author’s side, so that doesn’t pertain to resolution unless, as you say, there are multiple files with the same name and different extensions (which is uncommon). In that case, whether there’s automatic resolution or not, you’d need the extension to disambiguate. That doesn’t negate the usefulness of extension resolution in the common case.
@ljharb unless you do, right?
I'm just trying to find a reasonable balance, only if you need to, should you need to… know about the implied format of the module you're importing, and so better stating this is to say in some — and maybe for some most of the time — you will not need to know about the implied format from the consumer side.
updated to reflect Brad's catch — assuming you use extensions to signal format 😄
How is format involved here? Knowing a file ends in .js doesn't let us know its format (in our case it can already be 2 different formats depending on context). We should try and keep this about extension searching and not about determining format.
@bmeck until we have #283, most extensions are not ambiguous and do convey format; .js is a special case that doesn't.
@ljharb the point remains that it doesn't even without #283
you should not need to know the format of the module you’re importing from the import side
This would be the ideal, but I've never experienced that in a real-world project. Still today, I need to know about the module system and need to care about compatibility when using third-party libraries. And if we allow some resolution behavior that browsers don't understand, the situation wouldn't get better. We would be stuck with build tools like bundlers forever.
I think we should try to get rid of the necessity to use bundlers. People who just started to code should not need to setup such sophisticated tools. (Developers who want to have the best performance will always use these tools).
Extension resolution doesn't force using a bundler - just a tool to generate an import map.
Which, from what I can tell, is required without extension resolution anyway to create mappings for nested dependencies and their bare imports.
Import maps are not intended to be used for every file in a repo, only deep imports. It is not a fair comparison in this conversation. It is more akin to the behavior of our extension map proposal.
I would argue that extension resolution on a per-file basis would significantly impact the architecture of tools for generating import maps and significantly increase their size.
I wouldn't think so, any more than any other minor adjustment to the node module resolution algorithm meaningfully increases the size of webpack... The core of the tool is always going to be "serialize the possible resolutions into a map" - and yarn pnp and tink already effectively do that (just to a different format) - the exact behavior of the resolver doesn't affect the tool itself that much.
It's a gap in the design of import maps that they don't allow for wildcard extension mapping, and the current lack of that facility doesn't in any way mean that they won't be used in that way whether it requires a large import map or not.
Stepping back for a second. I'm not seeing any major push back from the ecosystem asking for this feature. Why are we still debating?
mostly agree with myles, although... on one hand, anyone who wants to use extension resolution can just generate an import map for it. on the other hand, i'm not entirely sure we've captured any meaningful portion of the ecosystem yet. the type issue with older npm versions wasn't found until it was deployed on the most used package on npm.
@ljharb I'm inclined to consider allowing wildcard extension mapping in import maps an aspect worth looking into, and at the very least it might be worth inquiring for details, in the spirit of making more informed decisions without generalized assumptions.
This thread has a lot of the same opinions and preferences. Auto-resolution has been the default for the CJS module system long enough to give people a good sample to form opinions from; leaving it off by default for long enough would be an opportunity for equally balanced sampling, in spite of and with full respect to everyone's opinions and preferences.
Forcing it on by default or not is preference, and there clearly is enough on both sides expressed.
Sampling when it is only used with --experimental-modules gives little in the way of faithful sampling of preferences, imho. So without making assumptions, it makes sense to at the very least wait until there are appropriate conditions to sample for the preference argument.
@SMotaal the difference is that forcing it off by default effectively means nobody can use it outside of a top-level app; forcing it on by default does not limit anyone who wishes not to use it in their own code. No amount of sampling changes the power dynamic between these two choices.
There’s a separate issue for members of the modules group to debate this topic amongst ourselves. I think that’s the place for that debate to continue if people still wish to.
This issue was intended as a place to consolidate feedback from external sources, like the first few comments at the top of the thread, and I think there’s value in having an issue for that purpose; but the external feedback is hard to find when the thread is diluted with lots of debate. If it were up to me I’d mark all non-external feedback comments on this issue as off-topic, and future comments should be limited to neutrally linking to or quoting external feedback. And please use #268 for debate; many of the comments here are simply repeating points already made over there.
Sorry for missing the context here.
Another issue I've discovered with this change, after creating a babel plugin to resolve the import paths to the actual file path:
There doesn't appear to be a way to create a package which can be both imported and required, where importing will favor the ES modules, and requiring will favor the CJS modules, without requiring the use of the --es-module-specifier-resolution=node option.
Before we could just supply an extension-less main field in the package.json file, and each loader would favor either the .mjs or the .js file. Alternately, we could supply an index.mjs and index.js, and the same thing would happen.
Without --es-module-specifier-resolution=node we now only get the .js file: extension-less main fields always lead to loading the .js file, and the same thing happens with index.js.
With the flag, it will still behave as before, but there's really not much point to resolving the extensions if end-users will have to use the flag anyway.
Unless I've missed something, there doesn't seem to currently be a way to make a package that plays nice with both ESM and CJS, without requiring that flag be used.
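For reference, the layout being described looks something like this (a hypothetical package; under the old searching behavior each loader picked its preferred file from the extension-less "main", while without the flag both resolve to index.js):

```
dual-pkg/
├── package.json   # contains "main": "./index" (no extension)
├── index.mjs      # ES module entry, preferred by `import` under searching
└── index.js       # CommonJS entry, preferred by `require`
```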
@AlexanderOMara that is a separate and known issue, see https://github.com/nodejs/modules/issues/273
@AlexanderOMara Having the same bare specifier resolve to separate entry points (e.g. require('pkg') returning the CommonJS root and import 'pkg' returning the ESM root) is intentionally impossible. This is briefly mentioned in the docs in https://nodejs.org/api/esm.html#esm_package_entry_points (see the last paragraph of that section) and discussed in more depth in #273 (in particular see https://github.com/nodejs/modules/issues/273#issuecomment-492408041).
Not giving a value judgment, just trying to provide context.
Thanks for the links! If I'm reading the docs right, it sounds like --es-module-specifier-resolution=node shouldn't make it possible for main to resolve differently based on loader?
In any case, the current need for specifying a different entry point to an external package is probably a bigger hurdle for tooling than specifying the file extensions within a package (how should a transpiler know how to change it?).
@AlexanderOMara The future of --es-module-specifier-resolution=node is unclear. It was released as a way to elicit feedback from users regarding this issue, automatic extension resolution. It’s not decided whether the flag will remain a part of Node when --experimental-modules is unflagged.
In particular, #273 is blocked because there is a concern that having a single specifier resolve to different files in CommonJS and ESM is a hazard. Even if --es-module-specifier-resolution=node becomes the default (as opposed to the current --es-module-specifier-resolution=explicit), that doesn’t necessarily mean that "main" would be allowed to resolve to separate files in CommonJS and ESM, because of that concern. I think that’s probably why @devsnek describes this as a separate issue, because we’ve been treating extension resolution and ESM/CommonJS entry points as not necessarily linked.
I suggest you open an issue with your concerns and we can discuss each of them in turn. Providing a good experience for build tools is one of our goals.
The future of --es-module-specifier-resolution=node is unclear. It was released as a way to elicit feedback from users regarding this issue, automatic extension resolution. It’s not decided whether the flag will remain a part of Node when --experimental-modules is unflagged.
More specifically, the resolver will _not_ ship with a flag like this to configure it when es modules are unflagged, so if you feel a certain non-default configuration is required, you should make your needs/opinion known~
Alright - I think we've had a lot of time to collect feedback. What's a good way for us to evaluate what we've heard? So far there hasn't been a clear .mjs or #private style public outcry. Is that enough of a signal? Or are we looking at something more specific at this point?
It hasn’t been released unflagged yet; usage is still sure to be minimal. I think we need to leave this open to changing until there’s been time for that usage to rise.
I'd like to make sure that we have enough of a lead time so we don't end up in a meeting where it got put on the agenda 3 days before and nobody has anything to say. Should we set a clear feedback timeline? E.g. say that we're soliciting feedback on this aspect until X (mid January?) and make sure that we properly communicate that?
In the blog https://2ality.com/2019/04/nodejs-esm-impl.html it's written that:
Brief spoiler: The filename extension .mjs will be more convenient, but .js can also be enabled for ES modules.
Could you describe how it's possible / will it be possible? Is it already implemented?
as long as we have all this browser-incompatible and slow package.json stuff i'd struggle to find a reason not to reinstate node-style resolution.
@sheerun You can add "type": "module" to the closest package.json which will switch how .js files are treated. Note: this will affect all files, generally speaking. So within such a package you cannot use .js for CommonJS files (but you can use .cjs for example).
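To make that concrete, a minimal package.json for such a package might look like this (the name and entry point are hypothetical):

```json
{
  "name": "my-pkg",
  "version": "1.0.0",
  "type": "module",
  "main": "./index.js"
}
```

With this in place, ./index.js (and every other .js file in the package) is parsed as an ES module; a CommonJS file shipped alongside it would need the .cjs extension.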
@devsnek Can you create an issue for the browser-incompatible things you found? I'd really like to keep this issue focussed on end-user feedback.
as long as we have all this browser-incompatible and slow package.json stuff i'd struggle to find a reason not to reinstate node-style resolution.
As a fellow member of this group, I would appreciate a little more respect shown toward the work we’ve accomplished together.
@jkrems i meant, if the reasons we removed the support in the first place aren't our goals anymore, we wouldn't need to worry about removing support anymore.
@GeoffreyBooth i don't mean any disrespect, i just mean it seems like the original reason we removed this feature isn't part of our priorities anymore.
i meant, if the reasons we removed the support in the first place aren't our goals anymore, we wouldn't need to worry about removing support anymore.
I got that. But my understanding is that we are still committed to having a module runtime where source code is browser-compatible. If we don't we definitely should either fix that or adjust our goals. But for that we'd need an actual breakdown of that's currently broken which is why I asked for an issue to discuss that specifically. :)
@GeoffreyBooth i don't mean any disrespect, i just mean it seems like the original reason we removed this feature isn't part of our priorities anymore.
I don't think priorities have changed on this - please say if you feel differently. We've known Node would have some divergence from browser compatibility. So the existence of that alone does not justify widening that divergence.
I've already spent a lot of time arguing against additional package.json fields, I'm not going to rehash it again. You can ignore my above comments.
@sheerun You can add "type": "module" to the closest package.json which will switch how .js files are treated. Note: this will affect all files, generally speaking. So within such a package you cannot use .js for CommonJS files (but you can use .cjs for example).
@jkrems The "type": "module" approach suffers from a very serious issue the node team itself acknowledges (that is: it's impossible to publish a package that can be used both by node versions that support "type": "module" and by older ones that do not). Without fixing this issue, package authors won't publish mjs packages even once this feature is no longer experimental.
In the linked article it says "There are efforts underway to address this". Could you describe the efforts so far and whether a solution has been agreed upon?
@sheerun I think you may have misunderstood that section. It's specifically about letting people use the same specifier (e.g. "lodash") to reference either a CJS or an ESM implementation, depending on how the package is loaded. Afaik there are no issues with publishing packages that use type: 'module' today.
If your package uses ESM, by definition it won't work in versions of node that don't have ESM support (so everything but an upcoming version of v13). If your package has both ESM and CJS, you should use a different specifier today but that's not actually connected to the use of extensions or type: module. It applies to all packages that ship ESM and CJS in the same release.
We've set a deadline for end of January IIRC to decide if and how node will support having one specifier resolve differently in require vs. import which is what the section alludes to. :)
@jkrems Indeed I don't mean that it's literally impossible to publish a package with "type": "module". It's impossible to publish one that supports both styles of imports:
const lodash = require('lodash')
import lodash from 'lodash'
Let's say I'm the author of the lodash package, which needs to support the first style because thousands of developers already import it that way, but at the same time I'd like developers to be able to import it the second way. What should my package.json look like?
@sheerun There's two very different issues at play:
The first one is relatively simple: Publish a new major that uses main and exports. This version will only support import in new node but still supports require in old node:
{
"main": "./old-node.cjs",
"exports": "./new-node.js",
"type": "module"
}
It becomes much harder for the second case: You still want to support use of require, even in versions of node that also support import. The solution we support for this right now looks like this IIRC:
{
"main": "./old-node.cjs",
"exports": { "./esm": "./new-node.js" },
"type": "module"
}
Why ./esm? Because otherwise we're running into the exact issue outlined above: the same specifier would resolve differently depending on the loader. But you can try out our current solution right now using the flag --experimental-conditional-exports. You can already publish a package using that flag, even though publishing any production packages today that depend on experimental features like modules is dangerous in general.
{
"main": "./old-node.cjs",
"exports": {
".": {
"require": "./old-node.cjs",
"default": "./new-node.js"
},
"./esm": "./new-node.js"
},
"type": "module"
}
But beware that it wouldn't work for CJS consumers on v13.x right now, at least not unless the --experimental-conditional-exports flag is passed.
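For illustration, the condition matching that picks between those targets can be sketched in a few lines of JavaScript — a simplified model, not Node's actual resolver, using the file names from the example above:

```javascript
// Simplified sketch of conditional-exports matching; not Node's real resolver.
// Given an "exports" target for a subpath, pick the first entry whose
// condition is in the active set ("default" always matches).
function matchConditions(target, conditions) {
  if (typeof target === 'string') return target;
  for (const [condition, value] of Object.entries(target)) {
    if (condition === 'default' || conditions.has(condition)) {
      return matchConditions(value, conditions);
    }
  }
  return null;
}

const exportsMap = {
  '.': {
    require: './old-node.cjs',
    default: './new-node.js',
  },
  './esm': './new-node.js',
};

// A CommonJS consumer (require) resolves "." to the CJS file...
console.log(matchConditions(exportsMap['.'], new Set(['require', 'node'])));
// ...while an ESM consumer (import) falls through to "default".
console.log(matchConditions(exportsMap['.'], new Set(['import', 'node'])));
```

Keys are tried in source order here, which is why "require" must come before "default" in the map.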
@sheerun Can you please open a new issue for this discussion? So we can keep this thread focused on extension resolution.
An issue was opened finding the lack of extension resolution unintuitive
Comment from original https://github.com/nodejs/modules/issues/442
There are two downsides to this:
TS generates js code without the extension, so if I change the ts output to "module": "esnext", it won't work.
Also, it's quite common to import without an extension, as react and vue do. I think this will bring much confusion.

I think one factor to consider in regards to extensionless specifiers is performance. While building the resolver for Parcel, we noticed that checking a list of extensions as the CommonJS resolver does resulted in a huge number of stat calls which had a significant impact on performance. One of the reasons I’m excited about Node’s ESM implementation over others is that it requires extensions. It’s more explicit, more compatible with browsers, and more performant. If node sticks with it, this will have a big impact across the ecosystem for compatible build tools and node bootup performance.
FWIW, I think other tools should change over time to match Node rather than the other way around. We’re already considering a strict mode in Parcel to turn off CommonJS-ESM interop that’s so common in other tools right now. I hope other tools follow suit.
I think one factor to consider in regards to extensionless specifiers is performance. While building the resolver for Parcel, we noticed that checking a list of extensions as the CommonJS resolver does resulted in a huge number of stat calls which had a significant impact on performance.
FYI, I think in TS we work around that by just making a single query for directory contents, instead, then filtering that list, rather than just stating all the possible things.
I quite like that importing directory indexes is removed to match the browser. Feels like a little less magic.
I think one factor to consider in regards to extensionless specifiers is performance. While building the resolver for Parcel, we noticed that checking a list of extensions as the CommonJS resolver does resulted in a huge number of stat calls which had a significant impact on performance.
@devongovett Won't searching and parsing package.json of each imported file in search for "type": "module" or "type":"commonjs" have the same penalty? (except for files in the same directory which can be cached)
More - you need to walk the spine, checking every containing folder for a package.json (we cut off at node_modules right now, iirc), rather than just querying the contents of the directory containing the desired file.
This didn't take long: https://www.npmjs.com/package/fix-es-imports
Won't searching and parsing package.json of each imported file in search for "type": "module" or "type":"commonjs" have the same penalty? (except for files in the same directory which can be cached)
It doesn't look like src/module_wrap.cc caches every directory level, only the package.json files that are actually found are cached? If I load 100 files in node_modules/pkg1/dist/ each one that is loaded causes a failed lookup on node_modules/pkg1/dist/package.json then hits the cached copy of node_modules/pkg1/package.json? My C++ is not very good so sorry if I'm misunderstanding the code there.
It should cache the non-existence of a package.json here: https://github.com/nodejs/node/blob/f17111280e34756c4c8a2d5e7825379baff78e78/src/module_wrap.cc#L589-L595. So unless there's a bug, it would still do the loop but it wouldn't actually do anything but read from an in-memory map for the misses.
FYI, I think in TS we work around that by just making a single query for directory contents, instead, then filtering that list, rather than just stating all the possible things.
readdir is significantly slower than stat so in the best (common) case where it's a .js file (the first extension we try), performance would be worse. We did actually try this and it resulted in slightly slower performance overall than just stating all the file types.
Regardless, doing any filesystem lookup at all is far worse than doing none at all. Especially if we ever want to support loading imports over e.g. HTTP. Even for relatively fast local file systems, these lookups add up on large projects with millions of files so if it can be avoided it would be far better for performance.
Won't searching and parsing package.json of each imported file in search for "type": "module" or "type":"commonjs" have the same penalty?
I really hope this is only done at package boundaries, not for each file, i.e. when you do import 'lodash', not import './some/file.js'. Relative imports should be within the same package as the parent (or they are being imported incorrectly), so there's no need to look up the type field again. If that's not the case, then I suggest changing the behavior for performance reasons.
As for transpilers, they can be updated to add file extensions during compilation. e.g. TypeScript could continue to support extensionless imports if it wanted, but compile them to include .js to point to the correct output files. This would keep the DX for TS users intact, but improve performance for runtime.
As for transpilers, they can be updated to add file extensions during compilation. e.g. TypeScript could continue to support extensionless imports if it wanted
Absolutely not. We have a policy of not rewriting your import specifiers over at TS. At best, we'll issue an error with a quick fix attached, we will not "paper over" the change for our users.
And again: performance of extension searching is a red herring. If you're not naive in your implementation, it doesn't matter how many extensions you support...
Absolutely not. We have a policy of not rewriting your import specifiers over at TS.
Why? You are compiling from .ts to .js. It makes sense that the specifier would change.
And again: performance of extension searching is a red herring. If you're not naive in your implementation, it doesn't matter how many extensions you support...
This is incorrect as I've already stated above. We tested readdir and found it slower. Nothing beats the performance of doing no work at all.
We have a policy of not rewriting your import specifiers over at TS.
This seems perfectly reasonable, @weswigham. It's a nice separation of concerns.
Is there a plan for how users will be able to import .cjs and .mjs files?
Why? You are compiling from .ts to .js. It makes sense that the specifier would change.
.ts in imports because
This is incorrect as I've already stated above. We tested readdir and found it slower. Nothing beats the performance of doing no work at all.
The point is that if you support multiple extensions and want to add support for more, the marginal cost should be near-zero, it shouldn't be a multiplier - if in the common case, a direct stat is faster than a readdir (doubtful in Windows - emulated stat is many more system calls), have you considered racing the two and attempting to work with the stat result if it comes back first? libuv is supposedly quite good at doing those kinds of fs operations in parallel.
Regardless, doing any filesystem lookup at all is far worse than doing none at all. Especially if we ever want to support loading imports over e.g. HTTP. Even for relatively fast local file systems, these lookups add up on large projects with millions of files so if it can be avoided it would be far better for performance.
TBH, I'd love it if user-defined startup snapshots could get worked out to solve this for node instead... That way much of parsing and linking and everything else could be by and large skipped as well, and is something unique to the node platform. But y'know, if you're looking for excuses to try to make node's dx match browser's (necessarily lackluster) dx, I guess extension resolution has always been an easy scapegoat.
This is probably where I remind everyone that before modules stabilize, we're _supposed_ to discuss whether we stabilize with or without the extension resolution flag behaviors - the _assumption_ is not supposed to be that we unflag without it. If that's what's happened, our flagging strategy has failed, as we haven't been able to have the discussions we needed to, but since that's a resolution from a meeting about six months ago, I guess I'm not surprised that it'd be forgotten. The current implementation ships with the flag off so we can hear what people think of it, and whether it causes undue pain or churn, as going the other way (unflagged) is much less feasible. That's why this thread was started.
Is there a plan for how users will be able to import .cjs and .mjs files?
By adding support for those extensions and adding them to our resolution algorithm that matches the node one, probably. It's _rough_ right now, as the biggest still outstanding issue is actually that we need to assume every package we find is cjs and introduce a marker for esm package (and file) types, something we'd _hoped_ the final design would allow us to avoid, but has unfortunately been unable to do.
This is probably where I remind everyone that before modules stabilize, we're _supposed_ to discuss whether we stabilize with or without the extension resolution flag behaviors - the _assumption_ is not supposed to be that we unflag without it. If that's what's happened, our flagging strategy has failed, as we haven't been able to have the discussions we needed to, but since that's a resolution from a meeting about six months ago, I guess I'm not surprised that it'd be forgotten.
It hasn’t been completely forgotten, we discussed it last week on this thread before unflagging. To be honest I haven’t been eager to have the argument again because it doesn’t seem like anyone’s minds have been changed. We’re still at the same stalemate.
I feel like if automatic extension resolution had never existed and it was proposed today, there’s no way it would get accepted into core. Folks would point out that the desired behavior can be achieved via custom loaders, and therefore that’s a better solution than baking this functionality into the core loaders and pushing the complexity and performance cost onto everyone.
At least with the flag it’s disabled by default, so it’s opt-in for the added cost, but like you said that flag was never meant to be permanent. It feels like we’re going to end up keeping that flag because the folks who are very determined that they want extension searching won’t accept dropping the flag without flipping the default, and so the flag will live on as a designed-by-committee unintended compromise. So yeah, I expect that if we go through all the arguments again we’ll end up with today’s status quo, and since that’s the case I haven’t felt any urgency to start up the debates again 😄
I feel like if automatic extension resolution had never existed and it was proposed today, there’s no way it would get accepted into core
This. It was always a bad idea, but now we're stuck with it. ESM is a clean reset point. Let's not repeat the problems of the past. Custom resolvers/loaders/transpilers can handle this outside core.
Your written specifiers is how you encode the desired access shape of your program into your code - we don't presume to rewrite that.
Yes, for your source code that's true. The TS compiler is generating the output code though, so it knows how the files it generates will be named.
ESM is not a "clean reset point", because users and tools and code still have assumptions baked into them.
The advent of ESM is not something that should be exploited to try to undo past decisions you don't like.
ESM is not a "clean reset point", because users and tools and code still have assumptions baked into them.
Incorrect assumptions. Tools like babel and typescript perpetuated the idea that ESM was just sugar over CommonJS. This was never true. Node has taken a much more measured approach thus far, and has so far treated ESM as a separate thing. I think this is great and should be continued.
The advent of ESM is not something that should be exploited to try to undo past decisions you don't like.
lol. I gave legit reasons, not just that I don't like it. I actually agree that it's convenient to not need to write extensions. But, I think that problem can be solved by other opt-in tools without bringing performance issues to all users.
ESM is not a "clean reset point"
I agree on this. I think it's very important.
But lots of system calls can be a problem. Maybe when we encounter import {a} from './a', we could detect that './a' starts with . but doesn't end in .js, and rewrite it to import {a} from './a.js' internally.
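A naive version of that internal rewrite (purely illustrative — a real resolver would also need to handle directories, other extensions, and verify the target file actually exists) is just a string check on the specifier:

```javascript
// Illustrative only: append ".js" to relative specifiers with no extension.
function addDefaultExtension(specifier) {
  const isRelative = specifier.startsWith('./') || specifier.startsWith('../');
  const hasExtension = /\.[a-z]+$/i.test(specifier);
  return isRelative && !hasExtension ? specifier + '.js' : specifier;
}

console.log(addDefaultExtension('./a'));    // "./a.js"
console.log(addDefaultExtension('./a.js')); // "./a.js" (unchanged)
console.log(addDefaultExtension('lodash')); // "lodash" (bare specifier, untouched)
```

Note that even this purely syntactic rewrite bakes in the assumption that every extensionless relative specifier means a .js file, which is exactly the convention being debated above.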
Perpetuating past decisions that break spec in the name of inertia is even worse.
undo past decisions
It's not undoing past decisions. It's avoiding making a mistake by diverging Node's ESM implementation from the rest.
The last thing the broader JS ecosystem needs is a node-specific variation of the ES module format.
If users are married to the convenience of bare imports, transpilation and/or CommonJS are always an option. In that regard, nothing that already exists will change.
They are, which is why browsers are adding import maps - to catch browsers up to node, because in this case node is better (separately, bare imports will always work in node and will require extra work in browsers; that’s not relevant to extension resolution)
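For readers who haven't seen one, an import map is a small JSON document, embedded via a script tag with type="importmap", that tells the browser how to resolve bare specifiers (the paths here are hypothetical):

```json
{
  "imports": {
    "lodash": "/node_modules/lodash-es/lodash.js",
    "moment/": "/node_modules/moment/src/"
  }
}
```

With this map in place, import _ from 'lodash' works in the browser, and the trailing-slash entry maps any specifier under the moment/ prefix.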
Better isn't exactly a quantifiable metric. If node-style import resolution adds 100ms to Time-to-First-Paint in browsers, that's a significant impact on performance.
The same applies to serverless endpoints running ES modules via Node.
If the import maps implementation can be made to work with bare imports without a significant impact on performance then -- and only then -- they'll be worth using on the client-side.
Until the impact of 'bare import' style resolution can be measured, fast-tracking node-style resolution into the ES module loader is a mistake.
Import maps and node style imports are specifically for developers at Dev time - you'll never use import maps in an optimized production system, as you'd just inline and minify all the resolutions (and their paths) in that case. I do not understand any talk of the "perf" of import maps, as they will never be optimal for production perf.
Many users of SystemJS effectively use import maps in production workflows. They provide a number of performance benefits including the ability to update only some modules in an application without having to force users to re-download all the scripts again, simple A/B testing and tiered launches of features to users. By allowing the resolver to become a source of truth for the running of the application whole new possibilities open up providing better perf and dev experience.
I think it is important to separate the discussion of bare imports from node resolution. We have the ability to measure the impact of import maps but my understanding is that the performance implications are negligible for a reasonably sized map.
@weswigham ES modules are a standard module format, not a transpiler source format. With no bundler/transpiler, there's no such thing as 'Dev time'.
The fastest package is the one that can be loaded from a warm cache, even across domains. The current practice of transpilation/inlining dependencies makes this impossible. Assuming ES modules will inherit the same patterns/practices as have been used in the past 5 years discredits their strengths.
Consider an import map that can load from a CDN and fall back to a local copy. If 90% of packages load from a warm local cache, that's 90% of code that won't need to be downloaded or parsed. CDN-hosted LTS releases of common packages could benefit many sites -- as well as reduce duplicate bloat in the local cache. The potential benefits are not insignificant.
@guybedford Having used System.js in the past, that's everything I hope for. If bare imports can be used as well without adding significant overhead, I'm all for them. I just don't want bare imports to be piggybacked into the ES module loader if they break from the web.
Consider an import map that can load from a CDN and fall back to a local copy. If 90% of packages load from a warm local cache, that's 90% of code that won't need to be downloaded or parsed. CDN-hosted LTS releases of common packages could benefit many sites -- as well as reduce duplicate bloat in the local cache. The potential benefits are not insignificant.
With double-key caching (recently implemented in Chrome) these benefits cannot apply between different origins, but the benefits do apply none the less for same-origin.
We have the ability to measure the impact of import maps but my understanding is that the performance implications are negligible for a reasonably sized map
It's an extra request...? The perf loss is in the extra network traffic that you just don't _need_ to have (esp. since import maps essentially block preloading for bare specifiers until the import map is loaded), not any runtime impact. If you rewrite all your imports to minimal length (fewest bytes) relative paths, you trivially avoid this extra network request. You can combine an import map with preload links to avoid the delay; but then you're effectively copying your map into your html, which is more bytes over the wire. Even then, for truly optimizing bytes sent (as the largest websites do), all those names are going to get minified to the fewest bytes possible. You could make the argument that you shouldn't do that, "to benefit from cross site caching" (which for security reasons isn't a thing in chrome anymore, as guy said above) - but I believe you'll find since major browsers (chrome and old edge, anyway) prefetch popular sites long in advance, for the largest, most complex sites, doing such a minification will still cut the most costs (where bytes sent = cost), and still have minimal time to render...
I don't get it, is what I'm saying. Import maps very much look like a developer convenience feature, not really something that could be used to _improve_ prod perf beyond what is possible today (except in the cached A/B test DI case guy brings up above, which does need a central registry of some kind to invalidate to be efficient - import maps can function as that, though any specifier invalidation mechanism will technically suffice). Because of that, I find any argument on "the perf implications of supporting X in import maps" confusing. Why does the perf of import maps matter? As far as I know, they were not introduced to improve perf and even admit to adding _some_ overhead. If one can admit then, that they are for convenience when doing local/small development, why not optimize for that? Why is their perf a concern _at all_? Concern that they'll actually be used, because in the real world, for most projects, developer convenience trumps most other (even performance) concerns?
I'm confused about what we are debating at this point. Can we please keep the topic focused on extension resolution? We can open a separate issue to discuss bare imports or import maps.
If "import maps can't reasonably support it" is being used as a driving reason for "why extension resolution is bad", I do not see how the two conversations can reasonably be divorced, as one has been hitched to the other.
This issue is for discussing feedback, not for us to debate the feature.
Where can we debate the future then?
@guybedford I would imagine other issues about specific topics. TBH this issue has gotten sidetracked multiple times and is pretty hard to follow / garner useful feedback out of. Perhaps we should close this and open a new issue for gathering feedback, as I imagine folks new to the conversation will likely not want to read 100+ comments before chiming in.
Right, it's the line between gathering feedback from users and ensuring we are having the debates within the group and not leaving discussions for another day where fleshing them out might lead us to better shared understanding.
@guybedford with that in mind, a new issue specifically focused on re-evaluating the decision about extension resolution, enumerating the reasons for dropping it, and working towards consensus is imho going to be far more successful than continuing to pile onto this mega issue.
There’s #268. We could also mark as off-topic all comments that aren’t user feedback.
Any objections to closing this as completed?
I would still like us to have this but I assume I'm in the minority on that.
I don't think the state of the world in the WG has changed since this comment. Not that I think this issue actually had any way to be _acted_ on in the WG, as, without clear acceptance criteria, it's all too easy to simply _do nothing_ and let the people in the "extensions bad" camp have their way. It's still anecdotally clear to me that, between how people utilize export maps (to allow importing package files without using extensions, because literally nobody expects to use an extension when pulling on a package entry point), import maps/package self names (to also get "nice specifiers" locally within a package), and the questions that still get asked, many users still prefer omitting extensions; we've just put a huge tax on doing so, one that every project now pays (rather than having this incredibly common desire supported by default).
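The export-map pattern being referenced might look like this hypothetical package.json (names illustrative): consumers can write extensionless specifiers like 'my-lib/utils', while the target paths must spell out the extension.

```json
{
  "name": "my-lib",
  "type": "module",
  "exports": {
    ".": "./dist/index.js",
    "./utils": "./dist/utils.js"
  }
}
```

Note that every extensionless subpath a package wants to offer has to be enumerated (or pattern-matched) by hand here, which is the "tax" described above.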
@weswigham sure, but this issue isn't getting feedback which is the purpose of this issue; the communication is happening elsewhere. I think this issue can be closed given that it is not proving useful.
If this is something members feel strongly about, open an issue, create a PR, build consensus. The path is there and open. No one who has been a part of this process should be left to feel their will was overruled - the process has been completely open and transparent to input throughout, and that's why it's taken so long. I understand it is work putting together a proposal. My own contributions were funded out of my own consulting income and limited time and were a large sacrifice. But it's always possible to push things in the direction you want if you really want to do that. There is nothing stopping others from pursuing work even if they don't have the full backing of their companies here too. Will is really the only limiting factor. And if things do not move in a direction you care strongly about, it is not because you were sidelined or pushed out of the process. No, rather, the blame falls squarely on you - that you did not have the will to improve something you cared about. So please don't turn around and blame a process that was fully open to you.
FWIW: This is another quick Twitter poll on this topic. https://twitter.com/rauschma/status/1290694781509795842
I am merely pointing out that the process was constructed such that, of the two _clear_ camps the WG had at the start, the onus was clearly placed on the proponents of automatic extension searching to need to continuously advocate for it, as we were clearly deadlocked on the subject, and only by calling _not_ having it "minimal" did we get _anything_ done (FWIW, at this point, I regret not arguing that reusing the cjs resolver wholesale should be minimal, from an implementation PoV). I (or anyone else) have absolutely no obvious way forward with this at this point in time; I know there are members of the WG who will block on there _being_ extension searching. There is nothing new I can say or add. So, looking backwards, when we conceded that a minimal implementation did not need it, we conceded that this topic would need a continuous driver if it was actually going to happen (and some way to convince others, which isn't a given at all), both in terms of cost and time. In reality, both of those are finite, and this was essentially de facto decided on more than a year ago now; especially once we were able to table it as "temporarily unimportant".
So it's not that my position has changed, it's simply that the social capital of attempting to _block_ something is greater than the social capital required to _do nothing_. You can say we should "build consensus", but this is one of the subject areas where that clearly has never occurred, in _either_ direction - the only way we progressed _at all_ was through _concession_, and _not_ consensus. (Not even mentioning the implicit browbeating into concession that occurred anytime a block looked inevitable by mentioning moves to "rough consensus".) Nobody's fundamental views were changed; all that's occurred is that some of the people carrying the torch have burned out from trying to get the point across. But it seems obvious that at least within the WG, those of us who care about this are in the minority (not that we have a record of that, since in our quest for absolute consensus we fear votes), and that seriously pushing it forward would harm our ability to advocate for _other_ areas, too (either in social capital or group dynamic), and it's a mistake not to acknowledge that.
As I said, there is no real way to drive this forward at this point in time; simultaneously, this issue was never _really_ settled, technically only tabled for future discussion. Even were it untabled and used as a blocker for marking as un-experimental (which is maybe a joke, having already come so far as unflagging), all I would likely be accomplishing, given the known state of other members, is my own social exodus from the WG (and two or three unproductive meetings), so there's no drive to do so.
I will state this outright: I am not, and have never been, convinced that we should stabilize without node-style resolution. The expectation gap (between transpiled modules and the implementation) is still huge, and the alternatives we offer shift the burden (for having "nice" specifiers) from runtime performance to developers minds (needing to remember to manually create and maintain specifier mappings) - in terms of cost, JS is already a very high level language and we _should_ be optimizing for the later in most tradeoffs (not that it's even a given that the cost of extension searching is meaningfully greater than what we support now anyway!). _However_, what I _have_ been convinced of is that there are people I _can't_ convince, and that _I_ am the one more readily willing to make concessions for the sake of progress on the system as a whole - history bears this out.
@weswigham my comment wasn't entirely focused on you but more to anyone here who feels they have been sidelined in this process. Specifically what I take issue with is the claim of being left powerless while at the same time taking no action or making no contributions towards the goals of the group.
It would have been far more efficient and less of a waste of time for this not to have been an open process. This very specifically was a fully open process, though, which is an absolutely tremendous thing to have accomplished.
I don't think pursuing any directions, even when against what might seem like a majority, would ever lead to lost social capital here. I'm personally glad you've been part of the process to date and that despite your frustrations you have remained in attendance. I'm also really sorry if you feel that the process hasn't met your ideals for what you were looking to accomplish. The things likely to lead to lost social capital are unprofessional tone, personal attacks, attempts at strong-arming, or being unable to compromise on deadlocks. Blaming the process is certainly not a good look though, and if there's a specific problem with the process - then work to change that in the process!
If you really want extension searching, the onus is on you to create a proposal and add an agenda item. The ideal timeframe for that would have been a few months ago, so at this point the bar is very high indeed to get such changes through. It would likely have a few blockers (yes, likely including myself), but nothing a vote couldn't ultimately overrule with enough consensus building between those who care about it. Extension searching has always very clearly been a feature that would require a vote either way, so if you want to bring such a thing back to the table, now is certainly the last call.
Extension searching has always very clearly been a feature that would require a vote either way, so if you want to bring such a thing back to the table now is the last call certainly.
I think what @weswigham rightfully called out is that by keeping extension searching out of the minimal implementation, there was a de-facto bias towards not having it. So there were two positions in the working group: Having browser-like behavior and having require-like behavior. And only a vote _should_ have resolved that impasse. But we went with a way where the browser-like behavior "won" by default, without ever needing to gather support in a vote.
What would be fair, I think, and also consistent with past discussions (IIRC), would be that the --experimental-specifier-resolution flag needs to be cleaned up. And the onus for that _should_ fall onto the people who benefitted from the default answer "back then" - the browser-resolution camp (myself included). And it may require a vote if the current consensus doesn't support committing to the status-quo.
Having browser-like behavior
Of note, and called out way upthread, browser-like specifier behavior is pretty much out the window at this point, what with export/import map support. The only leg left standing in this area is "the complexity of tools designed to translate from package.json export/import maps to browser import maps", which is a mightily specific thing to optimize the entire specifier resolution system for, IMO, and I'm _really_ not sure _the complexity of the internals of one or a handful of tools_ are worth _subverting years of existing expectations on all potential code_.
Of note, and called out way upthread, browser-like specifier behavior is pretty much out the window at this point, what with export/import map support.
I meant it as a short-hand for relative and absolute ("non-bare") URLs, specifically. There are currently no rules for resolving bare specifiers in the browser, so that's kind of out of scope (import maps aren't a final standard yet). And for relative and absolute ("non-bare") URLs, it's very much exactly what the browser does. Unless I'm missing some piece there?
The browser, absent import maps, doesn’t support bare specifiers at all, only URLs - and node’s resolution only overlaps with data URLs, and arguably relative file URLs (which are really paths, not URLs, despite file: being a URL protocol).
I very much agree that “browser-like” is never going to be fully obtainable unless a more fully featured proposal than the current import maps proposal ends up landing in browsers; and i strongly agree with wes’ comments above. What should have required consensus was deviating from require, and it’s highly unfortunate that those of us advocating for require-like behavior got put into a position where we were scary bad blockers, and where those advocating for no extension lookup were the “good guys” who just want to see ESM shipped.
This issue definitely shouldn’t be closed.
I meant it as a short-hand for relative and absolute ("non-bare") URLs, specifically. There are currently no rules for resolving bare specifiers in the browser,
The distinction doesn't matter as far as what tooling is needed to get an arbitrary node module running in the browser. If you know and control the source, then you can include all extensions and have a crossplat library (and could have done so even if extension searching was enabled), but if you _don't_, the distinction doesn't matter, as your libraries may use export maps, which will need tools to modify/bundle the maps and/or code (the complexity of which is largely immaterial, as those tools aren't every-library-on-the-platform).
Additionally, I keep saying this, but it's _not_ specified in the browser for relative specifiers - the browser just fetches whatever resource is provided at the URL by the webserver, extension or no (and provides an Accept content type to the server). Take unpkg's webserver for instance - the webserver is clearly configured to return a specific js file for https://unpkg.com/react. I get that IIS only does default lookups for, like, index.php and index.html by default, but it's configurable to do the same for js, too. Heck, that's part of the reason index.js is a thing! My point being, the _browser_ doesn't _require_ extensions either; it's not a _given_ that a relative import will have an extension! Yes, a dumb static file server will just serve things as they are, but that's not the _only_ way things work! You can't just say "browsers require extensions" because it's patently false. You can't even say "webservers require extensions" because that's not true either; at _best_ you can argue something like "common webhosts serve files only at extensions" and my argument is simply "why does node's default dx need to cater to the least common denominator of webhosts?". Why can node, since it is a specialized runtime for running js code, not provide _conveniences_ for the developers writing that code, should they choose to use them?
Like, node _requiring_ extensions on relative imports is strictly _less_ capable than what an arbitrary browser/webserver combo is capable of!
It seems to me that the real failure here wasn't in our inability to reach a consensus on this issue, as it's clear that there are diverging opinions that simply won't be reconciled; our failure was in not coming up with a definitive way to resolve this question. We created the --es-module-specifier-resolution flag with the expectation that user feedback would be clear enough and plentiful enough to tell us which way to go; but I think it's safe to say that that hasn't happened. So we're simply left at a stalemate, with the automatic-resolution proponents feeling embittered because they felt that they compromised in letting the release go forward without their preferred implementation, and then that ended up being apparently the final design as there's been no clear signal from the public that the will of our users matches what shipped.
Based on comments on this thread and elsewhere, and based on the one vote we took regarding unflagging without automatic extension resolution a few years ago, it seems clear to me that if we _had_ taken this to a vote, the current extensions-required implementation would have prevailed in a vote within the modules group. And the lack of a deluge of issues on the Node repo implies to me that the user base isn't too upset by the current implementation. So even if we didn't get here through a process we can all be happy with, it seems to me that the extensions-required implementation _is_ the preference of both a majority of the modules group and of the Node-using public. I think what would make me feel better about this conclusion, and would hopefully help the automatic-resolution proponents better accept it, would be more concrete indicators from our stakeholders (package authors, tool authors, general developers, etc.) that they agreed. The survey that @SMotaal tried to put together had the potential to give us this broader data, but that effort fell apart due to infighting within the group over what questions the survey should contain (because even deciding what to ask the public influences what answers you'll get). Ultimately this speaks to me of the dysfunction of this group: we can't even agree on a way to settle our disagreements.
So I guess where do we go from here? I think if the automatic-extension proponents are still interested in pursuing this, we _can_ try to find consensus on a way to settle this question definitively, such as through some kind of democratic process like a survey of users or a vote of the modules group or of all Node contributors or all Node groups. Speaking for myself, that's what I would need to convince me that I'm wrong and that this should be changed. Likewise, if such a vote or survey reaffirms the current behavior, I would expect the automatic side to accept the election results and move on.
And it's not for nothing that the situation today is much better than it was when this issue was opened. Not only does "exports" allow pretty specifiers, but loaders allow opt-in to the automatic behavior even without the --es-module-specifier-resolution flag. All we're really arguing over at this point is whether the automatic behavior should be enabled by default rather than opt-in by users.
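The loader opt-in mentioned above could be sketched roughly as follows. This assumes Node's (still experimental) module customization hooks and their documented resolve(specifier, context, nextResolve) shape; the extension list and fallback strategy here are illustrative only, not the actual --es-module-specifier-resolution behavior.

```javascript
// extension-search-loader.mjs - a sketch of an opt-in extension-searching loader.
// Hypothetical: the extension list and retry strategy are illustrative, not Node's.
import { existsSync } from 'node:fs';
import { fileURLToPath, pathToFileURL } from 'node:url';

const EXTENSIONS = ['.js', '.json', '.node'];

// Pure helper: the candidate specifiers to try for an extensionless import.
export function candidates(specifier) {
  return EXTENSIONS.map((ext) => specifier + ext);
}

export async function resolve(specifier, context, nextResolve) {
  try {
    // Defer to the default resolver first.
    return await nextResolve(specifier, context);
  } catch (err) {
    // Only retry relative specifiers that lack an extension.
    if (/^\.\.?\//.test(specifier) && !/\.[a-z]+$/.test(specifier)) {
      for (const candidate of candidates(specifier)) {
        const file = fileURLToPath(new URL(candidate, context.parentURL));
        if (existsSync(file)) {
          return nextResolve(pathToFileURL(file).href, context);
        }
      }
    }
    throw err;
  }
}
```

Registered with something like node --loader ./extension-search-loader.mjs app.js (the registration flag has changed across Node versions), this restores extensionless relative imports for a single project without changing the platform default.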
So, if you want feedback, I _still_ get weekly pings asking me to modify TS to paper over the lack of extension searching, and to "automatically append" .js to the end of extensionless imports. It's anecdotal, sure, but I'm met with _very_ hostile responses every time I say "just include the .js in the import yourself, it works fine". There's a massive collection of tools growing to add these extensions at build time, because people just _don't want to start writing them_. Of course we never got any kind of feedback here or anywhere else, this is an issue tracker for a working group, not a feedback forum or survey. But the TS issue tracker is our feedback forum (for better or worse), and I've been getting the brunt of this for months.
I've been following that specific issue for a while, and I also expected import './file.ts' in a src/app.ts file to become an import './file.js' in a dist folder, since a compiler by its nature is a transform of the source, which can surely include the file extensions. Having ./file.js in a TypeScript file refer to file.ts seems odd to me and many users, and is certainly unintuitive. But I wouldn't push the cause of that problem entirely onto Node.js by any means, since it's very much a TS-tooling-specific problem in being unable to support outputExtension or an output extension mapping system. It's also a documentation / awareness issue in TypeScript, by taking such an unintuitive stance without very widespread education on it.
given that this is not a correctness issue, I think it is somewhat inappropriate to take a prescriptive approach to this. if you have some moral objection to code written for nodejs not working in browser when served with a static web server then you should convince people of that, not force it on them.
I've been following that specific issue for a while, and I also expected import './file.ts' in a src/app.ts folder to become a import './file.js' in a dist folder since a compiler by its nature is a transform of the source, which can surely include the file extensions.
As I've explained countless times, that's a leaky transform, which is why we won't do it. We _do not_, generally, override the runtime resolver (like webpack or parcel may do); what you _see_ as your specifiers is what you get in your output, and we're smart enough to know that a reference to ./a.js is a reference to the built output of ./a.ts. Now, we never had this question when people were able to elide extensions, since ./a could mean either, and people were OK with that, since it was the _preferred_ style. Moreover, since we can't realistically append .js (_or_ substitute .ts with .js) without bundling a runtime hook into your app that intercepts require (and potentially mucks it up for deno or ts-node), we couldn't find all such imports even if we tried, since imports have a dynamic component! So yes, while there _are_ ways to bypass the _issue_ by complicating your runtime, the fact stands that people have an expectation for how the runtime should _already_ behave.
We're not even being asked to support the "new way" of resolving esm specifiers, because we _already do_, because it's a subset of the old way (sans exports)! People are asking us to be able to _write ./a in their source code, and have it mean ./a.js at runtime, like they want!_
@weswigham I fully understand the reasoning and I'm not by any means wanting to inform TypeScript what to do. I'm simply stating my opinion that seems to be shared by others that this behaviour could be seen as unintuitive and there are possible alternative designs that could be seen to be more intuitive. For example, treating relative specifiers only (as defined in HTML) as permitting extension mapping through configuration could be one mechanism. The reason I'm stating this opinion is only to counter your assertion that Node.js put TypeScript in this situation, when we must look at the fact that TypeScript is making design decisions here, and the blame cannot be directed to Node.js, when making those design decisions without adequate user education is the underlying reason for confusion.
We are _refraining_ from designing something to cover up what the runtime does _not_ do, but users still want. That is _very_ different than designing something in conflict with the runtime. The _runtime_ doesn't support any kind of extension mapping thing, nor does it have a great need to. We do not _want_ to be extending the resolver at compile time.
We do not want to be extending the resolver at compile time.
I must still admit I don't understand why. Also, isn't paths already an extension? TypeScript also has to integrate with the Node resolver for type lookups - is that not a resolver extension too?
Also, in theory a file mapping is not a resolver extension it is a file system remapping.
Let it be known that people who do not use typescript also dislike the lack of extension resolution.
I must still admit I don't understand why. Also, isn't paths already an extension?
- paths only affects .d.ts locations for .js files, since they (used to) often be held in a folder separate from normal dependencies; moreover, this has no effect on paths actually _in_ your code, or on runtime behavior.
- It was designed for amd modules, not cjs or esm; it's from a time when bower was still in active use. Some people use it nowadays for local monorepo development, because the local package linkages there can be very nonstandard, and the monorepo layout/tool in use may not know how to handle TS types on its own. (Though normally just building them into your packages just works.)

TypeScript also has to integrate with the Node resolver for type lookups - is that not a resolver extension too?
No, we reimplement the resolver wholesale; we run in browser runtimes where node's resolver isn't available, but we'll still analyze code intended for node. That's a small part of why we care so much, we have to maintain a parallel implementation of the thing, and not just the _current_ version of it, but most recent past versions, too (and then provide flags for old/new behavior, depending - that's why we've been trying to wait for all the new stuff exports to stabilize, so it can all be behind one setting; so you could consider that support "extensions", but it'd feel disingenuous to say caring more about back compat then node itself qualifies as such). The more differences there are between past and current versions, the more confused our users get.
Putting these two arguments aside, the main argument to consider is that there is a file system mapping happening. The module at file:///path/to/project/src/file.ts is being mapped into a module at file:///path/to/project/dist/file.js (or file.mjs). Because the extension of the file is changing, the linkage between the modules before and after the transform is changing. The invariant that holds in all module systems is that relative paths with a file extension are always supported. Thus a file with a .ts extension can be remapped to a file with a .js or .mjs extension as part of the file remapping process so long as it is a relative specifier. All of this happens without any resolver extension being necessary and is the same type of mapping performed by standard build tools like RollupJS and esbuild code splitting.
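The relative-specifier remapping described above could be sketched as a build-time transform. rewriteSpecifier here is a hypothetical helper, not part of any tool's API; it only touches relative specifiers, since only those are guaranteed to be plain file references that a src-to-dist transform controls.

```javascript
// Hypothetical build-step helper: remap relative .ts specifiers to .js as part
// of a src -> dist file transform. Bare package specifiers pass through
// untouched, since the build does not control what they resolve to.
function rewriteSpecifier(specifier) {
  const isRelative = specifier.startsWith('./') || specifier.startsWith('../');
  if (!isRelative) return specifier;
  return specifier.replace(/\.ts$/, '.js');
}
```

For example, rewriteSpecifier('./file.ts') yields './file.js', while rewriteSpecifier('lodash') is returned unchanged - the same kind of extension-exact relative linkage that RollupJS and esbuild code splitting emit.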
All of this happens without any resolver extension being necessary and is the same type of mapping performed by standard build tools like RollupJS and esbuild code splitting.
Hold it there; those tools bundle their own resolver into the bundle. They no longer care about extensions at all - webpack even goes so far as to just use IDs to refer to each module anywhere it's precalculated the links. Point is, we don't bundle a big runtime like that.
I specifically did not mention Webpack - I'm referring to code splitting outputs from RollupJS and esbuild, which only rely on relative specifiers to exact file extensions as the basic primitive (with externals of course).
The --es-module-specifier-resolution flag is unlikely to work for this. Is it okay if we put another field in package.json for this?
It seems to me that the real failure here wasn't in our inability to reach a consensus on this issue, as it's clear that there are diverging opinions that simply won't be reconciled; our failure was in not coming up with a definitive way to resolve this question. We created the --es-module-specifier-resolution flag with the expectation that user feedback would be clear enough and plentiful enough to tell us which way to go; but I think it's safe to say that that hasn't happened.
Because it is safe to say that nobody uses native ESM in Node. None of my projects, at work or private, can be converted. Most of the tooling lacks proper support: berry, typescript, webpack, eslint, jest. Support is either partial or would require a massive amount of work with little to no benefit. Lots of people have transpilers in their toolchains that compile ESM to CommonJS and are by and large unaware. That's why you do not see a lot of feedback. It is more visible in the corresponding tools' repositories, where people are harassing maintainers.
I think many people like me just wait for things to "Just work". I have been observing ESM progress in Node for the last 3 years now, from the initial work in node-eps. This is moving incredibly slowly because of changes in resolution. You cannot expect adoption if a migration requires changing every relative import statement and every index.js. That's why sooner or later all tools will be forced to support re-writing paths.
Node.js was so late to the ESM party, but wants to change everybody's assumptions about resolution.
I'm sorry, but I don't find any of these arguments persuasive; nor do I find this ongoing debate a productive use of our time. @ljharb at least opened this issue to try to collect feedback from across the web although it's been hijacked; and @rauschma opened a Twitter poll:

Excluding the “just show me the results” folks, that’s 39% for mandatory extensions, 30% for automatic resolution, and 31% that don’t care; from a sample of 294 votes.
I think efforts like this poll are the way forward to try to build support for any potential change, if one is desired.
People who prefer providing extensions can provide them regardless of what the default here is. What I see from that poll is 30% of devs saying they miss a functionality; that's pretty huge.
Seriously; if 30% of your users said they wanted something that didn't affect what the rest of your users could do, I don't know why you wouldn't consider it...
Excluding the “just show me the results” folks, that’s 39% for mandatory extensions, 30% for automatic resolution, and 31% that don’t care; from a sample of 294 votes.
Again, another poll that is just misleading. The workaround only applies to external dependencies. Not only is the suggested workaround bad because it leads to fragmentation, it also does not address the core issue.
As a result, your Node.js application written in TypeScript can end up with something like this:
import module from 'main-module-package'; // with a single main
import cjsSubmodule from 'main-module-cjs/submodule'; // CJS modules still search for extensions?
import submoduleMapping from 'es-module-package/submodule'; // with subpath mapping
import submoduleNoMapping from 'es-module-package/submodule.js'; // without subpath mapping
import base from '../base/index.js'; // index resolution no longer supported, so index.js must be explicit
import local from './local-module.js'; // relative imports need to have an extension
Now in my TypeScript code I need to know how each module can be imported. For relative imports I need to import using the transpiled extension, and for existing code I need to refactor or import index.js explicitly.