.mjs Input Files
https://github.com/microsoft/TypeScript/issues/27957

- require always does a CJS resolution.
- .mjs resolution, but error on that.
- import(...) would... need to not be imported.
- .mjs files?
- .ts files to .mjs files? That'd be a massive breaking change.
- module field?
- .mts and .cts?
- .mtsx and .ctsx?
- .d.mts and .d.cts?

Take the following example:

- foo/package.json
- foo/index.js (this is ESM)
- foo/index.cjs (this is CJS)

require("foo") breaks because it'll resolve to foo/index.js.
import "foo" doesn't work because it won't resolve to /index.js, because Node thinks it's too magical for ESM.
.mjs in an import path is doable long-term, but .mjs output is troublesome.

--emitExtensions and Importing .ts Files
https://github.com/microsoft/TypeScript/pull/35148

- "moduleResolution": "deno"?
- https:// paths? https://github.com/microsoft/TypeScript/pull/35206
- .js file: https://github.com/microsoft/TypeScript/pull/34713
We're not rewriting paths.
Not now, not tomorrow, not ever.
I have seen this several times but haven't seen details about it. Can someone help explain it for us plebs why this is "100% off the table, never to be considered, never to be discussed, never to be brought up"?
I suspect there is a good reason for this, but it would really help me accept it if I could read the discussion that led to such a hardline stance. I understand it is hard, but given the other options it feels like we really should be exploring all possibilities fully, including hard ones.
We're not rewriting paths.
Not now, not tomorrow, not ever.
I have seen this several times but haven't seen details about it.
Agreed. But if we are not rewriting the extension, then how do we emit code that is valid for Node's module resolution and for Deno?
I have seen this several times but haven't seen details about it. Can someone help explain it for us plebs why this is "100% off the table, never to be considered, never to be discussed, never to be brought up"?
Let's say you write some code in a JavaScript file
var x = y + z;
If you paste that code into a TypeScript file, it means
var x = y + z;
This is the fundamental promise of TypeScript: you can take some JavaScript code, put it in a TypeScript file, and it means the same thing it meant before. I would argue it is the most important design principle we have, because it's the only reason JS -> TS migration is possible and the only reason JS <-> TS interop is sane.
This is why we don't have extension methods, even though it would be nice.
This is why we don't have operator overloading, even though it would be nice.
This is why we don't have exceptions on missing property access, even though it would be nice.
This is why we don't have automatic binding of this on class methods, even though it would be nice.
This is why we don't have native integers, even though it would be nice.
This is why we don't have exceptions on divide-by-zero, even though it would be nice.
This is why we don't have automatic prefixing of this. on class members, even though it would be very nice.
We've kept that promise on literally any random JS snippet you can think of, and while some JS code may have type errors, it does the same thing at runtime that it did before. Changing your file extension from .js to .ts will not break your program.
Now you can say "Well that promise sucks, you should break it and just rewrite paths because rewriting paths is the best possible thing a programming language can provide". OK, fine.
Our stance has always been that you should write the import path you want to appear in the emitted code, and add path mapping or other configuration to make TypeScript understand what that path should resolve to. This is 100.0% consistent with the idea that you should write the JavaScript you want to appear in the emitted code and add type annotations or assertions to make TypeScript understand what the type intent of the code is. In fact, it's not even a different principle at all, because an import statement is JavaScript and we have already established that you should write the JavaScript you want to be emitted.
The thing I see every time is that people go through this cycle:
The right fix is to either:
TypeScript isn't here to provide a module aliasing system. If your loader supports such a system, great, use path mapping to describe it. TypeScript isn't obligated to invent new ways of adding epicycles to module resolution, and doing so would violate one of our core principles.
But again, let's say you think that rule is bad. What happens if we start rewriting import paths tomorrow?
Well, today, you write the import path that you want to appear in the emitted JavaScript file. You have two paths to consider here:
This is already insanely complicated: paths, baseUrl, baseUrls, the module resolution setting, the location of the originating file, the effect of symlinks, file extension priorities, the fact that there are really three different kinds of paths each with their own independent resolution strategies, etc., etc.
Tomorrow, if you write an import path, you now have three things under consideration:
This is taking a complicated 2-dimensional math problem and moving it to 3 dimensions. People can already barely understand the interaction of include, exclude, and module imports - do we really want to make this the most complicated configuration system ever imagined? The user confusion will be unending.
That said, this is 0% about the difficulty of such a system. The existing resolution system is already difficult; we are not afraid of doing difficult things. We are opposed to doing things that violate our fundamental promise that the JS code you wrote is the JS code you get.
If you find yourself trying to write a valid runtime path, and can't get TypeScript to understand how to resolve that path to the target module, tell us about it. We have half a dozen module resolution flags and will keep adding more until any valid runtime path you write can be reasonably resolved to a corresponding file or declaration.
Conversely, don't tell us that you wrote an invalid runtime path and want us to fix it for you! This is the same as saying that you wrote var x = y + z; and want TypeScript to emit something different because you really wanted some other JavaScript - that's not what we do, and not what we've ever done.
I get the principle now, and I propose a change in my --emitExtension PR at https://github.com/microsoft/TypeScript/pull/35148#issuecomment-564361133
Does it seem okay now?
Currently, import * as React from 'react'; and import { PureComponent } from 'react'; as handled by default by TypeScript are not compatible. Only import React from 'react'; is supported (the whole CJS module as the default export).
Dynamic modules have not gained consensus within the working group.
Alternative solutions have not been accepted either.
esModuleInterop can be enabled for users wishing to output ESM.
import * as React from 'react' / import { PureComponent } from 'react'; is not treated as an error but will error in Node's implementation when targeting "module": "esnext".
Extensions are required; current auto-suggestions do not add extensions automatically and still report the code as OK, which will definitely cause confusion.
Extensionless imports failed to gain consensus although discussion is still ongoing.
Authors can add extensions themselves for ES output.
Authors must take care to add another package.json if they have multiple targets, as .js cannot be used for both targets.
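For example, one common shape of that extra package.json (a minimal sketch; the dist/esm and dist/cjs directory names and field values here are hypothetical) is a root manifest marked "type": "module" plus a nested manifest that flips the CJS output directory back to CommonJS:

// <package root>/package.json
{
  "name": "my-lib",
  "type": "module",
  "main": "./dist/cjs/index.js",
  "module": "./dist/esm/index.js"
}

// <package root>/dist/cjs/package.json
{
  "type": "commonjs"
}

With that layout the ESM build can keep the .js extension, while everything under dist/cjs is still loaded as CommonJS.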
package.json exports
A new feature TypeScript doesn't understand. Basically a package can be published with a package.json like:
{
  "name": "my-package",
  "exports": {
    "./foo": "./dist/foo.js"
  }
}
And consumers can then do:
import foo from "my-package/foo";
This is a new feature that lets package authors freely change their package structure without affecting consumers.
Only import foo from './node_modules/my-package/dist/foo.js' works for now.
This is different from the already-supported "my-package/foo.js" form; the feature simply can't be used without TS support.
.mjs / .cjs / "type": "module"
Currently TypeScript uses the existence of import/export to distinguish modules from CommonJS. Node, however, uses the package.json "type": "module" | "commonjs" field to determine whether .js is CJS or ESM. .cjs and .mjs are always treated as CJS and ESM respectively, regardless of "type".
Some already existing modules cannot be used in TypeScript because of this (e.g. idlize).
Automatic detection was rejected due to various hazards. I can't find the actual reference for when this decision was finalized.
Instead dual packages will be replaced with (the currently experimental) conditional exports if no require(esm) or other such solution is resolved by January 2020.
There's no compromise if authors want to use .mjs/.cjs authored packages. Currently import IdleValue from 'idlize/IdleValue.mjs' is simply not supported.
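For reference, the conditional exports mentioned above let a single specifier resolve to different files per consumer; a minimal sketch (package name and paths are hypothetical):

// package.json of a hypothetical dual package
{
  "name": "my-dual-package",
  "exports": {
    ".": {
      "import": "./esm/index.js",
      "require": "./cjs/index.cjs"
    }
  }
}

import 'my-dual-package' then resolves to ./esm/index.js, while require('my-dual-package') resolves to ./cjs/index.cjs.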
This is already insanely complicated: paths, baseUrl, baseUrls, the module resolution setting, the location of the originating file, the effect of symlinks, file extension priorities, the fact that there are really three different kinds of paths each with their own independent resolution strategies, etc., etc.
I have never even considered using these features for this reason. However, existing complexity doesn't mean the alternative system increases complexity. It may be the case that within roots that have a tsconfig.json with such a feature, they cannot use baseUrl / baseUrls / paths / etc.
Speaking for myself, all I really want is a tool that turns a graph like this:
Into one where the .ts files are re-mapped to whatever format I want.
Maybe I want modules:
And maybe I want CommonJS with .cjs to sit alongside the other module graph:
In these graphs the specifier always points to the type of resource I want, but what I want .ts to be converted to might vary depending on target. Relying on ./noextension doesn't even work if those output graphs go into the same directory, like so:
In such a graph ./a resolves to multiple files, but given a resolution order it'll always resolve to one of them. So simply outputting the specifier with no transform doesn't work.
I can't find the actual reference for when this decision was finalized.
Because it hasn't been. There are just individuals with opinions - I'm one of them.
Because it hasn't been. There are just individuals with opinions - I'm one of them.
Well, some agreement must have been achieved to unflag the current implementation (even if it is still experimental); this seems like a pretty strong signal that the core implementation of modules is nearing readiness, even if there are still rough edges.
Also, the problems will still be present even with auto-detection, as TypeScript will still need to learn to understand the extensions if it wants to be able to support importing packages that use .mjs.
Although I do think this is one of the lesser issues compared to the others I mentioned as TypeScript authors can always decide just to publish non-mixed types for the time being.
It is how it is because it's easier to add extension resolution back in than to remove it. That's what we could agree on.
In the Node runtime, as a CommonJS consumer, you can't interact with ESM at all.
Just to clarify: you can't require ESM, but you can import() ESM using dynamic import.
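A quick sketch of that distinction from a CommonJS file (the imported file name is hypothetical):

// in a CommonJS module
// const esm = require('./thing.mjs');    // throws ERR_REQUIRE_ESM
import('./thing.mjs').then((esm) => {      // dynamic import() of ESM works from CJS
  console.log(esm.default);
});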
I get the principle now, and I propose a change in my --emitExtension PR at #35148 (comment).
Does it seem okay now?
I have fully rewritten the PR; for anyone interested, see https://github.com/microsoft/TypeScript/pull/35148
Just saw that PnP had been discussed:
What are the problems with pnp?
- Executes arbitrary .js file.
- The defaults make little sense in the editor - means that users need to configure their editors to load tsserver with yarn. Very opt-in.
- Also, while guidance is to stick to one version of package manager, users sometimes mix/match package managers and different versions.
- Kind of hard to be sure what doing the right thing means here that doesn't require the same level of configuration as an LS plugin that overrides resolution.
I'm curious if there are suggestions you have regarding our design that would make it easier to reach a solution (outside of plugins - I mean default support)?
Apologies for discovering this issue so late, and for my limited understanding of TypeScript's constraints regarding this issue. Back in 2016 I added support for import and export to CoffeeScript, and we took essentially a "passthrough" approach: CoffeeScript code of import 'foo' was output as JavaScript code of import 'foo', and it was the responsibility of some other tool in the build chain to convert that import 'foo' into require statements or whatever else the user wanted.

For an intended runtime where ES modules are supported, like modern browsers, this works great; the author just needs to write browser-compatible import statements using URLs. For Node 13+ with native ESM, the author needs to write the URLs that Node would run, e.g. import './file.js', and set package.json "type": "module" or set up another build step to rename .js files to .mjs.

I gather from the discussion here that lots of people prefer to have TypeScript be their only build step, so a secondary step for renaming files is undesirable. Has it been considered that the TypeScript compiler could have an option to just output import and export statements more or less as written? And then users can follow the pattern I just described, where "type": "module" is set and the original TypeScript contains Node-runnable statements like import './file.js'. Is the issue that TypeScript itself needs to follow the URL in order to load the imported file and get its types, and therefore it needs to find file.ts instead of file.js?
The conflict stems from TypeScript assuming that when you write TS code, the code will target a single runtime environment and you will know the file names/paths at dev time. Many people want a workflow that doesn't know file names/paths until compile time, and the introduction of NodeJS's .mjs filename requirement together with browsers' need for explicit full file paths makes it so now users have to choose at _dev time_ what runtime they want to target, rather than waiting until compile time to make the decision (at which point you can do things like multiple builds).
the introduction of NodeJS's .mjs filename requirement
In the final implementation, Node supports ES modules that use .js. Does that not help this issue?
In the final implementation, Node supports ES modules that use .js.
I haven't been following this aspect of NodeJS closely, but that statement doesn't align with the comments at the top of this issue (that mjs extension is required).
If NodeJS does go with .js extensions for all files (no .mjs), then that mitigates the immediate problem, but it still leaves the more general problem that it requires dev-time assumptions about the final output format. For example, imagine a runtime that executes TS natively, or imagine TS compiled down to CLR or JVM bytecode instead of JavaScript. In any of these cases, it would be incorrect to put a .js extension on the module import line because the extension isn't known until compile time. These tools could all probably be hacked to do compile-time or runtime translation of the .js extension to something else, but I'm not a fan of forcing tooling authors to put hacks in.
I have tried to rewrite extensions at compile time in my PR mentioned above (--emitExtension), but the TypeScript team did not accept it.
I haven't been following this aspect of NodeJS closely, but that statement doesn't align with the comments at the top of this issue (that mjs extension is required).
Use the "type": "module" field in your package.json.
I haven't been following this aspect of NodeJS closely, but that statement doesn't align with the comments at the top of this issue (that mjs extension is required).
I'm on the modules team for Node.js. There was an earlier experimental ES modules implementation in Node 7 through 11 that required .mjs, but since 12.0.0 the current implementation lets users use either .mjs or .js. To use .js, users need to add "type": "module" to their package.json: https://nodejs.org/api/esm.html#esm_enabling.

This was added specifically to support file types such as TypeScript and CoffeeScript and JSX that have their own extensions (.ts, .coffee, .jsx, etc.) and compile to JavaScript. Node considers both methods (.mjs or "type") to be equally first-class and fully supported, so you shouldn't feel like you _have_ to support .mjs if doing so is a burden. Obviously ES modules in .js will require a package.json for the project, so some users might want to be able to output as .mjs in order to save shell script-like files written in TypeScript; but that seems like a minority use case for TypeScript users. There will probably be other build tools that prefer or require .mjs eventually, but you can tackle those cases when they arise.

What issues, if any, are there with TypeScript outputting ES module .js files that Node can run?
What issues, if any, are there with TypeScript outputting ES module .js files that Node can run?
@GeoffreyBooth Given what you said above about latest NodeJS, it sounds like "type": "module" is the solution for writing a library that works for both browser and NodeJS, though you'll still either need to write in the extension at dev time (not a problem if browser and Node are your only runtimes), or use a TSC transformer, or a post-build transformer, or use a runtime that can do runtime path transformations.
An example of where this is problematic is ts-node, which runs on NodeJS and hooks into the NodeJS module loader. Unfortunately, the NodeJS module loader doesn't call back to external module loaders if the path has a .js extension, so if you write import from './foo.js', ts-node never gets the opportunity to load ./foo.ts and the program will not run. ts-node can work around this by doing path transformations at compilation (which is just-in-time), but that adds a lot of complexity to what is currently just a NodeJS module loader (now it needs to hook into the TypeScript emit process to change what is emitted).
NodeJS could give external module loaders the ability to hook into JS file module loading (an opportunity to do path transformations) which would solve the immediate problem for ts-node (this may be something you care about) but it doesn't solve the broader issue of making TypeScript (the language) agnostic to runtime environment.
So ts-node is a whole separate issue, in that it's currently hooking into Node's CommonJS loader but now there's a separate ES module loader; if it wants to support both CommonJS and ES module files then ts-node will need to provide two loaders. We're still designing an API for hooks for Node's ES module loader. Transpilation is one of the core use cases, and is already possible via the hooks just merged onto master: https://github.com/nodejs/node/blob/master/doc/api/esm.md#transpiler-loader. Keep in mind that this is very experimental at the moment. If I were ts-node I'd give that work more time to bake before I'd add support for ES modules.
it doesn't solve the broader issue of making TypeScript (the language) agnostic to runtime environment.
I don't follow. If the user can write import './file.js' in their original TypeScript, and TypeScript will output that unmodified, and Node can run it under "type": "module" (as could any other ESM-supporting environment, from browsers to Deno); then how is this not agnostic to runtime environment?

This gets back to my earlier question. _Can_ a user just write import './file.js' in their original TypeScript, when the only file on disk is file.ts? Will that screw things up in TypeScript's compilation and type checking, where it tries to find a file.js and can't?
Can a user just write import './file.js' in their original TypeScript, when the only file on disk is file.ts? Will that screw things up in TypeScript's compilation and type checking, where it tries to find a file.js and can't?
Yes. If you are targeting only browser and NodeJS, this is the commonly recommended strategy. If you are targeting things besides those two (like ts-node or some hypothetical future non-JS runtime), then this strategy doesn't work (without hacks) because your emitted files may not have a .js extension. The TypeScript compiler will _guess_ that when you say import from './foo.js' you _probably_ mean import './foo.ts' for the sake of type definition lookup. This is problematic if you have a foo.js/foo.d.ts and a different foo.ts on disk that doesn't match, but I'll acknowledge that you probably deserve pain and suffering if you do that. (Joking aside, it _is_ common to have index.html, index.css, and index.js, so it isn't totally crazy to imagine the extension being the only differentiator between files.)
For an example, one can imagine a runtime that runs TS natively (no transpilation to JS). If you were to do import from './foo.js', that wouldn't make sense because there is never a JS file, neither on disk nor in memory. The runtime could have hacks that say "if the user imports a JS file, ignore what they said and look for a TS file instead", but then what happens when your TS-native runtime wants to support loading actual JS files (e.g., for backward-compatibility support)? You can introduce more hacks like looking for the file on disk to see which exists... but that doesn't work in environments like browsers where checking for existence is expensive.
Currently, TS is agnostic to the target language/file types. At dev time you don't have to make a decision on target runtime; you can put that decision off until compile time or runtime. Requiring that users explicitly provide an extension at dev time breaks that agnosticism and pushes TS toward being a gnostic language (coupled with the runtime target). This breakage of agnosticism is what I dislike about the current proposed path forward on this issue.
FWIW, I do appreciate that MS has made their stance on this clear, and I respect that they disagree as to the value of runtime agnosticism.
So I don't consider ts-node a "runtime," in the same way that CoffeeScript's coffee command or Babel's babel aren't runtimes. They all just transpile files on the fly before passing them to Node for execution. Node is the runtime, in my opinion. I know it gets a little blurry since Node itself wraps V8, but I don't commonly see ts-node and coffee etc. grouped along with "true" runtimes like Node and Deno and browsers.

It seems to me that if the TypeScript compiler itself can see file.js and know to look for file.ts instead, or as a fallback if file.js doesn't exist, then ts-node could do the same. That would seem to me the simplest solution, and the most "correct." Allowing the user to type import './file' or import './file.ts' is a bit magical, as "real" runtimes like Node or browsers won't support such specifiers (at least without special configuration like a loader for Node or an unusually configured server for browsers). I would expect that TypeScript authors understand that their code isn't actually running as TypeScript, but is becoming JavaScript first and being evaluated as that.
code isn't actually running as TypeScript, but is becoming JavaScript first and being evaluated as that
If you believe TypeScript should only ever compile to JavaScript and will never be executed natively (without transpilation) and will never compile to anything other than JS (e.g., WASM, native, etc.), then specifying .js extensions in your import statements at dev time is reasonable. If you think that at some point in the future TS will target runtimes other than JS, then specifying the .js extension at dev time is not as reasonable an option, because you may not know the runtime target until compile time.
As an example, Kotlin can compile to JVM bytecode or JavaScript. It is reasonable to write a Kotlin library that isn't runtime specific. If Kotlin required you to specify the file extension of relative imports, you would not be able to author code that worked with either runtime... it would change what is now a compile time decision (runtime target) to a dev time decision.
If you believe TypeScript should only ever compile to JavaScript and will never be executed natively (without transpilation)
What exactly is "executed natively (without transpilation)," and are you sure you want it? Consider a theoretical version of V8 that executes native TypeScript instead of native JavaScript. Either it needs to do type checking at runtime, and therefore by definition be slower than V8 for equivalent JavaScript; or it ignores types, and effectively is transpiling TypeScript to JavaScript before evaluating it (or semi-transpiling it, by removing/ignoring the type definitions). Is it "real" TypeScript if it doesn't check types when evaluating .ts files? Since this isn't Java and we don't have a compile stage here, since we're "executing natively." Or if you _do_ have a compile stage, well, that stage is creating output JavaScript and we're back to where we are now.

I think in practical terms the closest we're likely to get to a seemingly-native TypeScript runtime is Deno, where the TypeScript compiler is built into Deno and it transpiles on the fly before executing the output JavaScript. The advantage of transpiling is that then you can execute standard JavaScript, and there are JavaScript engines like V8 that are hyperoptimized for that. It's quite a burden to maintain your own engine like V8, just as it's a major investment to build your own browser from the ground up (just ask the Microsoft Edge team). I think realistically any TypeScript "runtime" is going to build on top of existing JavaScript engines.

None of this means that TypeScript needs to require .js in import specifiers. That was just the solution I proposed because I thought it would work today, with both TypeScript and Node as is. I think if you want to provide users the appearance of a "native" runtime, regardless of what actually happens under the hood, TypeScript should expect .ts extensions in its import specifiers, the way Deno does. The TypeScript compiler would look at its configuration to know to convert .ts extensions in import statements to either .js or .mjs, depending on the file extension being used for the output files that the compiler is saving (so import './file.ts' becomes import './file.js' if file.ts is itself being transpiled into file.js).
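A tiny sketch of the kind of rewrite being proposed there (a hypothetical helper, not an existing compiler option):

// Hypothetical: rewrite a relative '.ts'/'.tsx' specifier to the configured output extension.
function rewriteSpecifier(spec: string, outExt: '.js' | '.mjs'): string {
  const isRelative = spec.startsWith('./') || spec.startsWith('../');
  if (!isRelative) return spec;              // leave bare specifiers like 'react' alone
  return spec.replace(/\.tsx?$/, outExt);    // './file.ts' -> './file.js' or './file.mjs'
}

rewriteSpecifier('./file.ts', '.js');   // './file.js'
rewriteSpecifier('./file.ts', '.mjs');  // './file.mjs'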
The TS authors are trying to justify outputting invalid JS. Nice read!
@timonson there is no invalid specifier according to JS; the JS spec doesn't govern it. TS is not outputting anything invalid here, no matter what string they choose.
@ljharb OK, I stand corrected. But it doesn't really change what I think about this design choice. People must spend a lot of time with unnecessary complexity because a main principle in design - simplicity - was neglected. What do I consider simple in this regard? Well, we are using URLs for almost everything very successfully for decades. It is a simple and complete string. We connect through URLs to just everything from everywhere. Now we have non-compatible ES Modules everywhere although the basic concept of ES Modules is amazing. I guess this is why we can't have nice things.
And by the way I love TypeScript and admire the authors, too. I just hate this specific design choice.
Nobody in node has been using URLs for modules for the entire decade itâs existed.
TS is not outputting anything invalid here, no matter what string they choose.
@ljharb I weakly disagree with this statement. TypeScript knows that it will be emitting ./apple.ts as ./apple.js, and it knows that you are doing import { ... } from './apple'. TypeScript has all of the information required to identify that the resulting emitted files will not work in any browser. TypeScript also has all of the information necessary to fix this for you, by appending the extension that TypeScript will itself be appending to the emitted filename.

To put it another way, I didn't choose to emit a file named apple.js when I wrote a file named apple.ts; TypeScript made that choice (and maybe some day in the future it will choose to emit apple.mjs instead per a configuration option). As the author of the file, I may not even know what the final emitted file extension will be (especially true if we end up with configuration options to emit .mjs, as you may have different tsconfig.json files for different builds), but the TypeScript compiler does have this information available to it at emit time.
But after all, the TS team won't do the path rewrite:
We're not rewriting paths.
Not now, not tomorrow, not ever.
To elaborate: part of the reason we will never rewrite paths is that we can't reliably rewrite _all_ paths - in addition to the ones in static import statements, _dynamic_ imports can have dynamic arguments that we couldn't possibly remap in every case. So our stance is to never perform any remapping or editing, so you can predict that what you wrote is what's in the output, without any obscure rules governing when it does or does not work.
@weswigham What about specifically just rewriting extensions, especially if this is a feature that is off by default but can be opted into via configuration? I feel like TypeScript _could_ reliably rewrite extensions of files that it knows exist, for statically defined specifiers. That's a much smaller scope than 'rewriting paths,' and it would solve this use case.
So for someone who really wants an extension rewrite today, I have a typescript custom transformer for it.
https://github.com/Jack-Works/ttypescript-browser-like-import-transformer
@weswigham you could wrap all dynamic import paths in a helper that does the rewriting at runtime?
you could wrap all dynamic import paths in a helper that does the rewriting at runtime?
Such a large generated runtime component is something we're not interested in. We've tried very hard to keep our helpers minimal; we do not provide a runtime, per se, and don't provide full polyfills ourselves. _If_ you wanted that runtime behavior, you'd need to provide it yourself. Plus, such a helper is inherently fallible in the presence of edge-case files with strange chains of extensions like "file.js.ts" when used with CJS-style resolution. :(
@weswigham you could wrap all dynamic import paths in a helper that does the rewriting at runtime?
(My custom transformer did that)
I think for dynamic import() specifiers, the author has opted into needing to know what the hell they're doing, and will need to have figured out what the specifiers will be at runtime and will generate the specifier accordingly. This is also an edge case. I think TypeScript can certainly say that they only rewrite extensions and only for static specifiers. @weswigham?
The way to be sure that there isn't some CommonJS trickery going on like file.js.ts is to check the disk when rewriting: if './file.ts' _exists,_ and the configuration says that file.ts will be output as file.js, then rewrite the './file.ts' specifier to './file.js'. Don't rewrite any extensions otherwise.
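A rough sketch of that disk-check rule (a hypothetical helper; it assumes the importing file's directory is known and a synchronous filesystem check is acceptable):

import * as fs from 'fs';
import * as path from 'path';

// Hypothetical: only rewrite './x.ts' -> './x.js' when x.ts actually exists next to the importer.
function maybeRewrite(importerDir: string, spec: string, outExt: string = '.js'): string {
  const isRelative = spec.startsWith('./') || spec.startsWith('../');
  if (!isRelative || !spec.endsWith('.ts')) return spec;
  const onDisk = path.resolve(importerDir, spec);
  return fs.existsSync(onDisk) ? spec.slice(0, -'.ts'.length) + outExt : spec;
}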
This is also an edge case
The edges of a system are what define its limits.
You do realize that, under a scheme like that, resolving that specifier went from "it does what it does based on cjs" to "that, except for these cases, except in these scenarios", right? It's a minefield. Especially when you consider cross-project references, and references to (declaration) files that just straight up are not in the current build. Hell, the declaration file for a file doesn't even encode the original source file's extension in any way (it could be tsx (and therefore jsx) or ts or js or jsx) - it's just associated by location.
This is a minefield we will not enter.
"it does what it does based on cjs"
But this is the problem. CommonJS-style resolution is a Node-specific thing. TypeScript isn't and shouldn't be defined by Node.
If it were me, I would create a new configuration option called "rewrite extensions of static imports of transpiled files" that does just that: when the transpiler encounters a static specifier of a file that the transpiler itself will be transpiling later (or has already transpiled), that specifier is rewritten. It's that straightforward, and it's not a minefield. Make it an experimental option if you want, and see what edge cases it can't cover. If too many people find too many cases where it doesn't rewrite as they'd expect, drop it. But I'm not persuaded that it's unachievable, or that the definition of when something will be rewritten is so difficult to fathom that users can't grasp it.
I'm not persuaded that it's unachievable, or that the definition of when something will be rewritten is so difficult to fathom that users can't grasp it.
We added an option called exclude that filters what include picks up; with regularity someone shows up and literally insults our intelligence for making this setting not affect the totally unrelated process of module resolution.

A lot of our emit happens through ts.transpileModule, which works on one file at a time; all of our regrets are about how this function can't handle all TS programs.

What is a file extension, anyway? Is it unfathomable to see a repo with filenames parser.ts, parser.json.ts, and parser.js.ts? What does an import of parser.js or parser.json mean in that setting? Is the answer so obvious that no one could ever be surprised? Can you design this in a way that adding a new file to disk never changes the output of "unrelated" files? How does this work in hosted scenarios where TS can't see what's on disk?
The current rule could not possibly be simpler, and cannot possibly break a working program. You'd have to convince us that there's a huge pot of gold on the other side of the rainbow to give that up.
What is a file extension, anyway?
the extension is always and only the final piece (x.slice(x.lastIndexOf('.')), ignoring extensionless files for this pseudocode); that's how Node operates. AFAIK only Rails via Sprockets has the convention that in .a.b, both the a and the b matter.
.d.ts (ノಠ_ಠ)ノ彡┻━┻
I want to repeat this because it is such a strong point:
A lot of our emit happens through ts.transpileModule which works on one file at a time; all of our regrets are about how this function can't handle all TS programs.
This is the thing - a mode where you always apply a file extension change is wrong because it can't resolve. A program that resolves to determine emit based on extensions is fundamentally incompatible with transpileModule and all the single-file-at-a-time JS compilers out there like Babel.
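To make the single-file model concrete (a sketch; the source text here is made up), ts.transpileModule only ever sees the string it is given and cannot know what './apple' refers to on disk:

import * as ts from 'typescript';

const source = `import { apple } from './apple'; console.log(apple);`;

// transpileModule sees only this text. It has no idea whether './apple' is apple.ts,
// apple.js.ts, or a .d.ts describing a plain JS file, so it cannot safely decide what
// extension the emitted specifier "should" have.
const result = ts.transpileModule(source, {
  compilerOptions: { module: ts.ModuleKind.ESNext, target: ts.ScriptTarget.ES2019 },
});

console.log(result.outputText); // the import specifier is emitted exactly as written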
And in either of these modes, "what if the .d.ts files on disk lie?" is a question that people keep hand-waving away. If an import of ./foo resolves to ./foo.d.ts, but is actually ./foo/index.js at runtime, you're hosed either way. You can say "well, that .d.ts should reflect the real state of the world", but that's exactly how .d.ts files are often written.
A lot of our emit happens through ts.transpileModule which works on one file at a time
Forgive my ignorance, but how do you determine what files to transpile? I assume there's some configuration to define something like "include ./src/**/*.ts," right?

So basically, step 1 is to identify all the files that match that glob; and then step 2 is to transpile each of them. Since in step 1 you have the full list of all files that will be transpiled, in step 2 you can pass that list into transpileModule so it knows what files are on the list for every specifier it encounters, so it can rewrite those extensions _and those extensions only._ The option which enables this should be explicit that it's only rewriting extensions _for files that TypeScript is transpiling,_ to ignore cases of import statements of non-local files or external libraries.

I understand the reluctance to fix something that isn't broken, in TypeScript maintainers' view, but browsers and Deno will never support CommonJS-style implicit extensions and it's not likely that Node will either (for ESM). If that's what TypeScript continues to insist upon, it will increasingly feel like a bug to users. Maybe that's not a "pot of gold," but I think good UX is worth striving for.
@GeoffreyBooth
Since in step 1 you have the full list of all files that will be transpiled
This is what we're saying is not the case. transpileModule shouldn't have to hit the disk or resolve at all. The model that most tools like bundlers, Babel, and others operate under is that files only have the local context, nothing more. transpileModule is a concrete implementation of a model that's prevalent in the ecosystem.

Forgive my ignorance, but how do you determine what files to transpile? I assume there's some configuration to define something like "include ./src/**/*.ts," right?

The option which enables this should be explicit that it's only rewriting extensions _for files that TypeScript is transpiling,_ to ignore cases of import statements of non-local files or external libraries.

include and files specify the entry points, through which we find the transitive closure of files to be compiled via things like transitive imports. But trying to say "this flag explicitly doesn't work the way TypeScript works in other cases" is what makes Ryan's statement even more pertinent.

We added an option called exclude that filters what include picks up; with regularity someone shows up and literally insults our intelligence for making this setting not affect the totally unrelated process of module resolution.
browsers and Deno will never support CommonJS-style implicit extensions and it's not likely that Node will either (for ESM). If that's what TypeScript continues to insist upon
Again, nobody's insisting on this. You can add a .js to the end of your import paths and it will work!

You can add a .js to the end of your import paths and it will work!

For Node, but for IDEs? Is Code going to start seeing import './file.js' and look for file.ts there? And linters etc.? Even if the whole ecosystem adapts to this, it's bad UX to ask users to write paths to files that don't exist. That includes import './file' for non-CommonJS contexts.
I understand that changing the model of transpileModule, even if only when this option is enabled, is a painful change to make with ecosystem repercussions. But that seems like pain for the TypeScript development team and possibly authors of integrations like Babel's TypeScript plugin, but little if any pain for end users.

For Node, but for IDEs? Is Code going to start seeing import './file.js' and look for file.ts there?
Yes because TypeScript is literally the thing that powers the TS/JS IDE functionality in VS Code.
And if you have TypeScript editor functionality that isn't powered by...TypeScript, well, resolving that way is how TypeScript works so you'd have a bug if it wasn't implemented that way.
// <root>/apple.ts
export const apple = 'apple'
// <root>/index.ts
import { apple } from './apple'
console.log(apple)
Am I incorrect in assuming that when transpileModule is called for index.ts it will go to disk and find ./apple (which it will find as <root>/apple.ts) prior to emitting index.js? If that assumption is correct, then I think the argument being made is that if that specific resolution for apple, encountered while compiling index.ts, resolves to a TS file, then, and only then, would index.js be emitted as:
import { apple } from './apple.js'
console.log(apple)
If that assumption is incorrect, how does transpileModule manage to actually type check anything, since the contents of ./apple are required for type checking index.ts?
As a user, the scenario above is really the only time I want TS to rewrite imports for me. It is in this situation specifically that TS has more information than I do as the developer, and thus can more accurately figure out what the correct import URL should be. If I am importing something that resolves to a .d.ts, it means that the extension of the runtime import is known or knowable to me at dev time, and I can correctly do import { banana } from './banana.mjs' or import { banana } from './banana.js' or whatever.

The problem with writing import { apple } from './apple.js' when there is no JS on disk is that I don't yet know what the final extension on disk will be, because TS hasn't written it yet! Will TS write .mjs or .js or no extension, or will this be run in ts-node, where it will never write to disk and instead will import .ts? This is a question I cannot answer at dev time if I am using multiple tsconfig.json files and/or running inside ts-node or Deno or whatever environment.
huge pot of gold on the other side of the rainbow
@RyanCavanaugh the pot of gold is the possibility to download any TS module from any place in the world via URL, without any dependency like npm, rollup and so on. Maybe even directly into the browser at one point.
Don't you have the vision for the near future where the compilation of TS happens in a background process in a 1/1000s? Well, I am not sure if this will ever be doable this fast, but even the possibility looks like gold to me!
Having individual file emit depend on disk/internet content would be a big barrier to that goal, not something that helps it in any way.
How @weswigham? You can cache content.
That's still miles worse than not needing to fetch the content at all. "Just cache it" does not magically make it free; it just amortizes the cost over many similar requests.
Requiring dependency information to solve emit is just as bad as type-directed emit, imo. And I can talk from experience here, as while we take the position that we don't rewrite import specifiers and always have, we _do_ rewrite triple-slash references (which are largely a legacy feature, which modules should never need to use) to referenced declaration files in declaration emit, and it is the _biggest_ pain, and has resulted in _hundreds_ of bug reports (many of them performance related!), and inspired features like types directives expressly designed to _avoid_ it (which are mostly all that's used in DT nowadays), since it turns out there's absolutely no real-world intuition as to what the compiler should (or should not) do to fix up a path, despite what anyone in this thread has said or dreamed up, since in the real world, output environments are actually hideously complicated. We have literally received conflicting bug reports asking for directly opposed behavior with respect to triple-slash reference paths. There is no good behavior here. Attempting to edit paths at compile time should be avoided _at all costs_.
I wish there were an emoticon expressing thank you, so that I would not have to waste space in this thread thanking you, @weswigham. So let me thank you this way for this great reply and the very good reasons (although my stubborn intuition still disagrees with you).
So what about my initial PR content? Don't try to smartly resolve the correct file extension; add it dumbly and tell the developer the rule for adding the trailing extension. Developers should ensure correctness themselves.
@Jack-Works I think the overall sentiment is that always applying the extension change is likely to still have its own share of surprises which we're not convinced would be worth it:
This is the thing - a mode where you always apply a file extension change is wrong because it can't resolve.
What is a file extension, anyway? Is it unfathomable to see a repo with filenames parser.ts, parser.json.ts, and parser.js.ts? What does an import of parser.js or parser.json mean in that setting? Is the answer so obvious that no one could ever be surprised? Can you design this in a way that adding a new file to disk never changes the output of "unrelated" files? How does this work in hosted scenarios where TS can't see what's on disk?

The current rule could not possibly be simpler, and cannot possibly break a working program. You'd have to convince us that there's a huge pot of gold on the other side of the rainbow to give that up.