TypeScript Version: 3.4.0
Search Terms: slow styled-components
Code
```ts
import styled from "styled-components";
declare const arr: TemplateStringsArray;
const k = styled.div(arr);
```
Expected behavior: Type instantiation count < ~10k, check time < 0.1s
Actual behavior: Instantiation count ~47k, check time ~2.5s
tsconfig.json
```jsonc
{
  "compilerOptions": {
    /* Basic Options */
    "target": "esnext", /* Specify ECMAScript target version: 'ES3' (default), 'ES5', 'ES2015', 'ES2016', 'ES2017', 'ES2018', 'ES2019' or 'ESNEXT'. */
    "module": "commonjs", /* Specify module code generation: 'none', 'commonjs', 'amd', 'system', 'umd', 'es2015', or 'ESNext'. */
    "skipLibCheck": true,
    "strict": true /* Enable all strict type-checking options. */
  }
}
```
Investigation notes
| Version | Type Count | Check Time | Notes |
|---------|------------|------------|-------|
| 3.3.4 | 1425 | 0.04s | -- |
| a9ad94ab3~1 | 1425 | 0.02s | -- |
| a9ad94ab3 (#29437) | 8432 | 0.39s | 6x type count, 20x check time |
| b7881a2~1 | 8432 | 0.42s | -- |
| b7881a2 (#30592) | 59173 | 2.74s | 7x type count, 7x check time |
| 3.4.2 | 59173 | 2.61s | -- |
| 3.5-dev.20190407 | 47271 | 0.85s | 20% type reduction, 68% check time improvement |
| 3.4 w/ DT patch | 477 | 0.02s | https://github.com/DefinitelyTyped/DefinitelyTyped/pull/34564 |
So, the change that introduced the perf issue is a change that caused us to instantiate the `trueType` of a conditional a bit more eagerly (previously in that example we wouldn't have bothered, unlike in many others). This in and of itself shouldn't be an issue - the instantiation should not be anywhere near this expensive (and it is cached, so the effect on a single-usage example like this is outsized). All that'd be needed was a slightly different argument type before and we'd have been off to the races all the same. So the question that leads to is this: "Why is that instantiation expensive?"
From the declaration side, it's obvious that the instantiation becomes expensive because `JSX.IntrinsicElements` has a lot of members and each of those members has a lot of props - cutting down on these or avoiding mentioning them in the types helps. While true, this observation isn't useful - we _want_ to be able to represent all these components and props.
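For a sense of the scale involved, here's a small sketch (assuming `@types/react` is installed so the global `JSX` namespace is declared; these lines are not part of the repro above):

```ts
// Assumes @types/react: every intrinsic tag name is a member of
// JSX.IntrinsicElements, and each member's props type is itself large.
type Tags = keyof JSX.IntrinsicElements;      // "a" | "abbr" | "address" | ... (well over a hundred tags)
type DivProps = JSX.IntrinsicElements["div"]; // hundreds of DOM attributes and event handlers
```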
From the implementation side, these types are expensive because of how we handle _distribution_. We have this thing called a "distributive constraint" - there are a number of issues open w.r.t. its unsoundness, but on _top_ of that, the distributive constraint works as a constraint-based work multiplier. Rather than lazily deferring work, the distributive constraint eagerly partially evaluates conditionals in an attempt to "narrow" them while looking at constraints. It does this very inefficiently - every instantiation has to go back to the root conditional and re-instantiate and re-narrow; there is no "progressive instantiation" as most other types (e.g., indexed accesses) have (instead the mapper is used as a transform which we apply over and over onto a different base `checkType`). This process ends up being hideously expensive for distributions over large unions (as we repeat large chunks of the instantiation work up to that point for every union member). Similarly, within `getLowerBoundOfKeyType` we have `getLowerBoundOfConditionalType`, which has _the same problem_ - it takes the check type to its constraint and reevaluates the conditional, causing a bunch of work to be kicked off (this becomes the biggest issue once you remove the distributive constraint from the picture).
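To make the distribution itself concrete (a minimal illustration of the language-level behavior, not of the checker internals described above):

```ts
// A bare type parameter in the check position makes the conditional
// distributive: it is evaluated once per member of the union it is
// instantiated with.
type OnlyStrings<T> = T extends string ? T : never;

// Resolves as OnlyStrings<"a"> | OnlyStrings<1> | OnlyStrings<"b">
type R = OnlyStrings<"a" | 1 | "b">; // "a" | "b"

// When the union in the check position is as large as
// keyof JSX.IntrinsicElements, that per-member evaluation is where the
// instantiation work multiplies.
```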
In short, the pattern we have of using a conditional type's `root` and re-instantiating part of the `root` with the current mapper (potentially after layering on _another_ mapper to map to a specific union member of the constraint) is an exponential factory of work. In every case, we make a new mapper for the specific `checkType`, combine it with the existing non-distributive mapper, and create an instantiation. This new mapper 1. still repeats all the old mapper's work, since it's starting from the conditional's `root`, and 2. re-does this work for every member we create a new mapper for. And in doing so, we manufacture new type identities for any types not cached on structure, e.g. objects and (prior to their fix) substitutions, none of the work for which can be pulled from any caches.
If we could find a way to safely reduce or remove usages of `getConstraintOfDistributiveConditionalType` and `getLowerBoundOfConditionalType`, combined with correct substitution type caching, we could reclaim 80% of the additional time spent by 3.4 on the original example (i.e., 3.4s on my machine down to 0.7s on `release-3.4` by replacing or removing those). The key bit here is removing this work _safely_ - we added all of these changes to fix a number of bugs and regressions, and despite their flaws, they _are_ useful.
So, this leads me to question our current construction of distributive conditionals and its inefficiency: why are they as they are? Simply because the current formulation makes it easy to track substitutions as they occur - we don't need to worry about remapping any instantiables within the branches. However, this seems unfortunate - we'd really like to be able to say, when `T extends U ? T : never` becomes `A | B extends U`, that we can _immediately_ distribute. The reason we _don't_ is because we've not made explicit what a distribution is: a distribution operation is the creation of a new type parameter (similar to an existential) extending the original. We avoid actually making this, but the method of construction (finding the root type parameter and pre-mapping it) should make this obvious. If we _explicitly_ do this, we can read `T extends U ? T : never` as `T extends U ? (T' extends T) : never`, which upon instantiation can become `(T' extends A | B) extends U ? (T'' extends T' extends A | B) : never`, which then, when we're going to the constraint (or instantiated with the constraint), allows us to make `A extends U ? (A' extends A) : never | B extends U ? (B' extends B) : never` via a union of multiple instantiations. `A` and `B` themselves can still be type parameters for which this process can continue to occur, allowing us to repeatedly instantiate without needing to reapply from the root. I think that'd save _some_ work, but not all of it - outright avoiding checking a "distributive constraint" at all would be better (which, IMO, we should be able to have - the apparent type of `T extends string ? T : undefined` is `T & string` and we can use that correctly, distributive or not; the issue is with `T extends string ? never : T` - we should have `T & not string`, which we can't currently represent, and for which we use the distributive constraint as a stand-in!).
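As a concrete instance of that last case (a sketch of the user-facing pattern, not of a proposed `not` type):

```ts
// When the type parameter appears in the false branch, there is no
// intersection like "T & not string" to describe the result, so the checker
// falls back on the distributive constraint discussed above.
type NonStringsOf<T> = T extends string ? never : T;

// Distribution still produces the expected result for concrete unions:
type B = NonStringsOf<"a" | 1 | true>; // 1 | true
```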
The notes from @RyanCavanaugh seem to imply that the fix in DT on `@types/styled-components` at `4.1.14` is a viable solution for the performance regression against 3.4, but I'm not seeing a similar speedup in a real-world project.

Using the latest `@types/styled-components` at `4.1.14`, I still see about 2x slower compile times on TypeScript `3.4.3` versus `3.3.4000`.

Aside from raw `tsc` compile times, when using TypeScript `3.4.3` the VSCode code helper is still racing at 100% CPU and error information feedback is painfully slow to update. Comparatively, the VSCode integration is workable with `3.3.4000`, but not what it used to be...

When testing with TypeScript `3.5.0-dev.20190412`, it's somewhere in between the 3.3 and 3.4 performance, but still slower than `3.3.4000` by a noticeable margin.
@alex-sherwin those numbers were with `--skipLibCheck`; batch compilation without that flag on is indeed about where you describe it
Here's my benchmark against different TypeScript versions.
tsconfig.json
```jsonc
{
  "compilerOptions": {
    "target": "esnext",
    "module": "esnext",
    "moduleResolution": "node",
    "lib": ["esnext", "dom", "dom.iterable"],
    "allowJs": false,
    "allowSyntheticDefaultImports": true,
    "esModuleInterop": false,
    "isolatedModules": true,
    "jsx": "preserve",
    "noEmit": true,
    "skipLibCheck": true,
    "strict": true,
    "baseUrl": ".",
    "paths": {
      // maps for webpack alias
      "Apollo/*": ["src/apollo/*"],
      "Components/*": ["src/components/*"],
      "Utils/*": ["src/utils/*"],
      "Admin/*": ["src/app/admin/*"],
      // custom types for modules
      "*": ["src/@types/*"]
    }
  },
  "include": ["src"]
}
```
Project Dependencies (reduced for clarity)
```json
{
  "scripts": {
    "typecheck": "tsc --noEmit"
  },
  "devDependencies": {
    "@babel/plugin-transform-runtime": "^7.4.4",
    "@commitlint/cli": "^7.5.2",
    "@commitlint/config-conventional": "^7.5.0",
    "@wpackio/eslint-config": "^3.2.0",
    "@wpackio/scripts": "^3.3.0",
    "apollo": "^2.9.0",
    "babel-plugin-import": "^1.11.0",
    "babel-plugin-styled-components": "^1.10.0",
    "conventional-changelog-cli": "^2.0.17",
    "cross-env": "^5.2.0",
    "cssnano": "^4.1.10",
    "eslint": "^5.16.0",
    "eslint-plugin-react-hooks": "^1.6.0",
    "fork-ts-checker-webpack-plugin": "^1.3.0",
    "husky": "^2.2.0",
    "less": "^3.9.0",
    "lint-staged": "^8.1.5",
    "node-sass": "^4.12.0",
    "prettier": "^1.17.0",
    "typescript": "3.3",
    "use-yarn": "^2.2.0",
    "webpack-bundle-analyzer": "^3.3.2"
  },
  "dependencies": {
    "@babel/runtime-corejs3": "^7.4.4",
    "@rehooks/component-size": "^1.0.2",
    "@types/graphql": "^14.2.0",
    "@types/react": "^16.8.15",
    "@types/react-dom": "^16.8.4",
    "@types/react-router-dom": "^4.3.2",
    "@types/styled-components": "^4.1.14",
    "@wpackio/entrypoint": "^3.2.0",
    "antd": "^3.16.6",
    "apollo-boost": "^0.3.1",
    "core-js": "^3.0.1",
    "graphql": "^14.2.1",
    "react": "^16.8.6",
    "react-apollo": "^2.5.5",
    "react-content-loader": "^4.2.1",
    "react-dom": "^16.8.6",
    "react-router-dom": "^5.0.0",
    "styled-components": "^4.2.0"
  }
}
```
| TypeScript Version | @types/styled-components | Perf |
| -- | -- | -- |
| 3.4.5 | 4.1.14 | 11.16s user 0.56s system 154% cpu 7.611 total |
| 3.5.0-dev.20190502 | 4.1.14 | 8.66s user 0.53s system 168% cpu 5.452 total |
| 3.3.4000 | 4.1.14 | 6.79s user 0.39s system 171% cpu 4.180 total |
@weswigham I see you marked this as Fixed, but I don't see the actual commit with a fix. Would you mind providing some info?
@havenchyk Hey man, I know you're not asking me, but the issue seems to have been fixed with the release of TypeScript 3.5.0-rc.
I believe it's discussed a bit here: https://devblogs.microsoft.com/typescript/announcing-typescript-3-5-rc/.
Yeah, I see a note about it, thanks for sharing @maxmathews! TBH, locally I still have better results with the 3.3 version compared to 3.5 RC, but it's not as bad as it was for 3.4. So thanks 🙏
@havenchyk no problem!
Yea I will say also, it's still very slow for me.

> Yea I will say also, it's still very slow for me
Y'all have a repro and some numbers we can look at?
How do you benchmark? I see visible lag in VSCode of a few seconds that I didn't have previously. Things in the code have changed since then, so I can't know whether it's the compiler or new code, but if I correct an error the red error lines actually stay in the wrong place and the entire editor feels sluggish for a few seconds. Autocomplete is also delayed ~2 seconds most times.
Happy to run some comparison, not sure best way to do it.
If the code's public, just linking it and some repeatable instructions'd be best. Failing that, if you launch vscode from the command line with `TSS_DEBUG` set to a port number, you can then connect the Chrome debugger to the language server on that port and record a performance trace of the action in question (which can then be saved and shared) - one trace of 3.4 and one of 3.5 would be good.
I'm running `TSS_DEBUG=5002 code-insiders .` on build `Version 1.36.0-insider (1.36.0-insider)`, and when I go to `about:inspect` in Chrome I don't see anything there.
I do think this may still be related to this slowdown: https://github.com/microsoft/TypeScript/issues/29949
Not to bitch, but perhaps it helps: my current setup is quite borked between https://github.com/microsoft/vscode/issues/25312 and this. I'm restarting the TS server many times a day, waiting 20+ seconds for it to load, and when it does run it's now oftentimes 5+ seconds delayed (on a 2.9 i9 quad core, no less). I'd love to be of any help to get some of this figured out!