No hard data, but running TSLint on a project's worth of TypeScript files has been feeling a little slow to me as of late. Would be worth looking into and benchmarking to see if there's a particular code chokepoint or rule that is slowing things down. Also would be neat to see how performance changes as the TypeScript language services change.
@JKillian anecdotally, I've noticed this as well.
I generated a quick flamegraph from running TSLint on its own src with its own configuration. The first point of note is that `no-unused-variable` and, right after it, `no-use-before-declare` take up about the same runtime as all the other enabled rules combined. In this case, the recursive nature of walking the AST makes it a little hard to break the flamegraph down at a more granular level than this.
However, from recall and confirmed by a quick grep, these are the only two rules that use the `ts.LanguageService` APIs: both call the `getDocumentHighlights` method to find the other locations where a variable is used.
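Roughly, the per-declaration check looks something like this (a minimal sketch of the pattern, not the actual rule source; `isVariableUnused` is a made-up helper):

```ts
import * as ts from "typescript";

// Sketch: a variable is considered unused when the language service finds no
// highlight other than the declaration itself.
function isVariableUnused(
    languageService: ts.LanguageService,
    fileName: string,
    nameNode: ts.Identifier,
): boolean {
    const highlights = languageService.getDocumentHighlights(
        fileName,
        nameNode.getStart(), // position of the declared identifier
        [fileName],          // only search within the current file
    );
    if (highlights === undefined) {
        return true;
    }
    const spanCount = highlights.reduce((sum, h) => sum + h.highlightSpans.length, 0);
    return spanCount <= 1; // the single remaining span is the declaration itself
}
```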

And indeed, it does seem that most of the runtime is spent in getDocumentHighlights (the pink nodes):

Here's a breakdown of one of those getDocumentHighlights calls:

(Note: these images are from a smaller profile than in my first comment)
So thinking about this, it makes sense that it's slow, right? The TypeScript language services are not really designed for this use case: these rules check that _every_ variable is used. I haven't verified it, but I'm pretty sure the language services do not cache variable usage, so essentially each variable check traverses the subset of the AST in which that variable can be in scope.
The solution is to not use the language services in cases where all variables are inspected. Instead, there needs to be a block-scope-aware walker that accumulates variables.
Basically, build up a cache; but because of 'use before declare', it will need to process the whole file and then match up unresolved variables at the end.
Or my premise is totally wrong :P
Anyhow, I was going to rewrite `no-unused-variable` in this way to see how complex it is... doing so should also fix the fact that it doesn't detect unused imports in all situations... I just wanted to see if anybody has more insight on this than me...
It might also be a candidate for shared data across rules that want to inspect variable usage. One step at a time though...
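To make that a bit more concrete, here is a minimal sketch of the kind of walker I mean (the names and structure are illustrative only; it ignores parameters, imports, destructuring, class members, and shadowing subtleties):

```ts
import * as ts from "typescript";

interface Scope {
    declared: Map<string, ts.Identifier>; // variable name -> declaring identifier
    used: Set<string>;                    // names referenced within this scope
}

// Single-pass, block-scope-aware collector. Usages are only matched against
// declarations when a scope is closed, so "use before declare" within a scope
// is handled without a second traversal.
function collectUnusedVariables(sourceFile: ts.SourceFile): ts.Identifier[] {
    const unused: ts.Identifier[] = [];
    const scopes: Scope[] = [];

    const withScope = (walk: () => void) => {
        scopes.push({ declared: new Map(), used: new Set() });
        walk();
        const scope = scopes.pop()!;
        for (const [name, identifier] of scope.declared) {
            if (!scope.used.has(name)) {
                unused.push(identifier);
            }
        }
        // Names not declared here may resolve in an enclosing scope.
        const parent = scopes[scopes.length - 1];
        if (parent !== undefined) {
            for (const name of scope.used) {
                if (!scope.declared.has(name)) {
                    parent.used.add(name);
                }
            }
        }
    };

    const visit = (node: ts.Node): void => {
        if (ts.isVariableDeclaration(node) && ts.isIdentifier(node.name)) {
            scopes[scopes.length - 1].declared.set(node.name.text, node.name);
        } else if (ts.isIdentifier(node) && !ts.isVariableDeclaration(node.parent)) {
            // Naively treat any other identifier as a potential usage.
            scopes[scopes.length - 1].used.add(node.text);
        }
        if (ts.isBlock(node) || ts.isFunctionLike(node)) {
            withScope(() => ts.forEachChild(node, visit));
        } else {
            ts.forEachChild(node, visit);
        }
    };

    withScope(() => ts.forEachChild(sourceFile, visit));
    return unused;
}
```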
@lowkay yeah that sounds like a fair assessment and reasonable approach forward. Let us know how it works out!
So I have it matching variables in scope and ruling out some declarations based on simple usages (like in binary expressions or return statements). However, I need to extend this approach to class members, as there are a bunch of edge cases that can't use simple name matching, e.g. inside a class method:
```ts
someMethod(): void {
    const me = this;
    me.property = "some value";
}
```
To catch this type of usage, you have to know the type of `me` and match against that property. I'm sure there are a bunch of other edge cases; I want to explore the main ones and compare the performance before delving into all the nooks.
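One hypothetical way to handle that case is to fall back on the type checker for member accesses instead of matching on names (a sketch only; `markMemberUsage` and `usedMembers` are made-up names):

```ts
import * as ts from "typescript";

// Record `me.property` as a usage of the underlying class member by asking the
// type checker which symbol the property access resolves to.
function markMemberUsage(
    checker: ts.TypeChecker,
    access: ts.PropertyAccessExpression,
    usedMembers: Set<ts.Symbol>,
): void {
    const symbol = checker.getSymbolAtLocation(access.name);
    if (symbol !== undefined) {
        usedMembers.add(symbol);
    }
}
```

A class member would then count as unused if its symbol never lands in `usedMembers`, whether it was reached through `this`, an alias like `me`, or another instance. The catch is that this needs a full `ts.TypeChecker`, which has its own cost.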
I also took a look at the language services implementation, and while it is complicated, it tries its best to figure out a subset of the AST to explore for usages. So my premise about the cost seems correct: it has to explore the AST for every declared variable, which makes it pretty terrible for this use case.
Any plans to integrate profiling into the build system so that regular contributors can see what effect they're having?
It wouldn't make sense to run in CircleCI, but a local version running as a one-off against a large code base would help determine the effect of a change.
Regarding @andy-hanson's comment - I'd love to have some sort of `--performance` flag, either in the CLI or as a separate tool, that calls `process.hrtime` before and after each rule run to see how long each rule takes.
Thoughts?
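A minimal sketch of the bookkeeping such a flag could do (where it would hook into the linter is left open; only the `process.hrtime` usage is the point):

```ts
const ruleTimings = new Map<string, number>();

// Wrap a rule's apply call and accumulate wall-clock time per rule name.
function timeRule<T>(ruleName: string, applyRule: () => T): T {
    const start = process.hrtime();
    const result = applyRule();
    const [seconds, nanoseconds] = process.hrtime(start);
    const elapsedMs = seconds * 1e3 + nanoseconds / 1e6;
    ruleTimings.set(ruleName, (ruleTimings.get(ruleName) || 0) + elapsedMs);
    return result;
}

// After the run, print the slowest rules first.
function reportRuleTimings(): void {
    const sorted = [...ruleTimings.entries()].sort(([, a], [, b]) => b - a);
    for (const [ruleName, ms] of sorted) {
        console.log(`${ruleName}: ${ms.toFixed(1)}ms`);
    }
}
```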
@JoshuaKGoldberg You can use the built-in profiler in Node. Just run `node --inspect --debug-brk path/to/tslint-cli.js` and use the profiler in Chrome DevTools. You can then sort by function name and find out how long `MyRule.apply` takes. You can also generate a flame graph there.
Note: the profiler does not stop recording automatically when tslint is done, so you have to click on stop yourself.
Yes, everybody on this thread knows how to do that, but for newer folks who don't (or just for convenience) it might be nice to have a utility for it.
I've been looking for these comments on how to do it. It would be nice to include a section in the docs on how to measure performance.
@jmlopez-rod I can't recall exactly how I made those flamegraphs, but I believe it was using the general process described here: https://nodejs.org/en/blog/uncategorized/profiling-node-js/
Closing in favor of the more actionable issue #2522.