Note that there is no significant CPU load. It looks like there is an intentional delay somewhere in the Roslyn code:
Interestingly, the delay is much longer on the preview tab; if I double-click on documents, they are colored faster (but still not fast enough):
The guidelines appear faster in both cases, but still slowly.
All of these intentional delays before something happens are irritating and need to be fixed. It makes it feel like I'm using a 20-year-old computer. Don't devs normally have pretty powerful computers compared to normal users? Click an identifier, and you have to wait a long time before anything is highlighted. During that time, you can't press Ctrl+Shift+Up/Down
to move to the next reference. AFAIK, there are no options in the VS settings dialog that allow anyone to change these delays. And it's gotten much worse in VS2017.
Don't devs normally have pretty powerful computers compared to normal users? Click an identifier, and you have to wait a long time before something is highlighted.
It's not a matter of how powerful your computer is. It's an issue of how annoying/flashy the experience feels. We've tried out things like having reference highlighting show up immediately, and it's hugely distracting and annoying (we got lots of feedback about this). Many people don't like arrowing through the file and having lots of flashing happening while that goes on.
As such, we put a small delay on things like reference highlighting so that we can get a better sense of "is the user just navigating through this identifier, or are they navigating to this identifier?"
Note: the delay we have for semantic classification, for things like typing, is only 250 ms. That is, less than the time it takes to blink (~300 ms). This is also the delay we have when the caret moves to a specific location for reference highlighting.
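The "navigating through vs. navigating to" heuristic above is essentially a debounce: restart a short timer on every caret move and only compute highlights once the caret has rested. Here is a minimal illustrative sketch in Python (not Roslyn's actual tagger code; the class and callback names are invented):

```python
import threading

class ReferenceHighlighter:
    """Debounces caret movement: highlighting fires only after the caret rests."""

    def __init__(self, compute_highlights, delay_seconds=0.25):
        self.compute_highlights = compute_highlights  # callback doing the real work
        self.delay_seconds = delay_seconds            # ~250 ms, per the comment above
        self._timer = None

    def on_caret_moved(self, position):
        # Cancel any pending computation: the user is still moving through.
        if self._timer is not None:
            self._timer.cancel()
        # Restart the timer; it only fires if no further caret moves arrive.
        self._timer = threading.Timer(self.delay_seconds,
                                      self.compute_highlights, args=(position,))
        self._timer.start()
```

With this shape, rapid arrowing through identifiers produces no highlighting work at all; only the final resting position is ever computed.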
Now, there is still the time for the language to actually produce the results, which is added on top of that. We'd have to see what was going on with these files. From my recollection, we try to avoid as much synchronous/expensive code on document load as possible so that no users on any machines feel any excess delays in getting to their content. For something like the provisional tab this is even more important. The provisional tab may change very rapidly (for example, as a user navigates through something like the error list). As such, we want to ensure we can supply as many cycles to that task as possible and not bog anything down by doing extremely expensive computations (like semantic classification).
Other examples are things like the squiggle delay. We usually have computed the new errors quite quickly. But the experience is terrible if we show them as we get them. Pretty much the moment you start typing, until the point that the code is fixed, you just see red squiggles appearing/disappearing all around you as your code changes meaning character by character.
A lot of the delay is to allow people to focus on just writing their code, without feeling like their IDE is a yapping dog trying to get their attention every keystroke :) Finding the balance of not being too intrusive, while not feeling like results take too long to appear, is a careful balancing act.
The code on the gifs is C#, not F#.
So you find a 5-second delay (yes, exactly 5 seconds) before semantic colorization appears for a 25-line file acceptable? No CPU load at all.
250ms is too long (IMHO) before highlighting references. I have to wait for this artificial delay before Ctrl+Shift+Up/Down works. Why does this command even require highlighting to work? :)
When the text view is shown for the first time there should be no delays (classification, guide lines, whatever). The user wishes to read the code, so everything should be visible as fast as possible. I think this will help a lot so it doesn't feel so slow.
When the text view is shown for the first time there should be no delays (classification, guide lines, whatever). The user wishes to read the code, so everything should be visible as fast as possible. I think this will help a lot so it doesn't feel so slow.
👍
Why does this command even require highlighting to work?
Right now, because we use the same background tagging infrastructure to determine the match. Feel free to file a bug (I think we may already have one) that the system should just go compute the results if there are no available tags to use.
Speaking of squiggle delays, it takes over 2 seconds to show errors which have already been detected, or 7 (yes, seven!) if you're using the preview tab:
So, is this the right bug for that issue, or would you like another one?
250ms is too long (IMHO) before highlighting references.
We can consider some sort of knob if you think that. However, we've tried lower values, and there has been clear and consistent feedback that the UI feels far too noisy and unpleasant.
Speaking of squiggle delays, it takes over 2 seconds to show errors which have already been detected
Yes. The delay on squiggles is 1.5 seconds. We've experimented with lower values in the past, and there has been near revolt from people who have tried it because it makes the user experience so unpleasant. If you want to try it out, go build a version of Roslyn with squiggles set to no delay, and try to use the IDE :)
The critical thing to realize is that most of the time people are just typing and don't want distractions. During that time, popping up squiggles is super annoying and aggravating. That does mean though that if someone types something, and then pauses because they do want to see what issues they have, they have to wait 1.5 seconds.
We feel like that's an appropriate tradeoff. For the huge amount of time when you want distraction-free typing, the IDE gets out of your way. And anywhere from 50ms to 1500ms later (depending on the feature) you'll get the information. This is a balance of not being annoying, while also being able to get the information to people who want it in a reasonable amount of time.
7 (yes seven!) if you're using the preview tab:
It looks like preview tabs put a long delay on taggers (likely on the editor side). This is likely so that people can quickly do things like F8 through things like the error list. Feel free to file another bug on that. We can talk to the editor/platform and see what that is based on, and if it's appropriate or if it could be changed.
However, the 1.5 seconds for squiggles is very much by design, and is based on an enormous amount of real world usage, experimentation, and user involvement that showed that lower times were felt to be vastly unpleasant.
--
Note: I would be amenable to a PR that allows for these to be user configurable (through our standard Roslyn Options mechanism). Users who then said "I'm OK with a super aggressive experience" could change those delays down. (Of course, if they then found that things like squiggles showed up too rapidly, they would have only themselves to blame :) ).
@CyrusNajmabadi Did you look at the example GIF? There's no typing involved. It's about _opening_ a document, and errors that have already been detected by full solution analysis. Those errors should be instantaneous.
Note: what's also interesting is that people often feel one way about this stuff until they actually try it out in real scenarios. For example, if you set up the IDE with different delays for squiggles, and you ask people which squiggle delay they like, they'll often try it out and then just say they prefer the ones with shorter delays.
But then when they actually try to code and do real work, they immediately hate it. It's a matter of what mindset they're in. If they're in the mindset of "I want the squiggle information", then shorter is better. But the majority of the time, they're not in that mindset, and the squiggles are seriously annoying and intrusive on the experience. So in order to actually determine if something is better, you have to actually get people to use it for a while and get a true sense from them of whether it's a benefit or not. You can't just look at gifs/videos of squiggles appearing and ask "which is better?"
Did you look at the example GIF? There's no typing involved. It's about opening a document, and errors that have already been detected by full solution analysis. Those errors should be instantaneous.
We could consider changing things so that on open, we attempt to have no delay in the taggers. But it's likely that would cause some level of perf regression on our open file tests.
For example, just doing outlining when a file is opened significantly impacts opening time (due to the time needed to parse the file). Now, with squiggles, we could potentially do better if the data was already computed.
However, you mention "errors that have already been detected by full solution analysis". Full solution analysis is off for the majority of customers, so we don't actually have the squiggles when we open the file for most customers. This would just be an optimization for a small minority of users while not changing anything for the majority. That's not something I feel strongly about addressing, as I'd rather do things that benefit everyone first :)
Looking at the code, here's my hypothesis as to what's going on:
We kick off work to compute the initial set of tags. However, this work is not considered special in any way. As such, it's quite likely other events come in that tell us that something has changed, and that we should compute the new set of tags at some point in the future. Because of this, we stop the existing computation, with the belief that it's an unnecessary use of CPU since we have a new request to process later.
This means that the file likely gets initially tagged at the speed of the slowest request delay that it has registered for and hears about on open.
On top of this, as mentioned before, our taggers have a configured delay before they update the UI (to prevent the noisiness I mentioned earlier).
--
So, as a potential proposal, I could see us adding a bit somehow to track whether this is the "initial" tag request. The initial tag request would start immediately, be uncancellable (except for document-close), and would not delay updating the UI.
This would help address some of the delays on file open. Though it would not address:
In essence we'd be separating out a codepath for "open file tagging" versus "something changed tagging". I'd be willing to investigate this in the next couple of weeks, though I'm definitely concerned about what this might do to open-file perf.
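The proposed split could be sketched roughly as follows. This is an illustrative Python toy, not Roslyn's actual tagging infrastructure; all names are invented. The first request runs immediately with no UI delay and cannot be displaced by ordinary change events, while later requests are debounced and cancellable:

```python
import threading

class Tagger:
    """The first tag request runs immediately and is not cancelled by ordinary
    change events; subsequent requests are debounced and cancellable."""

    def __init__(self, compute_tags, ui_delay_seconds=0.25):
        self.compute_tags = compute_tags
        self.ui_delay_seconds = ui_delay_seconds
        self._initial_done = False
        self._pending = None

    def request_tags(self, snapshot):
        if not self._initial_done:
            # "Open file tagging": start right away, no UI delay, no cancellation.
            self._initial_done = True
            self.compute_tags(snapshot)
            return
        # "Something changed tagging": cancel the pending run and re-debounce.
        if self._pending is not None:
            self._pending.cancel()
        self._pending = threading.Timer(self.ui_delay_seconds,
                                        self.compute_tags, args=(snapshot,))
        self._pending.start()
```

The design point is exactly the tradeoff discussed above: document open gets results as fast as the language service can produce them, while the steady-state editing path keeps its distraction-reducing debounce.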
However, you mention "and errors that have already been detected by full solution analysis.". Full solution analysis is off for the majority of customers.
Fine, forget I mentioned it. The experience is exactly the same, and equally annoying for errors found during a build.
It looks like preview tabs put a long delay on taggers (likely on the editor side). This is likely so that people can quickly do things like F8 through things like the error list. Feel free to file another bug on that.
I'm reasonably confident _this_ bug was opened for precisely that: a 5 second delay for preview tabs.
Preview tab issues should be filed against the platform/editor. You can use VS Feedback or Developer Community to do that. Thanks!
I'm fine with this bug staying here though for the parts of the equation that Roslyn is involved in.
The initial tag request would start immediately, be uncancellable (except for document-close), and would not delay updating the UI.
This would be awesome. Thanks for the investigation!
Yup! I think that would be reasonable, as it would effectively be special-casing doc open. Note that it would not address @0xd4d's complaints about the speed when navigating/typing. However, I'm happy to also change things to be option-based. That way an advanced user could tweak these values if they don't mind the results.
@CyrusNajmabadi I think that's the most reasonable solution. Document open should try to colourise as quickly as possible, and an option in the settings to delay these tasks while navigating seems like a win for all involved.
The main problem for me is that it feels like VS2017 is slower than VS2015 when in fact it's not - there are just more delays built in. Artificial slowness is even more annoying than high-load slowness IMO.
This would be awesome. Thanks for the investigation!
Agreed, thank you @CyrusNajmabadi! It probably seems like we complain a lot, but we really appreciate your work :-)
The main problem for me is that it feels like VS2017 is slower than VS2015 when in fact it's not - there are just more delays built in.
This should not be the case (at least on the Roslyn side). We didn't add any delays. Indeed, we fixed a few bugs where some delays were being compounded unintentionally. If you can show examples, I'd like to investigate (ideally with other bugs).
Note: I can't speak for the rest of VS. It's possible that some delays were added. The thing that people need to be cognizant of is that we collect a huge amount of data regarding user-impactful delays, i.e. things that make a file actually open more slowly, or things that cause delays while typing. Sometimes, in order to address these, there might be a slight delay elsewhere; i.e. we're trading a perceptible pause/hang for an async delay. This can make things feel much 'smoother' at the cost of having certain things not appear right away.
For example, say that our 98th-percentile typing perf is <10ms, but 1% of all typing is above 50ms, and 1% is above 150ms. We might have made a change to get 99% under 10ms by taking something that was occasionally causing the hiccup and delaying it or making it more async. The more 'immediate' stuff you do, the more you can impact the experiences you want to be the smoothest. It's a difficult tradeoff, and we definitely drive a lot of this by seeing how bad it is across the millions of machines that we get perf-delay data from.
--
Note: the above is just information. It in no way means that we don't want to improve the scenarios here :) Just that it's rarely as simple as "we'll just make change X, and it will be a net positive for every user."
Also, to make people here feel better: one of the prime focuses for the core platform/editor team for the next few VS updates is pure perf around the editor/typing experiences. That team is working closely with us to revamp core parts of the VS platform itself to provide a system that can both be faster, and which we can more easily get perf data on to fix. As an example, we're investigating:
Entirely changing how command handlers work. Specifically, making it so that they are all independent (instead of chaining them).
that sounds a lot more straightforward to work with 😂
Yes/no. Lack of chaining means some things (like completion) are more difficult. Completion has expectations about how characters are processed especially so that:
all happen properly. But we're believing that really completion is the exception when it comes to command handlers, and most others can do without that control.
--
Also, many command handlers want to run before the typed character makes it into the buffer (think about the inline-rename cases where it doesn't want you to type bogus characters), whereas other handlers want to run after the character is in the buffer (and after other features may have processed it). It's surprisingly tricky, but we're optimistic we can get it right. Fortunately, we have a ton of tests around this space, as we hate when we screw things up here. So that will help a lot :)
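For readers unfamiliar with the "chaining" being discussed: the existing model resembles chain of responsibility, where each handler may consume a keystroke (as inline rename does with bogus characters) or pass it along before the character reaches the buffer. A toy Python sketch of just that pattern (the class, names, and behavior are invented for illustration; VS command handlers are far richer):

```python
class CommandHandler:
    """Chain-of-responsibility style handler: each handler may consume the
    keystroke or pass it along to the next handler in the chain."""

    def __init__(self, name, should_consume, next_handler=None):
        self.name = name
        self.should_consume = should_consume  # predicate over the typed char
        self.next_handler = next_handler

    def handle(self, char, buffer):
        if self.should_consume(char):
            # e.g. inline rename rejecting a bogus identifier character
            return f"{self.name} consumed {char!r}"
        if self.next_handler is not None:
            return self.next_handler.handle(char, buffer)
        buffer.append(char)  # default: the character lands in the buffer
        return "inserted"
```

The difficulty mentioned above is visible even in this toy: a handler that must act before the buffer changes and one that must act after cannot both be expressed as a flat, order-independent set without extra machinery.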
It seems to me that the way things are working in the Preview tab is optimized for clicking through multiple files, or cycling through selections with Go to All. Do we have data on how often people open something as a Preview before committing to a particular file so they could edit it?
I think semantic classification (though expensive) makes sense to kick off ASAP here because it's very much in line with _looking_ at code to get an idea of what it's doing rather than writing code in the editor.
Ok. Here's what I've been able to get:
Note: this is after VS has been primed a bit. Opening files can still have some amount of lag for semantic tags. This is because we literally have to do things like load in assemblies and do semantic analysis. This actually does take time no matter what. However, once those compilations and whatnot are cached, then operations like this can be super fast.
Here's what it's like when using the provisional tab:
As you can see, there are perceptible delays. Some of these are unavoidable. In order to get outlining/classification we need to do things like parse the file to get the syntax tree, and we need to do all the semantic classification. This actually takes time (even on beefy machines). The choices would be:
Note: 1 is not hypothetical. We had synchronous outlining in the past and it absolutely tanked performance for many users. We cannot idly add synchronous blocking paths to file-open. And if it's async, then you can observe this sort of thing.
For me, the basic colorization should be quick. For extras on top of that, like brace matching, block structuring, and semantic issues like errors, I'm OK with a slight delay. When typing, prioritize basic colorization; then, after a typing pause (significantly longer than average), trigger the extras like semantics.
@AdamSpeight2008 That's how things currently work for typing.
I have another PR that also improves things for the squiggles case: https://github.com/dotnet/roslyn/pull/18385
It seems to me this boils down to different categories, maybe even modes.
1. Writing (or editing) code. In this mode, you want _immediate_ typing response, and delays in the markup are expected. TODO: Make the delay time a user setting.
2. Maintaining (or refactoring). In this case you want the markup ASAP when opening the document (without affecting document-open and navigation speed). A document editing operation (not the current buggy VS behavior where it thinks a _view_ operation is an editing operation worthy of going on the undo stack) would immediately kick things into category/mode 1.
3. Opening a source file, where for all intents and purposes the user has a valid expectation that at least the error positions should be marked immediately upon opening the document. Build errors and full-solution-analysis results fall into this category.
Would this be a reasonably correct summary?
I've not read this whole thread, but I just wanted to mention that I think the default Roslyn delay in type-checking and presenting fresh red-squigglies in the current active file is too long for F#.
See my comment here: https://github.com/Microsoft/visualfsharp/issues/3000#issuecomment-299855523
@vasily-kirichenko @CyrusNajmabadi - I've noticed this problem too and I do feel like the delay is too long for F# programmers, particularly when writing data scripts. I'm not sure why - but I think the highly type-inferred nature of F# may mean that people rely on the presence of red-squigglies to give very quick error feedback.
TBH this also feels like a regression for F# developers because previous versions of Visual Studio gave much faster feedback. As you can see in the linked thread, this can also mean that programmers begin to blame something else, e.g. too many types or slow type checker or the like.
Note: the delay we have for semantic classification, for things like typing, is only 250 ms. That is, less than the time it takes to blink (~300 ms). This is also the delay we have when the caret moves to a specific location for reference highlighting.
Now, there is still the time for the language to actually produce the results, which is added on top of that.
I think this is the problem in the case of F#. The F# language service (and the compiler) is 1. not incremental at file scope (it rechecks the whole file every time it's changed) and 2. several times slower than C#'s. As a result, editor responsiveness feels OK in the C# editor and very slow in the F# editor, because the C# language service adds almost nothing on top of the artificial delays Roslyn inserts everywhere, but F#'s can add several seconds or even more.
I suggest adding a property that we can pass to Roslyn, like a "delay base", from which all the delays that Roslyn inserts are calculated. If we pass 0, no artificial delays should be added anywhere.
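The "delay base" idea could be as simple as a single scale factor applied to every per-feature delay. A minimal illustrative sketch in Python (the option name, feature names, and values here are assumptions for illustration; only the 250 ms and 1.5 s figures come from the discussion above, and the real Roslyn options mechanism differs):

```python
# Hypothetical per-feature delays in milliseconds at a delay base of 1.0.
# The 250 ms and 1500 ms figures match the numbers quoted earlier in this
# thread; the dictionary itself is invented for illustration.
FEATURE_DELAY_MS = {
    "semantic_classification": 250,
    "reference_highlighting": 250,
    "diagnostic_squiggles": 1500,
}

def effective_delay_ms(feature, delay_base=1.0):
    """Scale every artificial delay by a single host-supplied factor.

    A host like F# could pass delay_base=0 to disable all artificial delays,
    or a value > 1 to make the IDE even quieter.
    """
    return FEATURE_DELAY_MS[feature] * delay_base
```

One knob then tunes the whole experience: a language whose analysis is already slow can shrink or zero out the artificial delays rather than having them stack on top of its own latency.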
Just to add a bit more weight to this, @dsyme is 100% correct from an experience perspective that F# developers rely on immediate feedback on the squiggles when they're using type inference. I like @vasily-kirichenko's idea (or something like it) so that we can opt out of these delays.
Please file a bug on Roslyn to expose a way for F# to programmatically change these delays.
@CyrusNajmabadi Done. #19347