@DanTup
dart_analysis_server --lsp emits a very long response for textDocument/completion.
This response makes the editor freeze.
The log file is here.
lsp-log.txt
The target Dart file is this one: https://github.com/zesage/flutter_compass/blob/master/lib/main.dart
Which editor are you using? I recently added completions for symbols that aren't yet imported (as exists in other editors), which will definitely have made the set of results much larger. The impact of this may depend on the editor.
You can disable this feature by setting suggestFromUnimportedLibraries: false in your initializationOptions (how you configure this will depend on your editor).
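For reference, here's a minimal sketch of where that option goes, written in TypeScript as a generic LSP client might send it (the option name is real, but the surrounding wiring is illustrative and will differ per editor/client):

```typescript
// Illustrative only: the shape of the initializationOptions an LSP client
// would send to the Dart analysis server in the `initialize` request.
// How you get this object into your client differs per editor.
const initializationOptions = {
  // Disable suggestions for symbols from libraries that aren't imported yet.
  suggestFromUnimportedLibraries: false,
};

// With vscode-languageclient this would typically be set on
// LanguageClientOptions.initializationOptions; lsp-mode, vim-lsp, etc.
// each have their own configuration hook for the same field.
console.log(JSON.stringify({ initializationOptions }, null, 2));
```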
@bwilkerson @natebosch I made this default to enabled because I thought it was good functionality and there were no perf issues in VS Code, but I now wonder whether it may be better to make this opt-in? It means some users might not know of its existence (unless the editor exposes an obvious setting - which we probably would in VS Code - and defaults it to true). WDYT?
IMO defaulting to true makes sense
That would be my preference, though I don't want it to cause issues that are frustrating to users. Do you know what the performance is like in Vim? It is a lot of data, so it might depend a lot on how the particular editor's LSP client scales with the number of items.
I just remembered from #37011 that @itome is using emacs. I had a quick search and found this from just yesterday:
https://www.reddit.com/r/emacs/comments/brc05y/is_lspmode_too_slow_to_use_for_anyone_else/
It sounds like it may be similar, and it mentions changing gc-cons-threshold to adjust GC. There's also this comment:
AFAIK there will be some speed improvements with native json parsing added to newer Emacsen.
So I think this is probably an Emacs performance issue, and using the setting above is a reasonable workaround in the meantime.
@itome let me know if that sounds reasonable and whether the flag helps with the issue. Thanks!
Yes, I'm using Emacs. I checked detailed CPU usage with the profiler.
When dart_analysis_server is active, json-read-from-string takes 20% of CPU usage and Automatic GC takes 38% (while both are less than 1% when using the Rust language server).
Setting gc-cons-threshold to 10000000000 makes Automatic GC 0% (but maybe this is not good because Emacs can't release resources).
And native JSON support is a new feature of Emacs 27 (while the current stable version is 26.2).
Maybe this has to be fixed on the client side.
I opened an issue in emacs-lsp: https://github.com/emacs-lsp/lsp-mode/issues/851
Thanks for the info. It sounds like there's nothing to do on the server then. We have the initialisation option mentioned above that allows you to turn off this feature, but otherwise it sounds like you have a workaround (increasing GC threshold - maybe you could set it to something a little lower if you're worried about not having GC, but high enough to not cause this issue) and Emacs27 may improve things without it.
Let me know if you think this isn't the case, or if you hit any other issues. Thanks!
@itome

"Setting gc-cons-threshold to 10000000000 makes Automatic GC 0% (but maybe this is not good because Emacs can't release resources)"

FTR, 10000000000 seems to be too much (~9 GB).
@DanTup Thank you for your polite reply and sorry for the confusion.
@yyoncho Thanks, it was just for testing the JSON parser speed without GC time.
Maybe 10000000 is enough, I think.
@DanTup

"Thanks for the info. It sounds like there's nothing to do on the server then."

What you can do on the server is remove the "documentation" from CompletionItem and support completionItem/resolve; this will probably cut around 80% of the size of the textDocument/completion response.
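Roughly the idea, sketched in TypeScript on top of vscode-languageserver (the symbol data and wiring here are made up for illustration; this is not the actual dart_analysis_server code):

```typescript
import {
  createConnection,
  ProposedFeatures,
  CompletionItem,
  CompletionItemKind,
  InitializeResult,
  TextDocumentSyncKind,
} from 'vscode-languageserver/node';

const connection = createConnection(ProposedFeatures.all);

// Hypothetical stand-in for the server's symbol index; the real analysis
// server computes this from the analyzed libraries.
const allSymbols = [
  { name: 'Compass', docs: 'A long dartdoc string...' },
  { name: 'CompassEvent', docs: 'Another long dartdoc string...' },
];

connection.onInitialize((): InitializeResult => ({
  capabilities: {
    textDocumentSync: TextDocumentSyncKind.Incremental,
    // resolveProvider tells the client it may call completionItem/resolve.
    completionProvider: { resolveProvider: true },
  },
}));

// Keep textDocument/completion small: no documentation, just a small `data`
// value that lets the server find the item again during resolve.
connection.onCompletion((): CompletionItem[] =>
  allSymbols.map((symbol, index) => ({
    label: symbol.name,
    kind: CompletionItemKind.Class,
    data: index,
  })));

// The client calls completionItem/resolve only for the item it is about to
// show in detail, so the heavy documentation travels one item at a time.
connection.onCompletionResolve((item: CompletionItem): CompletionItem => {
  item.documentation = allSymbols[item.data as number].docs;
  return item;
});

connection.listen();
```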
@yyoncho good idea! I've already added support for completionItem/resolve to support this feature, but docs are currently only available in the initial request. I'll make a note to have a look at what's involved in changing this soon :-)
@yyoncho @itome FYI I just landed changes that reduce the JSON significantly (removing docs, edits, and all values that were optional and explicitly set to the default). In my testing a full request/response in VS Code for 8.5k items went from almost 3 seconds to around 300ms (and from around 8MB JSON to around 2MB).
Hopefully this will improve things somewhat for Emacs if you want to try re-enabling this. You'll need to wait for tomorrow's nightly (for Dart) or for it to roll into Flutter to get the changes though. The related commits are shown tagged at the bottom of https://github.com/dart-lang/sdk/issues/37163.
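For anyone curious, the "drop optional values that equal the default" part works roughly like this (a TypeScript sketch with example field names and defaults, not the actual server code):

```typescript
// Illustrative only: when serializing an item, skip optional fields whose
// value equals the protocol default, because the client assumes the default
// anyway. Field names and defaults here are examples.
interface WireCompletionItem {
  label: string;
  deprecated?: boolean; // protocol default: false
  preselect?: boolean;  // protocol default: false
  detail?: string;      // optional; omit when empty
}

function toWireJson(item: Required<WireCompletionItem>): WireCompletionItem {
  const out: WireCompletionItem = { label: item.label };
  if (item.deprecated) out.deprecated = true; // omitted when false (the default)
  if (item.preselect) out.preselect = true;   // omitted when false (the default)
  if (item.detail !== '') out.detail = item.detail;
  return out;
}

// With ~8.5k items per response, omitting defaulted fields noticeably
// shrinks the JSON payload.
console.log(JSON.stringify(toWireJson({
  label: 'Compass',
  deprecated: false,
  preselect: false,
  detail: '',
})));
// -> {"label":"Compass"}
```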
@DanTup Very good! I'll try it tomorrow and see if the completion speed has improved.
@DanTup I tried the nightly build! It seems usable enough! Thanks.
Excellent :-) If you do hit any other problems with the LSP server, do open issues here and I'll take a look :-)