I've noticed RLS is consuming much more memory with all projects on Linux. My tests indicate this is not an issue with RLS itself.
Downstream: https://github.com/rust-lang/rls/issues/1188
Originally I reproduced the regression with stable vs nightly.
Memory usage measured by running `rls --cli`: run once until built, quit, re-run, and take the reading after the second build completes.
| Project | Linux nightly-2018-12-14 | Linux stable 1.31 |
|---|---|---|
| cargo init --bin | 80 MiB | 25 MiB |
| rust-random/rand | 500 MiB | 45 MiB |
| rust-lang/regex | 1.5 GiB | 81 MiB |
Trying to narrow down the regression, I bisected the nightly-channel RLS releases and found it appeared in nightly-2018-11-06.
Testing as above with the regex crate:
```
rustup run nightly-2018-11-03 rls --cli  ->  100 MiB
rustup run nightly-2018-11-06 rls --cli  ->  1.5 GiB
```

To rule out RLS itself, I took the current RLS code at 173be7729ae9ea41303fbd84f339f76554c9d9af, plus a small fix for older compilers:
```diff
diff --git src/build/rustc.rs src/build/rustc.rs
index be3c52c..9ebc925 100644
--- src/build/rustc.rs
+++ src/build/rustc.rs
@@ -301,6 +301,7 @@ impl<'a> CompilerCalls<'a> for RlsRustcCalls {
             edition: match state.session.edition() {
                 RustcEdition::Edition2015 => Edition::Edition2015,
                 RustcEdition::Edition2018 => Edition::Edition2018,
+                _ => panic!(),
             },
         };
         let files = fetch_input_files(state.session);
```
Again testing as above with the regex crate _(plus the RLS source checked out next door)_:
```
rustup run nightly-2018-11-03 cargo run --manifest-path ../rls/Cargo.toml --no-default-features --release -- --cli  ->  100 MiB
rustup run nightly-2018-11-06 cargo run --manifest-path ../rls/Cargo.toml --no-default-features --release -- --cli  ->  1.6 GiB
```

_Tests use regex d4b9419 on Arch Linux._
| Release | Idle memory usage |
|---|---|
| 1.34.2 | 740 MB |
| 1.35.0 | 1.3 GB |
| 1.36.0 | 1.0 GB |
| Release | Idle memory usage |
|---|---|
| 1.31.1 | 83 MB |
| 1.37.0 | 2.4 GB |
I can confirm this is caused by #55238, cc @alexcrichton.
I tried building rls with jemallocator, but it didn't seem to make a difference.
Thanks for the digging @alexheretic and @ehuss!
I suspect that this is caused by the jemalloc update, perhaps a bug in jemalloc (cc @gnzlbg if you've got knowledge on this). A quick fix for this issue, I believe, would be to link jemallocator to just the rustc binary rather than the librustc_driver crate. I'll send a PR for that.
I'm hoping that https://github.com/rust-lang/rust/pull/56986 will fix the regression for the RLS
With 1.32 coming out I thought I'd retest rls memory usage to provide some more recent numbers.
| Project | Linux stable 1.31 / nightly-2018-11-03 | Linux stable 1.32 |
|---|---|---|
| rust-lang/regex | 80 MiB | 1.4 GiB |
So this regression has not been addressed and will affect the next stable release.
@alexheretic I think it is fixed in 1.33. Maybe the fix should have been backported?
For reference, the fix is in #56986 and #57287.
I think it's not fixed in Nightly.
In my considerably sized private project, stable RLS takes 100–120 MiB when idle, goes up to 250 MiB after a code change, and drops back to roughly 100 MiB when it's done.
Nightly RLS takes over 900 MiB when idle; code edits increase its memory usage (roughly 0.1 GiB for small edits, more for bigger ones) and it does not decrease afterwards. Yesterday I noticed RLS was taking over 3 GiB at the end of the day.
OS: Fedora 29
I haven't noticed the memory usage of RLS ever coming down on the nightly channel since I reported this. Testing current nightly I still get 756 MiB for regex, which is a bit better: it's now only ~10x worse.
This regression is quite unfortunate: it means users have to manually limit their RLS instances to avoid filling RAM, whereas before 5–10 idle RLS processes could reasonably be ignored.
I'm starting to think all executables (tools) have to do something like rustc does in https://github.com/rust-lang/rust/pull/57287
Nightly rls will use `malloc@@GLIBC_2.2.5` from an ancient glibc version because rust provides an unversioned symbol:

```
$ nm /home/mateusz/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/bin/rls | grep ' malloc'
                 U malloc@@GLIBC_2.2.5
0000000001820a98 d malloc_impl
$ nm /home/mateusz/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/bin/rustc | grep " malloc"
00000000000069a0 T malloc
```
Stable rls 1.31 uses the first `malloc` instance it can find, making it use the jemalloc shipped with Rust:

```
$ nm /home/mateusz/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/bin/rls | grep ' malloc'
                 U malloc
0000000001713a18 d malloc_impl
```
I'm sorry, I only checked on macOS, which seems to be fixed. Indeed, Linux is still broken. Curious, since I was expecting them to be the same. Looks like Windows was never affected.
> Looks like Windows was never affected.
Windows builds stopped using jemalloc a long time ago because of the bugs it was causing.
I tested RLS built in two ways:

- the jemalloc-sys hack (just like the one for rustc)
- the jemallocator crate

I'm totally puzzled at this point.
In my test case, RLS from nightly (which uses the system allocator) takes over 900 MiB after a few changes to the files and increases with every further change. There are no peaks or reductions in memory usage.
My jemalloc RLS build, on the other hand, peaks at 1.1–1.7 GiB depending on how I change the files, and later drops to roughly 900 MiB.
Does anyone have an idea of how we can address this?
> Nightly rls will use `malloc@@GLIBC_2.2.5` from an ancient glibc version because rust provides an unversioned symbol
So this appears to be the problem: it should just use the `malloc` symbol, and that's the symbol that jemalloc-sys should be emitting. You can verify this by checking whether jemalloc-sys is being built _without_ `--cfg prefixed`; that is, `--cfg prefixed` should not appear in the rustc invocations when building jemalloc-sys.
How is the RLS compiled? Is it dynamically or statically linked against rustc? If it's dynamically linked, maybe it just needs to add jemalloc-sys as a dependency.
@nnethercote this might be some low-hanging fruit to fix performance-wise (10x memory consumption due to a linking error).
@gnzlbg I built RLS with jemalloc-sys (the same trick rustc uses) as part of the Rust build, and separately with jemallocator. I verified it linked to jemalloc, but things only got worse. I tried to profile memory usage on the RLS from the Fedora repository (which doesn't use jemalloc at all) and all the tools report only dozens of KiB consumed.
I discovered an interesting thing: external profiling tools gather data only for the first 0.2 seconds after running `rls --cli`. It looks like the executable is routed via rustup, and that confuses the tools.
There are no really significant leaks, but the allocations in librustc* are way too big:

Enforcing jemalloc for the Rust libs would probably work around the issue, but it would remain broken for Linux distributions that ship their builds without jemalloc.
Not sure how active this issue is, but working on 1.33.0-nightly (bf669d1e3 2019-01-25) on Linux (Ubuntu 18.04), RLS (via VS Code) is currently using ~1.5 GB of memory, even when I'm doing nothing at all.
This is still very much an issue. Here are some fresh tests, again running `rls --cli` a couple of times and waiting for idle. These tests are on regex d4b9419.
| Release | Idle memory usage |
|---|---|
| 1.31.1 | 62 MB |
| 1.32.0 | 1.6 GB |
| 1.33.0 | 800 MB |
Currently experiencing this on what is a very small crate. I can get to 9.5 GB of memory usage if I try to work through the slowness for a while. It gets to 2.5 GB almost immediately.
Pinging @alexcrichton, the previous attempts didn't solve the RLS issue.
Short summary:
- I built RLS in-tree with https://github.com/alexcrichton/rust/blob/ccbf28eae8b345db3ede51112cc32c756a084bd9/src/rustc/rustc.rs#L24-L43 and stand-alone with jemallocator. I made sure it picks `malloc` from jemalloc, but it only made things worse: https://github.com/rust-lang/rust/issues/56980#issuecomment-457553036
- Example of memory usage: https://github.com/rust-lang/rust/issues/56980#issuecomment-462792911
- It affects official Rust builds for Linux and distributions not using jemalloc for their builds. Windows seems unaffected.

Any clue what I could try next?
You can see if #59436 fixes it by building with it applied locally until it lands, but... I doubt it.
> Any clue what I could try next?
If you are sure that there is only one `malloc` symbol in your binary, and that it's the one from jemalloc, then, as you mentioned above, the allocations for librustc are way too big. You might want to debug why. Is rustc requesting that much memory?
It's still bad.
| Release | Idle memory usage |
|---|---|
| 1.34.0 | 750 MB |