When I have any compilation error(s), there is a rather long delay (around 14s on my machine) before rustc finally exits. Successful compilation is very fast.
I am on Ubuntu 16.04 in a VMware Workstation 12.5 for Windows virtual machine.
Steps to reproduce:
$ touch test.rs
$ time rustc test.rs
error: main function not found
error: aborting due to previous error
real 0m13.423s
user 0m13.276s
sys 0m0.112s
$ rustc --version --verbose
rustc 1.14.0-nightly (3f4408347 2016-10-27)
binary: rustc
commit-hash: 3f4408347d2109803edbf53c89c8bce575de4b67
commit-date: 2016-10-27
host: x86_64-unknown-linux-gnu
release: 1.14.0-nightly
LLVM version: 3.9
If it helps, that same version of rustc works fine for me on Windows 10; there is no noticeable delay after compilation errors are reported.
Actually, I failed to mention that I was using an Ubuntu 16.04 virtual machine on VMware Workstation 12 (most recent version) on native Windows 10. I have just tested for this issue on Ubuntu 16.04 on bare metal, and rustc worked fine. The issue seems to be related to virtualization somehow.
$ touch test.rs
$ time rustc test.rs
error: main function not found
error: aborting due to previous error
real 0m0.042s
user 0m0.040s
sys 0m0.000s
Note that the compiler continues after it hits the first error, so it could have been doing real compilation work (looking for more errors); perhaps that's what's happening here?
Have you considered profiling rustc to see where it is spending most of those 13 seconds?
@retep998 here is what I've gathered from 'perf'
$ touch test.rs
$ perf record rustc test.rs
$ perf report --stdio --header
The results from my Ubuntu 16.04 virtual machine running on VMware Workstation (only a few lines from the top, for brevity):
# ========
# captured on: Sun Oct 30 21:00:17 2016
# hostname : ubuntu
# os release : 4.4.0-45-generic
# perf version : 4.4.21
# arch : x86_64
# nrcpus online : 8
# nrcpus avail : 8
# cpudesc : Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz
# cpuid : GenuineIntel,6,94,3
# total memory : 16415168 kB
# cmdline : /usr/lib/linux-tools-4.4.0-45/perf record rustc test.rs
# event : name = cycles, , size = 112, { sample_period, sample_freq } = 4000, sample_type = IP|TID|TIME|PERIOD, disabled = 1, inherit = 1, mmap = 1, comm = 1, freq = 1, enable_on_exec = 1, task = 1, sample_id_all = 1, exclude_guest = 1, mmap2 = 1, comm_exec = 1
# HEADER_CPU_TOPOLOGY info available, use -I to display
# HEADER_NUMA_TOPOLOGY info available, use -I to display
# pmu mappings: cpu = 4, msr = 6, software = 1, tracepoint = 2, breakpoint = 5
# ========
#
#
# Total Lost Samples: 0
#
# Samples: 53K of event 'cycles'
# Event count (approx.): 53214853068
#
# Overhead Command Shared Object Symbol
# ........ ....... ............................. ...............................................................................................................................................................................................................................
#
96.65% rustc libstd-6eb85298.so [.] backtrace_alloc
1.06% rustc libstd-6eb85298.so [.] swap
0.23% rustc libstd-6eb85298.so [.] backtrace_qsort
0.11% rustc libstd-6eb85298.so [.] read_function_entry
0.11% rustc libstd-6eb85298.so [.] line_compare
0.10% rustc libstd-6eb85298.so [.] read_uleb128
0.10% rustc libstd-6eb85298.so [.] elf_symbol_compare
0.09% rustc rustc [.] swap
0.08% rustc [unknown] [k] 0xffffffff813feb49
0.08% rustc libstd-6eb85298.so [.] read_line_program
0.07% rustc libstd-6eb85298.so [.] advance
Clearly the problem is the backtrace_alloc function.
I will also put the results from bare metal here for comparison.
Results for Ubuntu 16.04 running on bare metal:
# ========
# captured on: Sun Oct 30 21:16:37 2016
# hostname : kron
# os release : 4.4.0-45-generic
# perf version : 4.4.21
# arch : x86_64
# nrcpus online : 8
# nrcpus avail : 8
# cpudesc : Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz
# cpuid : GenuineIntel,6,94,3
# total memory : 32863380 kB
# cmdline : /usr/lib/linux-tools-4.4.0-45/perf record rustc test.rs
# event : name = cycles:pp, , size = 112, { sample_period, sample_freq } = 4000, sample_type = IP|TID|TIME|PERIOD, disabled = 1, inherit = 1, mmap = 1, comm = 1, freq = 1, enable_on_exec = 1, task = 1, precise_ip = 2, sample_id_all = 1, exclude_guest = 1, mmap2 = 1, comm_exe
# HEADER_CPU_TOPOLOGY info available, use -I to display
# HEADER_NUMA_TOPOLOGY info available, use -I to display
# pmu mappings: intel_pt = 7, intel_bts = 6, cpu = 4, msr = 8, software = 1, tracepoint = 2, cstate_core = 9, cstate_pkg = 10, breakpoint = 5
# ========
#
#
# Total Lost Samples: 0
#
# Samples: 149 of event 'cycles:pp'
# Event count (approx.): 159792095
#
# Overhead Command Shared Object Symbol
# ........ ....... ............................. ...............................................................................................................................................................................................................................
#
10.85% rustc ld-2.23.so [.] do_lookup_x
7.68% rustc librustc_metadata-6eb85298.so [.] rustc_metadata::decoder::_$LT$impl$u20$rustc_metadata..cstore..MetadataBlob$GT$::load_key_map::hf181e213f9856db2
6.81% rustc ld-2.23.so [.] strcmp
6.54% rustc ld-2.23.so [.] check_match
3.45% rustc librustc_metadata-6eb85298.so [.] _$LT$rustc_metadata..schema..Entry$LT$$u27$tcx$GT$$u20$as$u20$serialize..serialize..Decodable$GT$::decode::h6190a6fb1f040282
3.35% rustc [unknown] [k] 0xffffffff813feaf7
3.04% rustc libstd-6eb85298.so [.] mallocx
3.03% rustc ld-2.23.so [.] _dl_check_map_versions
2.84% rustc libsyntax-6eb85298.so [.] _$LT$std..collections..hash..map..HashMap$LT$K$C$$u20$V$C$$u20$S$GT$$GT$::get::h92e875ddcaeef5eb
2.74% rustc ld-2.23.so [.] _dl_relocate_object
2.73% rustc [unknown] [k] 0xffffffff811cbca7
I have just realized that on my virtual machine I had the environment variable RUST_BACKTRACE=1 set, whereas on bare metal I had not. Indeed, when setting RUST_BACKTRACE=1 on both machines the results are more or less the same (there is a very long delay before rustc exits). So the problem is not with virtualization after all; it definitely lies somewhere in the backtrace code.
I started to notice this issue around a week ago, as I update the Rust nightly very often (daily).
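(For context on what the variable toggles, here is a minimal sketch using an ordinary program rather than rustc: std's default panic hook only walks and symbolizes the stack when RUST_BACKTRACE is set, so any panic, including the one rustc uses to bail out on a fatal error, pays the libbacktrace cost only under that variable.)

```rust
// Illustrative only, not rustc: the default panic hook prints a backtrace
// (and therefore exercises libbacktrace) only when RUST_BACKTRACE is set.
fn main() {
    // Try:
    //   ./demo                   -> just the panic message
    //   RUST_BACKTRACE=1 ./demo  -> message plus a symbolized backtrace
    panic!("simulated fatal compiler error");
}
```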
Possibly a duplicate of #29293.
1.14 beta exhibits this issue. If it goes stable it will be a significant headache/puzzle for everyone who leaves RUST_BACKTRACE set. cc @nikomatsakis
In the same 10/25 nightly that introduced this, the release hello-world executable size on x86_64-unknown-linux-gnu grew from ~650 kB to ~2 MB; not sure if that may be related.
This is negatively impacting the alternate playground, which always sets RUST_BACKTRACE, and is now present in released 1.14.
#0 0x00007f47d8f9acbb in elf_add ()
from /opt/rust/build-debug-assertions/build/x86_64-unknown-linux-gnu/stage1/bin/../lib/libstd-a81d7bd1df01b16f.so
#1 0x00007f47d8f9b5a7 in phdr_callback ()
from /opt/rust/build-debug-assertions/build/x86_64-unknown-linux-gnu/stage1/bin/../lib/libstd-a81d7bd1df01b16f.so
#2 0x00007f47d8c86b14 in __GI___dl_iterate_phdr (
callback=0x7f47d8f9b510 <phdr_callback>, data=0x7f47cfc2f690) at dl-iteratephdr.c:76
#3 0x00007f47d8f9b684 in backtrace_initialize ()
from /opt/rust/build-debug-assertions/build/x86_64-unknown-linux-gnu/stage1/bin/../lib/libstd-a81d7bd1df01b16f.so
#4 0x00007f47d8f9a19e in fileline_initialize ()
from /opt/rust/build-debug-assertions/build/x86_64-unknown-linux-gnu/stage1/bin/../lib/libstd-a81d7bd1df01b16f.so
#5 0x00007f47d8f9a2a2 in backtrace_syminfo ()
from /opt/rust/build-debug-assertions/build/x86_64-unknown-linux-gnu/stage1/bin/../lib/libstd-a81d7bd1df01b16f.so
#6 0x00007f47d8f83668 in std::sys_common::gnu::libbacktrace::print (w=&mut Write,
idx=<optimized out>, addr=<optimized out>, symaddr=<optimized out>)
at /opt/rust/src/libstd/sys_common/gnu/libbacktrace.rs:145
#7 std::sys::imp::backtrace::tracing::imp::write::trace_fn (ctx=<optimized out>,
arg=0x7f47cfc2fdf0)
at /opt/rust/src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:101
#8 0x00007f47d0e66609 in _Unwind_Backtrace (
trace=0x7f47d8f83490 <std::sys::imp::backtrace::tracing::imp::write::trace_fn>,
trace_argument=0x7f47cfc2fdf0) at ../../../src/libgcc/unwind.inc:295
#9 0x00007f47d8f83213 in std::sys::imp::backtrace::tracing::imp::write (w=&mut Write)
at /opt/rust/src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:42
#10 0x00007f47d8f961e0 in std::panicking::default_hook::{{closure}} (err=&mut Write)
at /opt/rust/src/libstd/panicking.rs:349
#11 0x00007f47d8f92599 in std::panicking::default_hook (info=<optimized out>)
at /opt/rust/src/libstd/panicking.rs:359
#12 0x00007f47d8f92d68 in std::panicking::rust_panic_with_hook (msg=Box<Any>,
file_line=<optimized out>)
at /opt/rust/src/libstd/panicking.rs:553
#13 0x00007f47d55f9efb in std::panicking::begin_panic<rustc_errors::FatalError> (
msg=..., file_line=0x1)
at /opt/rust/src/libstd/panicking.rs:515
#14 0x00007f47d576096d in syntax::parse::file_to_filemap (sess=0x7f47cfc34f10,
path=0x7f47c8002a40,
spanopt=<error reading variable: access outside bounds of object referenced via synthetic pointer>) at /opt/rust/src/libsyntax/parse/mod.rs:213
#15 0x00007f47d575fd47 in syntax::parse::new_parser_from_file (sess=0x7f47cfc34f10,
path=0x3) at /opt/rust/src/libsyntax/parse/mod.rs:160
#16 syntax::parse::parse_crate_from_file (input=0x3, sess=0x7f47cfc34f10)
at /opt/rust/src/libsyntax/parse/mod.rs:98
#17 0x00007f47d92cce52 in rustc_driver::driver::phase_1_parse_input::{{closure}} ()
at /opt/rust/src/librustc_driver/driver.rs:494
#18 rustc::util::common::time<core::result::Result<syntax::ast::Crate, rustc_errors::diagnostic_builder::DiagnosticBuilder>,closure> (
what=<error reading variable: access outside bounds of object referenced via synthetic pointer>, do_it=<optimized out>, f=...)
at /opt/rust/src/librustc/util/common.rs:34
#19 rustc_driver::driver::phase_1_parse_input (sess=0x7f47cfc344d0,
input=0x7f47cfc35a90)
at /opt/rust/src/librustc_driver/driver.rs:491
#20 0x00007f47d92c90e8 in rustc_driver::driver::compile_input (sess=0x3,
cstore=0x7f47c8001060, input=0x7f47cfc35a90, outdir=0x7f47cfc35a78,
output=0x7f47cfc35a58, addl_plugins=..., control=0x7f47cfc353b8)
at /opt/rust/src/librustc_driver/driver.rs:95
#21 0x00007f47d92f9283 in rustc_driver::run_compiler (args=...,
callbacks=&mut CompilerCalls, file_loader=..., emitter_dest=...)
at /opt/rust/src/librustc_driver/lib.rs:221
#22 0x00007f47d923a262 in rustc_driver::main::{{closure}} ()
Hey, this has made its way to stable and it is a rather serious problem, I think. @alexcrichton, perhaps it should have some labels/an assignee applied?
We face it in IntelliJ Rust: https://github.com/intellij-rust/intellij-rust/issues/867. We run cargo commands with RUST_BACKTRACE=1 by default, so this leaks to the compiler even though we are interested only in the backtraces for the user's code.
If it is useful, here's a flamegraph of the hard working rustc: http://svgshare.com/i/TU.svg
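(A possible workaround sketch for tools in this situation, not what IntelliJ Rust actually does: strip the variable from the environment of compile-only invocations so rustc's own panic path stays cheap. The helper name below is made up for illustration; all the `std::process::Command` calls are real APIs.)

```rust
use std::process::Command;

// Hypothetical helper: run a compile-only cargo command without leaking
// RUST_BACKTRACE into rustc, so compiler errors don't pay the backtrace cost.
// (For `cargo run`/`cargo test` you would keep the variable, since there it is
// the user's binary that should produce backtraces.)
fn cargo_check(project_dir: &str) -> std::io::Result<std::process::ExitStatus> {
    Command::new("cargo")
        .arg("check")
        .current_dir(project_dir)
        .env_remove("RUST_BACKTRACE") // don't slow down rustc's own error path
        .status()
}

fn main() -> std::io::Result<()> {
    let status = cargo_check(".")?;
    println!("cargo check exited with {}", status);
    Ok(())
}
```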
@matklad sounds reasonable to me. I'll nominate for discussion in libs triage as I believe this is basically entirely about libbacktrace
What makes libbacktrace so slow, anyway? I've definitely noticed the very long lags in the past.
> What makes libbacktrace so slow, anyway?
Judging by the pretty flamegraph and the perf report, I'm going to guess that something is being eagerly allocated when it doesn't need to be.
I've tried to track this down, but have been unable to reproduce with nightly on macOS 10.12.2 or with stage1 and stage2 compilers on Ubuntu 16.04 (at 95b14a3b50782a3f680ab3bc75dde813333cb708). While I do notice a slight increase (~100-400 ms) when either the compiler ICEs or a crate panics, I sort of expect that, since more work is being done.
However, I can reproduce with nightly (rustc 1.16.0-nightly (468227129 2017-01-03)) on Ubuntu 16.04, which sort of confuses me. I'm not sure what the difference between those and stage1/stage2 builds on the same machine is.
To summarize:
| Compiler | Delay Noticeable? |
| -------- | ----- |
| macOS Nightly | No |
| Ubuntu 16.04 Stage 1 Compiler | No |
| Ubuntu 16.04 Stage 2 Compiler | No |
| Ubuntu 16.04 Nightly | Yes |
This makes benchmarking and testing changes difficult/impossible, since I can't reproduce with a locally built compiler. I'm not really sure what can be said here.
I also confirmed that this is not due to the wrapper script that rustbuild introduces.
Exact measurements below. test.rs is empty in all cases.
macOS 10.12.2, rustc 1.16.0-nightly (468227129 2017-01-03).
$ time rustc test.rs
error: main function not found
error: aborting due to previous error
real 0m0.061s
user 0m0.034s
sys 0m0.024s
$ time RUST_BACKTRACE=1 rustc test.rs
error: main function not found
error: aborting due to previous error
real 0m0.123s
user 0m0.061s
sys 0m0.031s
Ubuntu 16.04, rustc 1.16.0-nightly (468227129 2017-01-03) -- same as on macOS
$ time rustc test.rs
error: main function not found
error: aborting due to previous error
real 0m0.045s
user 0m0.036s
sys 0m0.000s
$ time RUST_BACKTRACE=1 rustc test.rs
error: main function not found
error: aborting due to previous error
real 0m10.737s
user 0m10.592s
sys 0m0.032s
Ubuntu 16.04, Stage 1 Compiler (./build/x86_64-unknown-linux-gnu/stage1/bin/rustc) at 95b14a3b50782a3f680ab3bc75dde813333cb708.
$ time ./rustc-stage1 test.rs
error: main function not found
error: aborting due to previous error
real 0m0.041s
user 0m0.028s
sys 0m0.008s
$ time RUST_BACKTRACE=1 ./rustc-stage1 test.rs
error: main function not found
error: aborting due to previous error
real 0m0.153s
user 0m0.136s
sys 0m0.012s
Ubuntu 16.04, Stage 2 Compiler (./build/x86_64-unknown-linux-gnu/stage2/bin/rustc) at 95b14a3b50782a3f680ab3bc75dde813333cb708.
$ time ./rustc-stage2 test.rs
error: main function not found
error: aborting due to previous error
real 0m0.041s
user 0m0.032s
sys 0m0.012s
$ time RUST_BACKTRACE=1 ./rustc-stage2 test.rs
error: main function not found
error: aborting due to previous error
real 0m0.157s
user 0m0.136s
sys 0m0.016s
It may also be of interest that, according to strace, the stage1/stage2 compilers benchmarked above issue hundreds of mmap calls, approximately page by page ([pid 8042] mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f1a793ad000), in case it matters, while the nightly rustc issues thousands, from what I can tell. It almost feels like it's loading the libraries in ~/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/, possibly. It seems to iterate over those (with open calls), but that might not be during the panic.
Hopefully all of this can help narrow this issue down. I'd be happy to collect any other statistics that could be helpful.
@Mark-Simulacrum interesting! I wonder if this means it's all related to how we're building the nightlies. The main differences I can think of there are:
Now that I list these out, did you enable debuginfo in the compiler you produced locally? IIRC that was a historical problem and I don't think we ever fixed that.
> test.rs is empty in all cases.

I see the delay even if rustc is called with a non-existent file: RUST_BACKTRACE=1 rustc gibberish.
> I wonder if this means it's all related to how we're building the nightlies
This occurs with Rust 1.14 stable on Linux for me.
@shepmaster oh I should say releases*, not just nightlies. They're all built the same way modulo llvm-assertions (which we can now rule out if it's a problem on stable)
FWIW, here's the reproduction steps I have:
# Outside Docker container
docker run -it --rm ubuntu:16.04
# Inside Docker container
apt-get update && apt-get install -y curl
curl https://sh.rustup.rs -sSf | sh
source $HOME/.cargo/env
time RUST_BACKTRACE=1 rustc foo.rs
real 0m12.233s
user 0m12.060s
sys 0m0.160s
For some reason, this reminds me of https://github.com/rust-lang-nursery/rustup.rs/issues/783. Most likely unrelated, but symptoms are somewhat similar.
This reproduces when I use a debuginfo + debug-assertions rustc I built, but does not reproduce when I use a non-debuginfo rustc. 70% of the CPU time is spent in backtrace_alloc.
@alexcrichton
> Now that I list these out, did you enable debuginfo in the compiler you produced locally? IIRC that was a historical problem and I don't think we ever fixed that.

Yes. I have debug info and optimizations enabled locally (./configure --enable-optimize --enable-debug).
Okay, I've done some more debugging/profiling locally. The results seem to imply that nightly builds produce about 6 million more calls to read_function_entry in libbacktrace. Why this is, I don't know, and I don't really know how to find out either. I've attempted to run ./x.py dist locally, but using the results from that also doesn't appear to show a slowdown with RUST_BACKTRACE.
@arielb1 What system are you on? I'm trying to figure out why you see a slowdown with a stage 1 (locally built) rustc with debug info and I don't.
Nightly callgrind results:
550,472 < ???:dwarf_lookup_pc (618x) [/home/mark/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/libstd-93390f7535a2b2f7.so]
2,623 < /build/glibc-t3gR2i/glibc-2.23/elf/../sysdeps/x86_64/dl-trampoline.h:_dl_runtime_resolve_avx'2 (1x) [/lib/x86_64-linux-gnu/ld-2.23.so]
18,824 < ???:elf_add (84x) [/home/mark/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/libstd-93390f7535a2b2f7.so]
8,334,711 < ???:read_function_entry'2 (163491x) [/home/mark/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/libstd-93390f7535a2b2f7.so]
207,159 < ???:backtrace_dwarf_add (3903x) [/home/mark/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/libstd-93390f7535a2b2f7.so]
2,307,410,329 < ???:backtrace_vector_grow (76881x) [/home/mark/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/libstd-93390f7535a2b2f7.so]
2,316,031,263 * ???:backtrace_alloc [/home/mark/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/libstd-93390f7535a2b2f7.so]
Master callgrind results:
1,949 < /build/glibc-t3gR2i/glibc-2.23/elf/../sysdeps/x86_64/dl-trampoline.h:_dl_runtime_resolve_avx'2 (1x) [/lib/x86_64-linux-gnu/ld-2.23.so]
24,322 < ???:elf_add (86x) [/home/mark/Edit/rust-build/build/x86_64-unknown-linux-gnu/stage1/lib/libstd-bf7987c3b282821c.so]
1,909,428 < ???:read_function_entry'2 (45011x) [/home/mark/Edit/rust-build/build/x86_64-unknown-linux-gnu/stage1/lib/libstd-bf7987c3b282821c.so]
240,807 < ???:dwarf_lookup_pc (787x) [/home/mark/Edit/rust-build/build/x86_64-unknown-linux-gnu/stage1/lib/libstd-bf7987c3b282821c.so]
119,128,760 < ???:backtrace_vector_grow (18050x) [/home/mark/Edit/rust-build/build/x86_64-unknown-linux-gnu/stage1/lib/libstd-bf7987c3b282821c.so]
627,971 < ???:backtrace_dwarf_add (13733x) [/home/mark/Edit/rust-build/build/x86_64-unknown-linux-gnu/stage1/lib/libstd-bf7987c3b282821c.so]
121,823,821 * ???:backtrace_alloc [/home/mark/Edit/rust-build/build/x86_64-unknown-linux-gnu/stage1/lib/libstd-bf7987c3b282821c.so]
The libs team discussed this issue during triage today and our conclusions were:
We'd want to investigate these routes to see which is the most plausible.
The pc-windows-msvc rustc isn't debuginfo-enabled anyway, because nobody has bothered to ship the PDB files with it yet. The only debuginfo is the CodeView in the libraries, which affects binaries created by rustc. Because pc-windows-msvc has no downsides to debuginfo in the libraries, if you're going to remove it from other targets, please retain it for the MSVC targets. Also, can we finally ship the PDB files already?
@alexcrichton
We could also add a way in libstd to disable backtrace printing - the backtrace is only read on ICEs.
> So far the assumption is that the addition of debuginfo in the compiler is the cause of this problem; further bullets are predicated on this assumption.
But what is the underlying cause? I don't really know how much debug info there is, but 10 seconds at 100% CPU on a modern machine seems like a huge amount of work. Or is processing debug info really that expensive?
@arielb1 I think this happens for all normal compiler errors, not just ICEs, right? A normal compiler error will also use a panic! to abort the compiler thread, and that will transitively print a backtrace. Right now the backtrace is suppressed, though, so it doesn't show up on stdout. (IIRC)
@matklad I don't believe investigation has been done to the underlying cause. I agree it's quite alarming!
@alexcrichton
Sure enough. A "normal" compiler exit causes a panic!(rustc_errors::FatalError). A hook in libstd could suppress backtrace generation in that case.
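(A rough sketch of that idea, using only stable std APIs and a stand-in type for rustc_errors::FatalError; this is not how libstd or rustc actually implement it.)

```rust
use std::panic;

// Stand-in for rustc_errors::FatalError, purely for illustration.
struct FatalError;

fn main() {
    let default_hook = panic::take_hook();
    // Install a hook that stays silent (and so never touches libbacktrace)
    // when the payload is the expected "fatal error" sentinel, but keeps the
    // normal behaviour for genuine bugs/ICEs.
    panic::set_hook(Box::new(move |info| {
        if info.payload().downcast_ref::<FatalError>().is_some() {
            return; // expected abort path: no message, no backtrace
        }
        default_hook(info); // unexpected panic: default message + backtrace
    }));

    // Simulate the compiler bailing out of a phase with a fatal error.
    panic::panic_any(FatalError);
}
```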
So the actual bug here has been previously explained by @wthrowe and has an upstream bug which I doubt will ever see activity. The gist of the bug is:

* libbacktrace does all of its allocations through its own allocator, which is backed by mmap
* freed blocks go onto an internal free list instead of back to the OS
* every allocation first walks that free list looking for a block that fits, and only falls back to mmap if nothing does

We're then hitting a pathological case where we're always looking at the whole linked list only to realize we need to call mmap anyway. These allocations all seem to come from read_function_entry, and presumably with debuginfo there are a lot more function entries.
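(To make the shape of that pathology concrete, here is a simplified model in Rust; it is not libbacktrace's actual mmap.c/alloc.c, just an illustration of why a linearly scanned free list with an mmap fallback goes quadratic under many small allocations.)

```rust
// Simplified model of the pathology described above (NOT libbacktrace's real
// code): every allocation walks the whole free list before falling back to
// "mmap", so n small allocations cost O(n^2) list traversal in total.
struct FreeListAllocator {
    free_list: Vec<usize>, // sizes of leftover fragments from earlier "mmap"s
    mmap_calls: usize,
    nodes_scanned: u64,
}

impl FreeListAllocator {
    fn alloc(&mut self, size: usize) {
        // Linear scan for a fragment big enough to satisfy the request.
        let mut found = None;
        for (i, &frag) in self.free_list.iter().enumerate() {
            self.nodes_scanned += 1;
            if frag >= size {
                found = Some(i);
                break;
            }
        }
        if let Some(i) = found {
            self.free_list.swap_remove(i);
            return;
        }
        // Nothing fit: "mmap" a fresh block and keep a too-small tail around,
        // growing the list that every later allocation has to walk past again.
        self.mmap_calls += 1;
        self.free_list.push(size / 2);
    }
}

fn main() {
    let mut a = FreeListAllocator { free_list: Vec::new(), mmap_calls: 0, nodes_scanned: 0 };
    // Many uniform small requests, standing in for read_function_entry's
    // allocations; bump the count to watch the quadratic blow-up.
    for _ in 0..20_000 {
        a.alloc(64);
    }
    println!("mmap calls: {}, free-list nodes scanned: {}", a.mmap_calls, a.nodes_scanned);
}
```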
We have a few ways to solve this:

* Remove mmap.c and force the fallback to malloc and free that libbacktrace uses. @wthrowe claims that this is still two orders of magnitude slower than not generating a backtrace (e.g. 0.1s to 1.2s). Could be a good stopgap though. (Note that the concern about malloc not always being right is, I believe, not relevant for our own use case.) I also personally measured an increase from 0.013s to 0.097s locally when taking this strategy.
* Fix read_function_entry to allocate less; unsure how this would be done. Removing the one obvious allocation apparently didn't help, though, so I suspect the problem is much deeper.

Summarizing the timings with the program rustc foo.rs, where foo.rs doesn't exist, I got:
| method | time (s) |
|-----------|---------|
| no backtrace | 0.013 |
| RUST_BACKTRACE=1 on master | 6.321 |
| RUST_BACKTRACE=1 with no debuginfo in compiler | 0.072 |
| RUST_BACKTRACE=1 with malloc allocation strategy | 0.097 |
Thoughts on which route we should take? I'm personally leaning towards "no debuginfo in the compiler".
I'm also going to tag this as a stable-to-stable regression, as this essentially regressed from 1.13.0.
I would prefer that we go with the malloc allocation strategy, since people working on rustc itself want debug info for the compiler (and we might want it in nightly for easier debugging of ICEs) and the performance tradeoff is minimal.
Exploring gimli and other non-libbacktrace options can happen after February 2nd, I think, when we delete Makefiles and can start using crates off crates.io.
@Mark-Simulacrum oh sorry, https://github.com/rust-lang/rust/pull/38984 landed more quickly than I thought it would! To clarify, --enable-debug and such still enable debuginfo in the compiler; I only changed releases. That means that local development should still work out just fine.
I personally prefer to not change libbacktrace much to make future vendoring efforts easier, but that's mostly just me.
What I really want is for us to adopt some kind of library that works across platforms, so that mac can enjoy line numbers too! (I'm not sure of the status on windows here...)