Rust: Performance regression in pattern matching

Created on 5 Sep 2016 · 32 Comments · Source: rust-lang/rust

I discovered the cause of a fairly big performance regression (from ~22s to ~29s in a specific benchmark) in my small BF interpreter, and I successfully minified the code:

#[derive(Clone, Copy)]
enum Inner {
    A,
    B,
    C,
}

enum Outer {
    X(u32),
    Y(Inner)
}

#[inline(never)]
fn print(x: &Outer) {
    match *x {
        Outer::X(num) => println!("{:?}", num),
        // Replace this...
        Outer::Y(i) => {
            match i {
                Inner::A => println!("A"),
                Inner::B => println!("B"),
                Inner::C => println!("C"),
            }
        }
        // ...with this
        // Outer::Y(ref i) => {
        //     match *i {
        //         Inner::A => println!("A"),
        //         Inner::B => println!("B"),
        //         Inner::C => println!("C"),
        //     }
        // }
    }
}

fn main() {
    let x = Outer::Y(Inner::B);

    print(&x);
}

On stable, both versions of this code compile to the exact same binary (likewise on 1.9.0 and 1.10.0, with both old-trans and MIR on each version). On beta and nightly, the commented-out version becomes slower. My understanding is that both should compile to the same binary, so this is incorrect behavior.

The last correct version is rustc 1.12.0-nightly (7333c4ac2 2016-07-31), right before the update to LLVM 3.9.

EDIT: I noticed that the minified example did compile to the same binary in both versions (I somehow got confused), so I added the #[inline(never)].

A-LLVM C-enhancement I-slow P-medium T-compiler regression-from-stable-to-stable

Most helpful comment

It wasn't fixed in LLVM 4.0, but it might be fixed in 5.0, I'll check when the support is merged. The ergonomics improvements in pattern matching might also improve on this.

All 32 comments

The timing suggests this might be an LLVM issue. We're getting more and more of these lately, which is concerning.

Needs a P-tag.

@nagisa also theorized that this was an LLVM problem (aliasing?). Do we have an easy way to test for that?

Hmm. So I popped these examples into play and generated the resulting assembly (on nightly, with optimizations enabled). They came out pretty similar:

lunch-box. diff ~/tmp/print-{ok,ref}
1c1
< .text
---
>     .text
12a13
>     andb    $3, %sil

Same results for beta:

--- /home/nmatsakis/tmp/print-ok-beta   2016-09-15 12:22:36.604531321 -0400
+++ /home/nmatsakis/tmp/print-ref-beta  2016-09-15 12:23:04.984530867 -0400
@@ -10,6 +10,7 @@
     .cfi_def_cfa_offset 80
     cmpb    $1, %dil
     jne    .LBB0_1
+    andb    $3, %sil
     cmpb    $1, %sil
     je    .LBB0_7
     cmpb    $2, %sil

Is it possible we fixed this? I know we did some LLVM fixing.

cc @eddyb, I think you managed most of that.

That said, the stable builds are identical. Though I find it hard to imagine that the andb alone is really responsible for a massive performance drop. (But perhaps it causes problems in some larger-scale scenario?)

It could easily be something else; identical binaries are no guarantee, so the reduction could be unrelated.
Using perf record on the original benchmark _could_ pinpoint whether the andb instruction is responsible.

EDIT: as for LLVM, nothing I've seen would suggest any sort of improvements with switch.

Also, stable vs nightly look quite different, even if switching between the two versions of the arms makes little difference.

@pmarcelll how confident are you that this reduction is accurate? Can you make a version of the original benchmark available?

Here's the code: link.
This BF code is used for benchmarking.
EDIT: the slow version is in a comment

The problematic part is in the hottest code path, so the slowdown is really noticeable.

Also, if the enum is not Copy or Clone, the ref version must be used, which compiles to the slightly slower code (see the sketch below).
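
A minimal sketch of that constraint, using a hypothetical non-Copy enum (not from the original report): binding the payload by value out of a borrow is rejected by the compiler, so only the ref form is available.

enum NonCopyInner {
    // A String field makes this enum non-Copy
    A(String),
    B,
}

fn print_non_copy(x: &NonCopyInner) {
    match *x {
        // NonCopyInner::A(s) => println!("{}", s), // error[E0507]: cannot move out of borrowed content
        NonCopyInner::A(ref s) => println!("{}", s),
        NonCopyInner::B => println!("B"),
    }
}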

Discussed at the compiler team meeting, but we didn't have a good answer on priority since investigation seems to be ongoing. P-high for now; feel free to downgrade if we get new info.

Results of some investigation: I can reproduce _a_ regression for this benchmark at the same point in the history, though it's not as severe a regression.

Gist with the (slightly modified, self-contained) bf interpreter benchmark, and the transcript + summary of my runs: https://gist.github.com/pnkfelix/9e727884890b1ec867d58bd280cbbf82

Interestingly, other small perturbations also remove the regression. In particular, even just pulling the body of that match arm into a separate method and marking that method #[inline(always)] also removes the regression.
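
For concreteness, a rough sketch of that perturbation, reusing the Inner and Outer enums from the original report (the helper name print_inner_arm is made up):

#[inline(always)]
fn print_inner_arm(i: Inner) {
    match i {
        Inner::A => println!("A"),
        Inner::B => println!("B"),
        Inner::C => println!("C"),
    }
}

#[inline(never)]
fn print(x: &Outer) {
    match *x {
        Outer::X(num) => println!("{:?}", num),
        // The inner match is hoisted into its own always-inlined function
        Outer::Y(i) => print_inner_arm(i),
    }
}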

If anyone has access to LLVM 3.8 that they can build Rust with, it would help to confirm that this is _yet another_ LLVM 3.9 perf regression.
AFAICT, what #[inline(always)] ends up doing at the LLVM level is changing the order and number of passes run on the IR of the callee.

@eddyb You're in luck today! I finished building the same aarch64 nightly a little earlier against 3.9 and 3.8, and can confirm the difference: 2m41s vs 2m15s respectively.

BTW, make install doesn't work any longer, probably due to --disable-docs and this line.

@petevine Really useful, thanks! And so LLVM 3.9 delivers yet another perf regression.

As a followup note regarding the gist I posted: I have done a little digging into the generated assembly for the Tape::run code, comparing the manually inlined version to the LLVM-inlined one (since those seem to end up the most similar in output while retaining the bulk of the regression). The main difference between the two versions of Tape::run is quite similar to the one that @nikomatsakis noted above.

manually inlined (slow)              |LLVM-inlined (fast)
---------------------------------------------------------------------------
.LBB7_2:                             |.LBB7_2:
        cmpb    $1, (%r14)           |        cmpb    $1, (%r14)
        je      .LBB7_5              |        je      .LBB7_31
        movb    4(%r14), %al         |        movl    4(%r14), %eax
        andb    $31, %al             |        addl    $-45, %eax
        addb    $-13, %al            |        cmpl    $17, %eax
        movzbl  %al, %eax            |        ja      .LBB7_5
        cmpb    $17, %al             |        movslq  (%r15,%rax,4), %rax
        ja      .LBB7_8              |        addq    %r15, %rax
        movslq  (%r15,%rax,4), %rax  |        jmpq    *%rax
        addq    %r15, %rax           |.LBB7_7:
        jmpq    *%rax                | 
.LBB7_12:                            | 

In particular: the slower version on the left has an extra andb instruction. (It also has that movzbl, and it uses the b-suffixed instructions rather than the l-suffixed ones.)


Anyway, at this point I'm inclined to agree with @eddyb that this is probably an LLVM issue, not a Rust compiler one. We could investigate the order and number of passes, or we could try to file an LLVM bug, but either way, I think we should downgrade the priority of this ticket.

Inclined to agree.

triage: P-medium

> this is probably an LLVM issue, not a Rust compiler one. We could investigate the order and number of passes, or we could try to file an LLVM bug

I think the path forward is to file an LLVM bug.

Next steps:

  1. Create an isolated test case suitable for reporting to LLVM
  2. File a bug there and ask LLVM devs for assistance

@eddyb notes that majnemer is often willing to do a bisect against the LLVM code base on a small self-contained test case.

Any update on this? :)

It wasn't fixed in LLVM 4.0, but it might be fixed in 5.0, I'll check when the support is merged. The ergonomics improvements in pattern matching might also improve on this.
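
As a rough sketch of what those ergonomics improvements look like (RFC 2005, match ergonomics, later stabilized in Rust 1.26), the match can be written against the reference directly and the bindings become references automatically; whether this lowers to the fast or the slow form is exactly the question in this thread. The function name print_ergonomic is made up:

#[inline(never)]
fn print_ergonomic(x: &Outer) {
    match x { // no `*x` and no `ref` needed
        Outer::X(num) => println!("{:?}", num), // num: &u32
        Outer::Y(i) => match i {                // i: &Inner
            Inner::A => println!("A"),
            Inner::B => println!("B"),
            Inner::C => println!("C"),
        },
    }
}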

Is it fixed now? :)

I tried it a couple of days ago with the BF interpreter I linked earlier, and it was slow no matter what compiler flags I used. As I wrote in the first comment, the benchmark used to complete in just under 22 seconds; now it ran for 30 seconds with only the --release flag, and I couldn't push it under 25 seconds with other compiler flags. I didn't have time to investigate further, but I reran the tests today and noticed that the nightly compiler produced identical binaries for both the slow and the fast version, so I put the code into the Godbolt assembly analyzer. According to that, rustc 1.23.0 was the last compiler that produced different binaries; after that, the fast version regressed, and now it's identical to the slow version.

Any update on this?

I tried it when rustc was updated to LLVM 8 (so fairly recently), and again with the latest nightly. A plain release build produces a binary that completes the benchmark in just under 27 seconds on my machine; with LTO it's close to 26. But reducing the number of codegen units or aborting on panic negatively impacts performance; it somehow confuses the optimizer in LLVM (it's not really a bug, fine-tuning an optimizer is hard), and because this program is basically a loop that performs a small set of operations over and over, every missed optimization is magnified.

All in all, rustc became slightly better recently, so the plain release version is fast, but this bug still slows down the test program.

I just checked it on nightly and it seems to be fixed (even on the latest beta). I don't know what changed, but I looked at the assembly on Godbolt and the previously slow part is now equivalent to the old version.

I even managed to get the lowest runtime so far, only 19.543 seconds, with
cargo rustc --release -- -Clto=fat -Ccodegen-units=1 -Ctarget-cpu=sandybridge -Cpanic=abort (for some reason, Rust binaries optimized by LLVM for Sandy Bridge usually run really fast on Haswell CPUs).

If this regression is unlikely to return and it was indeed fixed in LLVM, then this issue can be closed.

I tried to bisect it; my program became fast again right after #55835 was merged, so it looks like it was indeed fixed by an LLVM upgrade. @nagisa, can you confirm by checking the LLVM bug report? Thanks.

The LLVM issue has been fixed by https://github.com/llvm-mirror/llvm/commit/238b8867a8e7a18e9cdf1c72b049d8bed200ee7a, which has tests and comments to prevent this from regressing without corresponding backend fixes. As such, I think we can consider this resolved...

@nikic I was thinking about the bug report in https://github.com/rust-lang/rust/issues/36283#issuecomment-249237923, it looks like that one can also be closed.
