We're running Ember 2.13.1 and have a component template containing a pair of nested each blocks, iterating over collections of 53 and about 30 items respectively, for about 1,500 iterations in total.
There are quite a few helpers in the markup, so the helpers themselves are being called a lot (for example, the if helper is being invoked about 10 times per iteration, or 14,000 times).
In development mode, each time we leave the route and return, rendering takes about 50-100ms longer than the previous visit, building up linearly to 2s, 3s, etc. It does not seem to matter which helpers are in use (we are not using any custom helpers) - it occurs even if we remove every helper except the two each helpers themselves, and even if we remove all of the HTML, so that the each blocks are looping but doing nothing else within.
The Chrome profiler reveals that the culprit is a call stack that starts with runInDebug and then drills down through compute, value, compute, compute, value, concat, compute, value... etc.
In production there is no such issue (which makes sense - runInDebug is a no-op there).
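For context, runInDebug conceptually amounts to something like this (a rough sketch of the idea, not Ember's actual source):

function runInDebug(callback) {
  // A debug build runs the debug-only callback; a production build ships a
  // version of runInDebug that never invokes it, so whatever work the
  // callback does simply doesn't happen in production.
  callback();
}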
Here is the profile (note that all code is in ember.debug.js or vendor.js but seems to be ember code).

I tried to debug into this but it's above my pay-grade w.r.t. ember :blush:
@DLiblik - Thanks for digging into this! It would be really helpful if you could create a reproduction demo of this so we can help figure out what is going on. I think either an ember-twiddle or a full-blown ember-cli app would work very well to demo the issue...
Yup, great sleuthing. With a reproduction we can dive in and help figure out what's up ASAP.
Ok let me see what I can put together - thanks.
Here is a test project, and an update:
The issue occurs with any high volume of custom helper invocations rendered while in debug mode.
In the attached project, a simple loop repeats a helper that returns a fixed string, 2,500 times. You can flip in and out of the route using the provided link, and the console log shows the ever-increasing render time each time you leave and return to the route:

If you remove the my-helper helper from the big-list component template, then the problem goes away!
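For reference, a helper along these lines is all it takes (a sketch of what the repro's my-helper presumably looks like in 2.13; the exact return value doesn't matter):

// app/helpers/my-helper.js
import Ember from 'ember';

// Ignores its arguments and returns the same fixed string every time.
export default Ember.Helper.helper(function myHelper() {
  return 'a fixed string';
});

The big-list template then just invokes {{my-helper}} inside the loop.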
(Btw, I did the repro in 2.13.2 just in case somebody fixed it already - nope - still there)
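(If it helps anyone reproducing this: the render time can be logged from the route with something along these lines - a sketch assuming a plain route and the afterRender queue; the repro project may measure it differently.)

import Ember from 'ember';

export default Ember.Route.extend({
  activate() {
    this._super(...arguments);
    let start = performance.now();
    // afterRender fires once the template (and all of its helper
    // invocations) has finished rendering.
    Ember.run.scheduleOnce('afterRender', () => {
      console.log(`render took ${Math.round(performance.now() - start)} ms`);
    });
  }
});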
Finally, it would be really cool (once the cause is identified) if the person who found it gave a quick 5-bullet-point summary of how the cause was found - I would have happily put my time in and identified the root cause myself if I could have debugged it down in situ.
Seems to be fixed on master (which includes a large architecture refactor within glimmer-vm)

@rwjblue good to know - thanks for doing the test - I'll close this out
With @krisselden's help, we were able to identify the root issue. When a helper is invoked we pass in its arguments, and those arguments are frozen (via Object.freeze) so that helpers can't mutate them (and cause issues in the rendering engine itself). When a helper doesn't have hash arguments, we use a shared EMPTY_HASH object to avoid allocating a bunch of empty objects for no reason (we do roughly the same thing when no positional params are passed). Since these objects are shared, they get frozen over and over again throughout the lifetime of the running application. It turns out that there is what we think is almost certainly a bug in Chrome, where re-freezing the same object many times starts taking significantly more time on each freeze attempt.
Demo snippet that shows the issue:
let o = Object.freeze(Object.create(null));
let a = Object.freeze([]);

// o and a are already frozen; freezing them again and again is what gets
// slower and slower in Chrome.
function freeze() {
  Object.freeze(a);
  Object.freeze(o);
}

function timedFreeze() {
  console.time('freeze');
  for (let i = 0; i < 2500; i++) {
    freeze();
  }
  console.timeEnd('freeze');
}

// On affected Chrome versions, each successive run reports a longer time.
timedFreeze();
timedFreeze();
timedFreeze();
timedFreeze();
timedFreeze();
Submitted https://github.com/emberjs/ember.js/issues/15312 to work around this...
I do have a question @rwjblue - why does it not occur on master?
I believe that on master we no longer use a single shared EMPTY_XXX object, so each time we are freezing a new object, which is "fine"...
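To make the difference concrete, the two shapes look roughly like this (a sketch of the general idea, with made-up function names, not the actual Ember/Glimmer source):

// 2.13.x: one shared empty object, re-frozen on every helper invocation
const EMPTY_HASH = Object.create(null);
function helperArgsOld(hash) {
  return Object.freeze(hash || EMPTY_HASH); // same object frozen again and again
}

// master: a fresh object per invocation, so every Object.freeze call sees a
// never-before-frozen object and Chrome's slow re-freeze path is never hit
function helperArgsNew(hash) {
  return Object.freeze(hash || Object.create(null));
}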
I see - makes sense - thanks for looking into this