Current Behavior
I have a design that features a long-running poller, with a retry operator to cope with intermittent failures in case the HTTP endpoint being requested goes down. I'm finding that if the poller fails for a long period of time, the subscription eventually dies anyway with a RangeError: Maximum call stack size exceeded.
Reproduction
https://repl.it/repls/HeartfeltWrongVendor#index.js
const { throwError } = require("rxjs");
const { tap, retry } = require("rxjs/operators");

let count = 0;
throwError(new Error("testing")).pipe(
  tap(undefined, () => console.log("received an error on attempt", ++count)),
  retry(10000)
).subscribe();
Expected behavior
I don't expect this to crash like this.
Environment
Possible Solution
I'm happy for a coding pattern solution, but so far have not been able to find one.
It's not a bug. I would expect the snippet you posted to behave the way it does, as it is synchronous. Essentially - with a sufficient number of repeats - it'll behave like this and will exhaust the stack:
function boom() {
  try {
    throw new Error("Boom!");
  } catch (error) {
    boom();
  }
}
Use retryWhen to make the retries asynchronous - by imposing a delay - and use take to limit the number of retries:
// requires retryWhen, delay and take from "rxjs/operators"
throwError(new Error("testing")).pipe(
  tap(undefined, () => console.log("received an error on attempt", ++count)),
  retryWhen(errors => errors.pipe(
    delay(/* some delay */),
    take(10000)
  ))
).subscribe();
@cartant thanks for a fantastically clear and helpful reply. I agree that this isn't a bug... just an understanding gap.
I really appreciate you taking the time to answer this for me.
@cartant another query on this...
I've adjusted my demo case to be a little closer to what I am doing in the prod code (basically, I do a fetch call, and if it succeeds I emit true, and if it fails I emit false). The code is now something like:
const { throwError, concat, of, timer } = require("rxjs");
const { tap, catchError, switchMapTo, mapTo } = require("rxjs/operators");

let count = 0;
throwError(new Error("testing")).pipe(
  mapTo(true),
  tap(undefined, () => console.log("received an error on attempt", ++count)),
  catchError((_, source) => concat(of(false), source))
).subscribe();
This still hits an eventual call stack exception, and I am struggling to figure out how to avoid this exhaustion. Might you have any ideas?
It still synchronously resubscribes to the source within the catchError. You can either delay the resubscription or perform the resubscription on a scheduler, e.g.:
catchError((_, source) => concat(of(false), source).pipe(
  subscribeOn(asapScheduler)
))
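For completeness, a minimal sketch of the other option mentioned above, delaying the resubscription rather than rescheduling it; the 1 second back-off is an arbitrary assumption, not a value from the thread (timer and switchMapTo are already imported in the snippets above):
// delay the resubscription by one second, emitting false immediately on each error
catchError((_, source) => concat(
  of(false),
  timer(1000).pipe(switchMapTo(source))
))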
Hrm. That's what I thought as well, but unfortunately this still hits an error using this code:
const { concat, of, timer, defer, asapScheduler } = require("rxjs");
const { tap, catchError, switchMapTo, mapTo, subscribeOn } = require("rxjs/operators");

let count = 0;
defer(async () => { throw new Error("testing"); }).pipe(
  mapTo(true),
  tap(undefined, () => console.log("received an error on attempt", ++count)),
  catchError((_, source) => concat(of(false), source).pipe(
    subscribeOn(asapScheduler)
  ))
).subscribe();
Eventually it blows up like this:
received an error on attempt 3047
received an error on attempt 3048
received an error on attempt 3049
received an error on attempt 3050
received an error on attempt 3051
received an error on attempt 3052
received an error on attempt 3053
received an error on attempt 3054
received an error on attempt 3055
received an error on attempt 3056
received an error on attempt 3057
received an error on attempt 3058
RangeError: Maximum call stack size exceeded
at SimpleInnerSubscriber._error (/home/runner/HeartfeltWrongVendor/node_modules/rxjs/internal/innerSubscribe.js:29:55)
at SimpleInnerSubscriber.Subscriber.error (/home/runner/HeartfeltWrongVendor/node_modules/rxjs/internal/Subscriber.js:72:18)
at CatchSubscriber.SimpleOuterSubscriber.notifyError (/home/runner/HeartfeltWrongVendor/node_modules/rxjs/internal/innerSubscribe.js:72:26)
at SimpleInnerSubscriber._error (/home/runner/HeartfeltWrongVendor/node_modules/rxjs/internal/innerSubscribe.js:30:21)
at SimpleInnerSubscriber.Subscriber.error (/home/runner/HeartfeltWrongVendor/node_modules/rxjs/internal/Subscriber.js:72:18)
at MergeMapSubscriber.SimpleOuterSubscriber.notifyError (/home/runner/HeartfeltWrongVendor/node_modules/rxjs/internal/innerSubscribe.js:72:26)
at SimpleInnerSubscriber._error (/home/runner/HeartfeltWrongVendor/node_modules/rxjs/internal/innerSubscribe.js:30:21)
at SimpleInnerSubscriber.Subscriber.error (/home/runner/HeartfeltWrongVendor/node_modules/rxjs/internal/Subscriber.js:72:18)
at CatchSubscriber.SimpleOuterSubscriber.notifyError (/home/runner/HeartfeltWrongVendor/node_modules/rxjs/internal/innerSubscribe.js:72:26)
at SimpleInnerSubscriber._error (/home/runner/HeartfeltWrongVendor/node_modules/rxjs/internal/innerSubscribe.js:30:21)
Perhaps it's because catchError(...) is capturing the closure scope there or something?
I've managed to concoct a version that doesn't hit this limit by creating a custom catchAndRetry operator that allows a value to be emitted before retrying. The code is this:
const { Observable } = require("rxjs");

function catchAndRetry(errorValue) {
  return (source) =>
    new Observable((subscriber) => {
      let sub;
      function subscribe() {
        sub = source.subscribe({
          next: (v) => subscriber.next(v),
          error: () => {
            // emit the placeholder value, then resubscribe to the source
            if (errorValue != undefined) subscriber.next(errorValue);
            subscribe();
          },
          complete: subscriber.complete.bind(subscriber),
        });
      }
      subscribe();
      return { unsubscribe: () => sub.unsubscribe() };
    });
}
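A minimal sketch of how the operator above would be applied to the earlier defer-based demo, with false as the value emitted on failure (the pairing with mapTo(true) is assumed from the snippets above, not taken verbatim from the thread):
defer(async () => { throw new Error("testing"); }).pipe(
  mapTo(true),
  catchAndRetry(false)
).subscribe();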
The problem does not lie with catchError. The problem is that using this in the handler effects the overflow:
concat(of(false), source)
Each retry has an additional of(false) prepended and, eventually, there's a sufficient number of observables to overflow the stack when the concat call is made.
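In other words, as a hand-written expansion for illustration (not what RxJS literally constructs), the observable that each retry resubscribes to effectively grows like this:
concat(of(false), source)                                        // after the 1st error
concat(of(false), concat(of(false), source))                     // after the 2nd
concat(of(false), concat(of(false), concat(of(false), source)))  // after the 3rd, and so on
Each additional layer deepens the subscriber chain, until enough retries have accumulated to overflow the stack.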
Sorry... didn't mean to imply catchError (or rxjs) was in any way at fault here. Just trying to understand why I was hitting this and how to best avoid it 😄
Thanks again for your time answering this @cartant !
No worries. I've uncovered bugs before through investigating seemingly inexplicable behaviour. I was curious to see what was happening.