Sentry-javascript: Infinite error reporting loop on HTTP Error (429)

Created on 22 Oct 2018 · 17 comments · Source: getsentry/sentry-javascript

My previous issue has been removed, not sure why

Package + Version

  • [x] @sentry/browser - 4.2.0
  • [x] @sentry/node - 4.2.0

Description

If the request limit is hit with Sentry, the SDK enters an infinite loop of reporting its own HTTP Error (429) from the request it made to Sentry.
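Roughly what seems to happen (a hypothetical sketch of the feedback loop, not the SDK's actual internals; the names here are illustrative):

// Hypothetical sketch of the feedback loop -- not the SDK's actual
// code; transport and captureAndSend are illustrative names.
const transport = {
  // Simulates a rate-limited Sentry endpoint: every send fails with 429.
  captureEvent: () => Promise.reject(new Error("HTTP Error (429)")),
};

async function captureAndSend(event) {
  try {
    await transport.captureEvent(event);
  } catch (error) {
    // The failed send is captured as a brand-new event, which goes back
    // through the same rate-limited transport -- so this never returns.
    await captureAndSend({ message: error.message });
  }
}

captureAndSend({ message: "original event" }); // loops forever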

Confirmed Bug

Most helpful comment

@aguynamedben we'll release it today/tomorrow after merging some outstanding code. It'll definitely be done before the end of the week.

All 17 comments

Could you provide a workaround? I'd like to ignore this particular error to break the loop, but the only option I can find in the new SDK is blacklistUrls, which blacklists URLs rather than errors, and the docs say it isn't available for the JavaScript SDK: https://docs.sentry.io/learn/configuration/?platform=javascript
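For reference, blacklistUrls filters by the URL of the script that raised the error, not by the error message, so even where it is available it wouldn't target the SDK's own 429 error. A sketch of its shape (placeholder DSN and pattern):

// blacklistUrls matches the script URL an error originated from, so it
// cannot single out the SDK's internal "HTTP Error (429)" events.
Sentry.init({
  dsn: 'https://<key>@sentry.io/123',
  blacklistUrls: [/some-third-party-script\.js/],
});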

Can you post your setup? What does your Sentry.init look like?
I can't really reproduce this; see:
Node: https://codesandbox.io/s/42z1088z3w
Browser: https://codesandbox.io/s/zkko3l484x

First of all, I don't understand why we shouldn't send HTTP errors to Sentry, at least on Node.
And secondly, our Dedupe integration should also take care of this.

The issue is not about sending this to Sentry, but about the infinite loop of sending it.

I've noticed that this loop happens only when using the following config option:

  integrations: [new Sentry.Integrations.RewriteFrames({
    root: process.cwd()
  })]

so maybe the Dedupe integration fails because the stack trace is modified (see the sketch after the reproduction below). Here's a reproduction:

var Sentry = require("@sentry/node");

// Fake transport that always fails, simulating Sentry's 429
// rate-limit response.
class MyTransport {
  captureEvent(event) {
    console.log("here");
    return Promise.reject("429");
  }
}

Sentry.init({
  debug: true,
  dsn: "https://[email protected]/123",
  transport: MyTransport,
  integrations: [
    new Sentry.Integrations.RewriteFrames({
      root: process.cwd()
    })
  ]
});

Sentry.captureMessage("Test Message");
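If Dedupe compares stack frames field by field (an assumption about its implementation, not the integration's actual source), you can see how rewriting frame paths at a different point in the pipeline might make consecutive identical errors look distinct. A minimal sketch:

// Assumed shape of a frame-by-frame duplicate check -- not Dedupe's
// actual source, just an illustration of why rewritten frames matter.
function isSameStacktrace(previous, current) {
  const framesA = (previous.stacktrace && previous.stacktrace.frames) || [];
  const framesB = (current.stacktrace && current.stacktrace.frames) || [];
  if (framesA.length !== framesB.length) return false;
  return framesA.every(
    (frame, i) =>
      frame.filename === framesB[i].filename &&
      frame.lineno === framesB[i].lineno &&
      frame.function === framesB[i].function
  );
}

// If RewriteFrames has already turned "/home/app/src/index.js" into
// "app:///src/index.js" on one event but not on the stored copy of the
// previous one, the filenames no longer match and the duplicate gets
// through.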

Also, I'd like to note that the Dedupe integration should be irrelevant for this to work. I think transport errors should simply be reported "manually" and not rethrown; that way they won't even have a chance to enter this loop, and there's no need to fix the Dedupe implementation.
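Something along these lines (a sketch of the suggested behavior; the helper and transport shape are illustrative, not the SDK's actual API):

// Sketch: log transport failures locally instead of rethrowing them
// into the capture pipeline, so a 429 can never trigger a new capture.
function sendWithoutLoop(transport, event) {
  return transport.captureEvent(event).catch((error) => {
    // Log and drop -- crucially, no captureException(error) here.
    console.error("Sentry transport failed, event dropped:", error);
  });
}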

Another note: even without this integration, the error is reported twice instead of once. Here's a reproduction:

var Sentry = require("@sentry/node");

// Same fake 429 transport as above.
class MyTransport {
  captureEvent(event) {
    console.log("here");
    return Promise.reject("429");
  }
}

Sentry.init({
  debug: true,
  dsn: "https://[email protected]/123",
  transport: MyTransport
});

Sentry.captureMessage("Test Message");

Output:
Sentry Logger [Log]: Integration installed: Dedupe
Sentry Logger [Log]: Integration installed: InboundFilters
Sentry Logger [Log]: Integration installed: FunctionToString
Sentry Logger [Log]: Integration installed: Console
Sentry Logger [Log]: Integration installed: Http
Sentry Logger [Log]: Integration installed: OnUncaughtException
Sentry Logger [Log]: Integration installed: OnUnhandledRejection
Sentry Logger [Log]: Integration installed: LinkedErrors
here
Sentry Logger [Error]: 429
here
Sentry Logger [Error]: 429

EDIT: never mind, this just reports the message first and the 429 second; the Sentry Logger just prints the wrong name.

I have found a workaround for now: just ignore 429 errors in beforeSend. It's a slightly shotgun approach, but it makes the infinite retry loop go away.

Sentry.init({
  dsn: 'https://<key>@sentry.io/123',
  debug: true,
  beforeSend(event) {
    // event.message is undefined for exception events, so guard first.
    return event.message && event.message.match(/SentryError: HTTP Error \(429\)/)
      ? null
      : event;
  },
});
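If the 429 ever arrives as an exception event rather than a plain message, event.message will be undefined and the filter above won't match. A variant that also inspects event.exception (assuming the SentryError lands in exception.values) could look like this:

Sentry.init({
  dsn: 'https://<key>@sentry.io/123',
  debug: true,
  beforeSend(event) {
    // Drop any event whose exception chain mentions the SDK's own 429.
    const values = (event.exception && event.exception.values) || [];
    const isRateLimited = values.some((ex) =>
      /HTTP Error \(429\)/.test(ex.value || '')
    );
    return isRateLimited ? null : event;
  },
});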

FYI, we haven't confirmed that this issue is the cause, but after releasing a new version of our app that has this bug, we're seeing the app crash intermittently but reliably due to high CPU/memory usage over a long period. Memory usage becomes too high and eventually the app hard-crashes because V8 kills it (V8::FatalProcessOutOfMemory).

In early debugging, the app crashes seem to coincide with periods where the 429 rate-limiting issue is also reported in Sentry, i.e. the infinite loop hogs resources, which in turn causes runaway memory usage and then an app crash.


Is anybody else seeing this? It's very tricky to test because there's no reliable way to put the app into this state; it depends on Sentry giving off 429s.

It's almost impressive how long the Electron app will run before it crashes... I think V8 is garbage collecting as aggressively as it can but is eventually overwhelmed. It's been running in this bad state on my local machine for about 20 minutes and still hasn't died.

Also, during this state, our app logs indicate that the business logic of our app is bored. Our app does real work every 30 minutes or so, and logs when that happens. When I tail the logs, our app is bored, yet it's running away with CPU/memory. That, combined with the fact that the memory usage is all in the main process, where we do very little besides letting sentry-javascript relay errors to Sentry's servers, has me leaning toward the 429 infinite loop as the cause.

This has been merged but not released, any ideas when it will be released and integrated into sentry-electron?

@aguynamedben we'll release it today/tomorrow after merging some outstanding code. It'll definitely be done before the end of the week.

🙏 Thank you!!!

@aguynamedben sentry-electron 0.14.0 has been released with this fix.

We removed the hack and used the fix, and it seems to work well. Thanks for fixing this fairly quickly.

Is this resolved for @sentry/node?

@maxpaj yes
