Node: Terminate process on unhandled promise rejection

Created on 29 Apr 2018  ·  104 Comments  ·  Source: nodejs/node

  • Version: v11.0.0-nightly2018042865d97c96aa
  • Platform: all
  • Subsystem: Promise

When a Promise is rejected and has no catch handler, the following warning is displayed on stderr:

$ node -p "Promise.reject()"
Promise { <rejected> undefined }
(node:15518) UnhandledPromiseRejectionWarning: undefined
(node:15518) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:15518) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
$ echo $?
0

This behavior has been in place for quite some time now, I believe, and maybe it is time to actually make Node exit with a non-zero code.

promises

Most helpful comment

Was going to file a separate issue but this one already existed...

Due to #26599 we will now have a flag going forward, --unhandled-rejections, with three modes: none, warn, and strict, to allow for configuring the behavior. The default is warn, which is the same as the current behavior (I would expect none to be for people who know what they're doing!), whereas strict would produce an uncaughtException.

I am proposing that strict become the default at some point: even though this changes the expected behavior, it also discourages lazy or naive coding, which I have seen confuse numerous engineers who were not privy to the current non-intuitive behavior.

@mcollina, consider this my follow-up. :-)

All 104 comments

@aduh95 thanks for your interest! there is ongoing discussion about this (from this week) in https://github.com/nodejs/node/pull/20097

Your participation and review there are more than welcome. Please note that I've found it a rather complex subject that appears simple and I really want to make sure that we get it right.

  • We can't just throw on it either, due to the rejectionHandled event and legitimate use cases.
  • We want to allow people to turn off unhandled rejection detection completely, for language semantics.
  • We can't just turn off the current "unhandledRejection" system, since GC-based detection has false negatives (where a rejected promise is retained in a global variable, for instance).
  • We need to enable people to detect whether an exception came from a rejection or not - or at least add something that allows us to achieve an opt-out.
  • Performance needs to remain reasonable.

I believe the work @BridgeAR has done in #20097 (and the prior work by @Fishrock123 ) is excellent and we're moving towards a solution to all these problems:

  • continue emitting unhandledRejection events and log but without the deprecation warning.
  • exit the process if no unhandledRejection handler is attached and eventually if one is (with a migration path outlined in the PR).

When testing this a while back - it proved to be a significant improvement of UX.
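For illustration, a minimal sketch of what opting in to that termination behavior looks like today via the existing unhandledRejection event (the event is a real Node.js API; the handler body here is hypothetical):

```javascript
// Opt in to strict semantics by hand: rethrow the rejection reason
// so it becomes an uncaughtException and terminates the process
// with a non-zero exit code.
process.on('unhandledRejection', (reason, promise) => {
  console.error('Unhandled rejection at:', promise, 'reason:', reason);
  throw reason;
});
```

Many production setups already install exactly this kind of handler to fail fast on the first unhandled rejection.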

IMHO it would be better to throw an unhandled exception instead of immediately terminating the process. The exception could then be processed in process.on('uncaughtException'), which is usually used for cleanup and similar tasks. Terminating the process removes the possibility of performing a cleanup before exit.
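A minimal sketch of the cleanup idea described above, assuming the strict behavior routes rejections through uncaughtException (the handler body is illustrative):

```javascript
// Last-chance cleanup before the process dies. Only synchronous
// work is reliable here: close file descriptors, flush logs, etc.
process.on('uncaughtException', (err) => {
  console.error('Fatal error, cleaning up:', err);
  process.exitCode = 1; // still exit non-zero after cleanup
});
```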

@advanceddeveloper

The exception could then be processed in process.on('uncaughtException') which is usually used for cleanup and similar stuff.

That's the plan, and what we decided to do together in https://github.com/nodejs/node/pull/20097 - please consider going over the semantics there and leaving a comment - we are very open to feedback. Note it's a pretty lengthy read - sorry for that - I tried to outline most of it in https://github.com/nodejs/node/issues/20392#issuecomment-385241936 but I missed some things like uncaughtException.

Hi,
I'm not sure if this is the right thread for my issue, but I was searching for a place to raise my concern about the deprecation warning from the point of view of a consumer.
I have some async tasks which I compute in parallel, collecting the promises in an array. Then with

Promise.all([...]).then(function(results) {
  ...
}).catch(function(err) {
  ...
});

I do some work after all promises have fulfilled, and catch if one of the promises was rejected.
But I do get the deprecation warning with this coding style, and now I am afraid of the time when my process gets terminated. I believe I catch the error and do not expect an UnhandledPromiseRejectionWarning given the global catch.
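For reference, a self-contained version of the pattern described above (the task promises are invented for illustration); because the chain ends in a .catch, every rejection is handled:

```javascript
// Two hypothetical parallel tasks; the second one rejects.
const tasks = [
  Promise.resolve('ok'),
  Promise.reject(new Error('task failed')),
];

const handled = Promise.all(tasks)
  .then((results) => {
    console.log('all fulfilled:', results);
    return results;
  })
  .catch((err) => {
    // Runs as soon as the first rejection occurs; because the chain
    // is handled, no UnhandledPromiseRejectionWarning is emitted.
    console.error('caught:', err.message);
    return null;
  });
```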

Hi, it is impossible for the code above to generate an unhandled rejection since you are handling it (in a .catch).

Feel free to make a gist with an mcve and I promise to look into it.

In any case - terminating the process only happens on GC when Node can _prove_ that the rejection will never be handled.

@benjamingr Hi, you're right. I checked this situation again and realized I misinterpreted it. Thank you for clarifying that it is impossible! It makes future searches on the deprecation warning easier, knowing that the promise chain has to be broken somewhere.

I want to add a quick note of caution before things proceed too fast. This breaks a really important feature of the Promise spec, which is the ability to defer handling until later. I understand the rationale behind the change, and I'm not really against it... but I think it's important to communicate how to work around this change.

There are many useful patterns that use deferred handlers, and they're employed across hundreds of packages. These will start crashing with this change, and I think it's important that maintainers can quickly fix these without scouring the internet.

I was writing demos here (as this issue was pretty easy to find), but it got a bit long, so I moved it to medium.

EDIT: This is particularly important for libraries that support both callback and (native) Promise APIs, as any app using callbacks will begin crashing on each rejection!

@mike-marcacci I just read your article, but this is not how things are planned to work in Node.js 11. It is not even clear whether anything changes in 11, but one thing is clear: the default would not crash with your example. In this case nothing will change besides not receiving a deprecation warning anymore. The reason is that it is possible to have either false negatives or false positives, and the default should be safe to use.

There are many useful patterns that use deferred handlers, and they're employed across hundreds of packages.

We've actually surveyed the ecosystem quite extensively and have found it to be used very rarely in the wild if at all. In fact, people seldom silence the unhandled rejection warnings at all - a lot of people just terminate the process on first unhandledRejection.

Also, like @BridgeAR said - none of your examples crash in Node.js 11, all of them work as you intend. Node.js 11 promises will (hopefully) crash on GC.

Hey @BridgeAR and @benjamingr – thanks for your replies. I deleted that post shortly after your comments to avoid sowing misinformation. I'm really glad to hear I had misinterpreted the deprecation message, but I'm quite sure I'm not alone in that interpretation.

Is the warning implemented differently than the planned crash? You mentioned that this is safe:

const promise = Promise.reject(new Error("Something happened!"));
setTimeout(async () => {
  // You want to process the result here...
  try {
    const result = await promise;
    console.log(`Hello, ${result.toUpperCase()}`)
  }
  // ...and handle any error here.
  catch (err) {
    console.error("There was an error:", err.message);
  }
}, 100);

...but it currently warns that this behavior is deprecated and will crash in future versions of node.

@mike-marcacci that's a bug in our current implementation that Ruben's work will solve.

Note that by all means you _should not_ write code like this; the fact we _allow_ it, erring on the safe side as a platform, doesn't mean it's a good idea in general - it makes tooling a lot harder and it will still warn (just without the deprecation warning). This sort of code is risky IMO.

@benjamingr - thanks for confirming this. I agree that my above code is totally unnecessary, but it illustrates the mechanics behind more complex patterns that are extremely difficult to implement without the Promise API.

While I totally believe you've done your due diligence in surveying the ecosystem, I personally have used these patterns in a number of different projects (both open and closed source), and there are cases where they will crash even on GC. Is my "hack" still a valid technique for bypassing this behavior when I'm absolutely sure it's the right thing to do?

const promise = Promise.reject(new Error("Something happened!"));
promise.catch(() => {})

I certainly don't want to promote an anti-pattern or mislead people into thinking that this technique is generally a good idea; but at the same time, I want to make sure developers are equipped to handle this change when it's appropriate.

It's worth noting that attaching catch to promises at any time is part of the design, not an accident. I wouldn't consider a rejection truly unhandled unless the promise was GC'd without being handled. (Which I think is what @BridgeAR's PR does?)

Is my "hack" still a valid technique for bypassing this behavior when I'm absolutely sure it's the right thing to do?

Yes. That is entirely appropriate and the way to do a "late binding" of a catch handler. I have to say that I expected some 'noise' when we added the unhandled rejection warnings by default - people complaining about it - and no one has. In practice almost no one does this (although they are allowed to). Like subclassing promises - this is something people _can_ do but it ended up not being worth it.

It's worth noting that attaching catch to promises at any time is part of the design, not an accident.

I don't think we'd have this API if we designed it today - things looked very different when the Promises/A+ spec was made. Node is entirely at liberty to limit the design the same way we crash the process on errors (which browsers do not).

which i think is what @BridgeAR's pr does?

That is correct

Hi All,

I am having the same problem; I am new to Ionic. My project runs well in the browser simulator, but when I try to build the project I get the same error. I checked my code and I am using catch in all promises.

How to solve this?

(node:77411) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:77411) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

@flaviomx14 hey, this issue is not for support on Node.js problems. I recommend asking on https://github.com/nodejs/help/issues. Namely this issue tracker is for Node.js core development and not for support (there is a separate one for that which you're very welcome to use).

I'm going to go ahead and hide your comment as "off topic" and welcome you to post in nodejs/help. Cheers.

Was going to file a separate issue but this one already existed...

Due to #26599 we will now have a flag going forward, --unhandled-rejections, with three modes: none, warn, and strict, to allow for configuring the behavior. The default is warn, which is the same as the current behavior (I would expect none to be for people who know what they're doing!), whereas strict would produce an uncaughtException.

I am proposing that strict become the default at some point: even though this changes the expected behavior, it also discourages lazy or naive coding, which I have seen confuse numerous engineers who were not privy to the current non-intuitive behavior.

@mcollina, consider this my follow-up. :-)
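For reference, the flag described above would be used roughly like this (invocations are illustrative; the mode names come from #26599):

```shell
# strict: turn an unhandled rejection into an uncaughtException
node --unhandled-rejections=strict -e "Promise.reject(new Error('boom'))"

# warn: the current default - print a warning and continue
node --unhandled-rejections=warn -e "Promise.reject(new Error('boom'))"

# none: silence unhandled-rejection warnings entirely
node --unhandled-rejections=none -e "Promise.reject(new Error('boom'))"
```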

i'm still not convinced that it is the runtime's job to deal with "lazy" or "naive" coding. i mean, it doesn't warn every time you use == or call hasOwnProperty directly on an object... i don't see how this is any different. the runtime should just be focused on being correct (and in this case dying on unhandled rejections would mean some valid js code wouldn't run in node by default, which seems undesirable)

@devsnek

some valid js code wouldn't run in node by default, which seems undesirable

I disagree. Sometimes, things that _are_ possible should AFAIC only be possible as an opt-in. This is especially important since the language is used in lots of different contexts, and for a server-side language it's normally best to be safe rather than sorry.

I believe we should have defaults that serve the majority of our users best, and in this case I am very certain that most people would prefer the strict version as the default. That way fewer people have to make sure they actually change the default. Ideally we'll run a small survey to have actual proof of it.

I'll see if I add another short session for this topic at the collaborator summit.

Sorry for the noise, but I do not know where the documentation of recommended patterns for avoiding falsely triggering the message is. I am concerned that this pattern is still unsafe with the new behavior:

async function doSomethingAsync(x) {
    throw new Error(`fail ${x}!`);
}
(async () => {
  const doSomething0Promise = doSomethingAsync(0);
  const doSomething1Promise = doSomethingAsync(1);
  try {
    console.log(`result 0: ${await doSomething0Promise}`);
    console.log(`result 1: ${await doSomething1Promise}`);
  } catch (ex) {
    console.error(`failure: ${ex}`);
    process.exitCode = 1;
  }
})();

I value this pattern because it allows concise concurrency. I recognize that, in many cases, the rejections will not be “handled”. And the IIFE will exit without waiting for all promises to complete (making it different from Promise.all()) and I will only see the first awaited rejection/error (same as Promise.all()). But if I use it, I get the demonstrated rejections. Is there a solution for this pattern that doesn’t sacrifice concision or introduce complicated synchronization?

@binki This allows concurrency, but waits for all promises to resolve/reject:

const all = ps => new Promise((res, rej) => {
  const arr = ps.map(() => null); // results, in input order
  let n = 0, ok = 1, err;
  ps.forEach((pr, i) => {
    pr.then(r => arr[i] = r)
      .catch(e => ok && (ok = 0, err = e)) // keep only the first error
      .finally(() => {
        if(++n === arr.length)
          ok ? res(arr) : rej(err);
      });
  });
});

const doSomethingAsync = async x => {
  throw new Error(`fail ${x}!`);
};

(async () => {
  let results = all([
    doSomethingAsync(0),
    doSomethingAsync(1),
  ]);
  try{
    results = await results;
    console.log(`result 0: ${results[0]}`);
    console.log(`result 1: ${results[1]}`);
  }catch(ex){
    console.error(`failure: ${ex}`);
    process.exitCode = 1;
  }
})().catch(console.log);
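As an aside for readers on newer runtimes: Promise.allSettled (added in ES2020, so it postdates this thread) covers the wait-for-everything part, and the all helper above can be sketched on top of it:

```javascript
// Wait for every promise to settle, then either return all values
// or reject with the first error - mirroring the hand-rolled helper.
const allStrict = async (ps) => {
  const settled = await Promise.allSettled(ps);
  const rejected = settled.find((s) => s.status === 'rejected');
  if (rejected) throw rejected.reason;
  return settled.map((s) => s.value);
};
```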

@binki that code has a real bug - if the second function rejects, you don't handle the error, which has big consequences.

Consider Promise.all

@Hakerh400 @benjamingr In both of your solutions, you change the behavior of my code. You force the first console.log(`result 0:…`) to wait for doSomethingAsync(1) to complete needlessly. Also, you’re sacrificing concision. The only difference between using Promise.all() (such as in a finally {…} block) (with all the pomp and circumstance doing so requires) and my code is that my code’s function completes its Promise prior to the completion of doSomethingAsync(1) (but only if doSomethingAsync(0) throws).

I can see how my function not waiting for all of the Promises it is responsible for to run to completion is an issue, however. It is what people expect and you might assume that, once a function completes its Promise, it will no longer be accessing a shared resource such as a passed-in object. So I will try to avoid that pattern even though it is easier to write and more concise.

Since this conversation has been hitting my inbox again, I want to reiterate how bad of an idea I think this change is. There are _many_ scenarios in which a library might want to return a promise that the user may choose to handle or not.

  • Many projects (including those I've authored) are designed to work with promise-based patterns _and_ callback patterns; this change would cause crashes for anyone following the callback style, as the returned promises are unhandled.
  • Many projects (including those I've authored) have functions that are "best attempt" asynchronous side effects, like logging or unlocking a resource, where the vast majority of use cases don't need to wait for a success or failure.
  • Anywhere that user-land resource pooling exists, it is possible to discard unused promises that are processing something in the background. If this proposal is fully implemented, we will end up with this garbage everywhere.

Fundamentally, I think promise rejection is substantially different than "throwing" under normal synchronous flow.

Let's take this for example:

function a() { console.log("a"); return 1; }
function b(input) { throw new Error("oops"); return input + 2; }
function c(input) { console.log("c"); return input + 3; }

function run() {
  const resultA = a();
  const resultB = b(resultA);
  return c(resultB);
}

console.log("before");
const result = run();
console.log("after", result);

It's abundantly clear how the throw propagates. The behavior is deterministic and _necessary_, as we were unable to define resultB.

This scenario is very similar:

async function a() { console.log("a"); return 1; }
async function b(input) { throw new Error("oops"); return input + 2; }
async function c(input) { console.log("c"); return input + 3; }

async function run() {
  const resultA = await a();
  const resultB = await b(resultA);
  return c(resultB);
}

console.log("before");
const result = run();
console.log("before 2");

...but has a critical difference: the expression console.log("before 2"); does not and cannot depend on the resolved value result. The throw propagates through all chained promises, and when it stops, there is no remaining undefined behavior! No piece of code is left in an unclear state, and therefore there is no reason to crash.

To be perfectly frank, this proposal seems far more about creating the appearance of safety than addressing an actual deficit in application correctness. I'm not questioning the value in detecting unhandled promises (resolved OR rejected) as a development tool for calling attention to a potentially undesired flow... but just like other lint rules, this belongs in tooling and NOT the execution environment.

Just my 2 cents.

Many projects (including those I've authored) are designed to work with promise-based patterns and callback patterns; this change would cause crashes for anyone following the callback style, as the returned promises are unhandled.

A better pattern is to not return a promise if a callback is passed in. If the internals themselves use a promise, do .catch(callback) and return void. If the internals use callbacks, there's no need to create a promise. Either way, there's no risk of an unhandled rejection.
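A sketch of that approach (function and parameter names are made up for illustration): return a promise only when no callback is supplied, so neither calling style can leave a rejection dangling.

```javascript
// Hypothetical dual-mode API: callback style or promise style.
function readConfig(path, callback) {
  const work = Promise.resolve({ path }); // stand-in for async internals

  if (typeof callback === 'function') {
    // Callback style: route both outcomes into the callback and
    // return nothing, so no unhandled promise can escape.
    work.then(
      (result) => callback(null, result),
      (err) => callback(err)
    );
    return;
  }

  // Promise style: the caller owns rejection handling.
  return work;
}
```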

@binki Your "pattern" is perfectly reasonable IMO, but my point was that there is no special functionality that Node.js needs to implement in order to make your use case work properly. The current Promise API is enough. I can't tell for sure what exactly you want to achieve, but I guess you want to:

  • Call multiple async functions in order to schedule parallel asynchronous actions
  • Obtain results of resolved promises without waiting for all promises to finish
  • Rejected promises should wait until all other promises resolve
  • The order of promise resolution does not matter (otherwise your example is in contradiction with your requirements)

The following example encompasses all four requirements above:



const iterate = ps => ({
  [Symbol.asyncIterator](){
    const r = [], f = [], e = [];
    let n = 0;

    ps.forEach((pr, i) => {
      pr.then(val => r.push([val, i]))
        .catch(err => e.push([err, i]))
        .finally(() => { n++; next(); });
    });

    const next = () => {
      if(f.length === 0) return;

      const [res, rej] = f;

      if(r.length === 0){
        if(n === ps.length){
          f.length = 0;
          // Only reject if something actually failed; otherwise
          // signal normal completion of the iteration.
          if(e.length !== 0) rej(e);
          else res({done: true, value: undefined});
        }
        return;
      }

      f.length = 0;
      res({
        done: false,
        value: r.shift(),
      });
    };

    return {
      next(){
        return new Promise((res, rej) => {
          f.push(res, rej); next();
        });
      },
    };
  },
});

const doSomethingAsync = async x => {
  if(x === 51 || x === 75) throw new Error(x);
  return new Promise(r =>
    setTimeout(() => r(x), Math.random() * 1e3),
  );
};

(async () => {
  const promises = [];

  for(let i = 0; i <= 100; i++)
    promises.push(doSomethingAsync(i));

  try{
    for await(const [result, i] of iterate(promises))
      console.log(`result ${i}: ${result}`);
  }catch(errs){
    for(const [err, i] of errs)
      console.log(`failure: ${err} (at index ${i})`);
    process.exitCode = 1;
  }
})().catch(console.log);

Also, you’re sacrificing concision.

If you look closely at the example, the main code is actually very simple and concise. The most complex part is the function iterate, but you need to define it only once and then simply use it in your code.

In this example, we spawn 101 promises. All promises resolve after a random timeout between 0ms and 1000ms, except when x is 51 or 75, in which cases we throw an error without any timeout. This is handled in the following way: as soon as a promise resolves, you can process the result immediately, but the order is not preserved (the parameter i reveals the index).

You probably won't find a simpler solution. Also, if this is not exactly what you want (for example, if you want to ignore other errors as soon as a single rejection happens, etc.), you can easily modify the iterate function to achieve that.

@vweevers I totally agree in terms of best practice, and that's why I think this kind of behavior makes sense as some kind of development tooling. I am all in favor of an event for the garbage collection of _any_ unhandled promise (resolved or rejected) and its use by test runners. (I believe this already exists, at least for unhandled _rejections_.)

The problem with this proposal is that it is a serious change to the execution environment. For example, variable shadowing is another perfectly valid feature in javascript that is avoided by best practice:

var foo = "foo";
function inner() {
  var foo = "bar";

  // ...many lines of code

  console.log(foo);
}

While this is perfectly valid, we know from experience that this can be tricky for humans, which makes the pattern prone to mistakes. However, it doesn't make the code _incorrect_ or _unsafe_. This kind of scenario is exactly why we have tooling like linters. Sure, node (well, V8 really) could decide that shadowing is potentially too problematic and decide to stop supporting it... but that would break lots of perfectly valid code.

At the end of the day, it just isn't the role of a "javascript execution environment" to make these kind of decisions. It _is_ its role to provide potentially opinionated tools the ability to inspect execution, and that's something I'm totally for... I just don't see any actual value in crashing perfectly valid code that's following unambiguous and well understood rules... especially in response to unpredictably timed events (garbage collection) 🤷🏻‍♂️

Interestingly, I made the exact same point a while back (although I used == as my example), and it didn't seem to convince those pushing for terminate by default.

I really really like the current default of warning. I've made two language VMs with promises, where I found that not having those warnings is rather annoying.

Even if terminate is the default, there is no reason why you can't do all the things you describe above by recreating the current implementation of the unhandledRejection event. The biggest problem with the current implementation is not the specific use cases you are describing, but rather that it does not make sense at a core level. The following errors are handled differently:

function foo() {
  throw new Error("I terminate the process unless I'm caught");
}
async function foo() {
  throw new Error("I don't terminate the process because I'm thrown in an async function");
}

This is a fundamental difference between things that look like they should do the same thing.

I'm not denying that the current implementation can be useful, just like it can be useful to not terminate on an unhandled sync error. But as a default, it's bad language design. An uncaught error should always terminate the process. It should not matter if it's a sync error or an async error (which we sometimes call a rejection).

@binki your current code has a very real bug. You are welcome to see the talk Ruben gave about it, as well as the use-cases repo.

The correct way to do so safely in a way that does not endanger the process is:

async function doSomethingAsync(x) {
    throw new Error(`fail ${x}!`);
}
(async () => {
  const doSomething0Promise = doSomethingAsync(0);
  const doSomething1Promise = doSomethingAsync(1).catch((e) => {
     // deal with the possible rejection - so that you don't leak resources
  });
  try {
    console.log(`result 0: ${await doSomething0Promise}`);
    console.log(`result 1: ${await doSomething1Promise}`);
  } catch (ex) {
    console.error(`failure: ${ex}`);
    process.exitCode = 1;
  }
})();

Or as suggested before with Promise.all (you can log for example in a then of the first promise without awaiting it).
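For completeness, the Promise.all variant might look like this (reusing the hypothetical doSomethingAsync from the comment above):

```javascript
async function doSomethingAsync(x) {
  throw new Error(`fail ${x}!`);
}

(async () => {
  try {
    // Both calls start immediately; Promise.all marks every input
    // promise as handled, so neither rejection can go unobserved.
    const [result0, result1] = await Promise.all([
      doSomethingAsync(0),
      doSomethingAsync(1),
    ]);
    console.log(`result 0: ${result0}`);
    console.log(`result 1: ${result1}`);
  } catch (ex) {
    console.error(`failure: ${ex}`);
    // (The original example also sets process.exitCode = 1 here.)
  }
})();
```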

Users posting these sorts of cases are actually a really strong motivation for us to do this. This is _not_ a false positive; do _not_ ignore this warning. If ignored, it can cause terrible cascading failures on your servers, just like uncaughtException.


@mike-marcacci :

Many projects (including those I've authored) are designed to work with promise-based patterns and callback patterns; this change would cause crashes for anyone following the callback style, as the returned promises are unhandled.

That is pretty easy to deal with from the project's side in several ways - for example:

  • Either don't reject the promise if a callback is passed in.
  • Add an empty p.catch(() => {}) to it to mark it as handled.

Many projects (including those I've authored) have functions that are "best attempt" asynchronous side effects, like logging or unlocking a resource, where the vast majority of use cases don't need to wait for a success or failure.

That is absolutely fine, then don't cause an exceptional state there. This is literally what exceptional means :] Just don't reject in your logs. This is exactly like a synchronous log throwing an uncaught exception. If you are worried you can .catch those promises as well.

Anywhere that user-land resource pooling exists, it is possible to discard unused promises that are processing something in the background. If this proposal is fully implemented, we will end up with this garbage everywhere.

I disagree with calling it "garbage"; if you have an error which is uncaught and you don't want to handle it, you have to be explicit. You could write a helper for this. We even added one to bluebird at one point.


Fundamentally, I think promise rejection is substantially different than "throwing" under normal synchronous flow.

I find it hard to sympathise with this view point since async/await.

...but has a critical difference: the expression console.log("before 2"); does not and cannot depend on the resolved value result. The throw propagates through all chained promises, and when it stops, there is no remaining undefined behavior! No piece of code is left in an unclear state, and therefore there is no reason to crash.

This is precisely the same case as uncaught exceptions. In that particular case there is no issue in not crashing on uncaught exceptions as well.

But as a default, it's bad language design.

But who is Node.js to decide to change that. Node is some utils on top of JavaScript, and JavaScript decided that chains of promises die, not the entire agent. I think the tool of "die on promise rejection" can be useful in some cases, but it should never be the default.

Hi @benjamingr thanks for your comments.

Fundamentally, I think promise rejection is substantially different than "throwing" under normal synchronous flow.

I find it hard to sympathise with this view point since async/await.

But these _are_ entirely different things – don't let the syntactic sugar confuse you. A synchronously thrown error propagates up a _single_ context until it hits the stack root, where node crashes to prevent undefined behavior. A promise, on the other hand, can be forked any number of times and is designed to be idempotent and free of side-effects. Dependent paths are not defined by a semicolon, but by callbacks chained with .then, .catch, and .finally, even if the async/await syntax is used. If there is nothing chained with .catch, then we can be certain that there are no dependent paths, and so crashing the entire application does nothing for ensuring correctness; it only serves to crash nondependent execution paths still running in the event loop.

Consider this example:

async function foo () {
  const a = await someAsyncFunction();

  // This function is unhandled here but returns a promise.
  someOtherAsyncFunction();

  return a;
}

By the time the promise returned by someOtherAsyncFunction gets rejected, garbage collected, and crashes the app, foo _has already returned_. If the goal in crashing is to prevent execution of behavior that _may_ have been intended to be marked as dependent on the promise, then we've failed at the goal, since the behavior _has already happened_.


Finally, it's extremely odd to me that we're just talking about the case of promise _rejection_. Even though I adamantly disagree with this proposal, it seems internally inconsistent that this proposal would not crash on the garbage collection of an unhandled _resolved_ promise. From the perspective of node, this is no different a scenario, as the user-land code had no way of knowing whether it was rejected or resolved; it simply never attached a handler.

This observation reinforces my opinion that the goal of this proposal is ambiguous. Can anyone lay out what the problem is that's trying to be solved here – not an "effect" of the proposed change, but the actual problem for which this proposal was designed? To me, this proposal is a solution looking for a problem.


EDIT: Rereading this, I just want to disclaim the sharpness of its tone; while I stick to my points, it does sound conspicuously like my pre-coffee brain... 😬 To be clear, I'm completely under the assumption that this proposal was put forth with the best intentions, even if I disagree with it.

Hi, thanks all for your responses.

@Hakerh400

@binki Your "pattern" is perfectly reasonable IMO, but my point was that there is no special functionality that Node.js needs to implement in order to make your use case work properly. The current Promise API is enough. I can't tell for sure what exactly you want to achieve, but I guess you want to:

  • Call multiple async functions in order to schedule parallel asynchronous actions

  • Obtain results of resolved promises without waiting for all promises to finish

  • Rejected promises should wait until all other promises resolve

I kind of want this. But this is hard to get with the Promise API because of how Promise.all() works. See below in my response to benjamingr.

  • The order of promise resolution does not matter (otherwise your example is in contradiction with your requirements)

Right. If I call multiple APIs that return Promises, they are independent actions. There’s no way (without making the APIs really complicated) to require one to resolve prior to another. However, the order in which I utilize the resolved values of promises does matter. This is what async/await syntax is great for. You can transform code which needlessly uses something like Promise.all() into code which awaits exactly the values you need when/where you need them. If you are using Promise.all() for anything other than list/batch processing, you are likely micromanaging asynchronous operations needlessly.

The following example encompasses all four requirements above:
«snip»
You'll probably not find a simpler solution. Also, if this is not exactly what you want (for example if you want to ignore other errors as soon as a single rejection happens, etc), you can easily modify the iterate function to achieve that.

Your solution offers what Promise.all() doesn’t—waiting for all Promises even if one is rejected. It seems like something which should be provided by the ES standard itself, because many people would want it: it provides resource utilization backpressure(?).

I am sorry, but I do feel like I need to offer a nitpick. If construction of the list of Promises fails partway, before your iterator() gets to it (for example, if doSomethingAsync() were implemented without using async and directly threw instead of returning a rejected Promise), then any earlier rejected Promises in the list would be unobserved.

I found a simpler solution which doesn’t have that issue:

function markCaught(promise) {
  promise.catch(() => {
  });
  return promise;
}
const p1 = markCaught(doSomethingAsync());
const p2 = markCaught(doSomethingElseAsync());
if (await p1) {
  // e.g., no point in waiting for p2 if you don’t care about its result (would be nice to cancel it, though)!
  doSomethingWith(await p2);
}

@benjamingr

@binki your current code has a very real bug.

Could you please expound on what that bug is?

Consider Promise.all

I have considered the behavior of Promise.all().

  • Marks all Promises passed to it as “rejection handled”.
  • Resolves only if and once all Promises passed to it resolve.
  • Rejects immediately on the first Promise passed to it that rejects.

Note how Promise.all() handles rejection. It stops waiting for Promises passed to it to resolve or reject once any one of them rejects. Thus, if you use Promise.all() and one of the Promises passed to it rejects, your code will exit prior to the other Promises passed to it rejecting.

These two code chunks are functionally equivalent:

const promises = [];
try {
  const promise1 = doSomethingAsync();
  promises.push(promise1);
  const promise2 = doSomethingElseAsync();
  promises.push(promise2);
  doAnotherThing(await promise1);
  doYetAnotherThing(await promise2);
} finally {
  await Promise.all(promises);
}
const promise1 = doSomethingAsync();
promise1.catch(() => {
});
const promise2 = doSomethingElseAsync();
promise2.catch(() => {
});
doAnotherThing(await promise1);
doYetAnotherThing(await promise2);

Thus, my workaround is markCaught() (see above):

You are welcome to see the talk ruben gave about it

At your recommendation, I watched this video. I feel like I did not get a clear explanation of why I would want to handle rejections that I don’t care about from the video. My intuitive understanding is that, yes, awaiting things is critical. However, if dealing with independent processes and I have an opportunity for concurrency where 1. I need all requirements to succeed before I can continue 2. any one of these requirements failing prevents me from using any of the information loaded by the other requirements, there’s no reason to care about the rejections beyond the first encountered one. This is actually exactly the semantics Promise.all() gives you if it fits your problem (i.e., if you are dealing with a collection of “similar” things or you simply have a list of processes which must proceed but do not need to collect/collate/compile the results of any of them together).

To sum up, if I don’t await something, then there are a few possibilities. Either I have a bug resulting in a race condition where I forgot to await something. Or I started an asynchronous process which would have been useful to me, except some other asynchronous process which I required failed, preventing me from ever awaiting the completion of the one asynchronous process. Or I had optimistically started the operation, but due to a condition check it was found I didn’t actually need to utilize the result of the operation.

Yes, ideally, I might wait for all of the asynchronous processes I launched to run to completion prior to resolving or rejecting the Promise returned by the function I am authoring (likely implicitly via async). However, Promise.all() does not support that style. I.e., it behaves quite differently from .net’s Task.WhenAll() which does wait for all Tasks to run to completion.

It seems like the unhandled rejection warning, which I do like, is not so much about trying to get developers to handle rejection as it is to try to make sure people don’t forget to await things. Would be nice to have the option to crash the process when a Promise resolves without a success handler (except, of course, for the top-level stuff). I say this somewhat in jest and somewhat seriously.

as well as the use-cases repo.

This use-case matches my coding pattern, especially coming from .net which used to crash the process by default but no longer does. Why doesn’t the use-case explain why it is considered bad and provide refactoring suggestions?

The correct way to do so safely in a way that does not endanger the process is:

async function doSomethingAsync(x) {
    throw new Error(`fail ${x}!`);
}
(async () => {
  const doSomething0Promise = doSomethingAsync(0);
  const doSomething1Promise = doSomethingAsync(1).catch((e) => {
     // deal with the possible rejection - so that you don't leak resources
  });
  try {
    console.log(`result 0: ${await doSomething0Promise}`);
    console.log(`result 1: ${await doSomething1Promise}`);
  } catch (ex) {
    console.error(`failure: ${ex}`);
    process.exitCode = 1;
  }
})();

I would not like to swallow the exception. I want it thrown by my await doSomething1Promise—but only if that line is reached. My above solution, markCaught(), provides this behavior which I consider much better than swallowing the exception and which is indistinguishable from the Promise.all() approach.

Your solution offers what Promise.all() doesn’t—waiting for all Promises even if one is rejected. It seems like something which should be provided by the ES standard itself, because many people would want it: it provides resource utilization backpressure(?).

It's already there, it's Promise.allSettled().
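A quick sketch of what that gives you: Promise.allSettled() (available since Node 12.9) waits for every promise and describes each outcome, instead of short-circuiting on the first rejection.

```javascript
Promise.allSettled([
  Promise.resolve(42),
  Promise.reject(new Error('nope')),
]).then((results) => {
  console.log(results[0]);                   // { status: 'fulfilled', value: 42 }
  console.log(results[1].status);            // 'rejected'
  console.log(results[1].reason.message);    // 'nope'
});
```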

$ node --version
v12.13.0

Regardless of deferred promise handling concerns: if, by default, Promise-based code behaved like non-Promise code when it comes to dumb mistakes, such that the process terminates, that would be a happy situation. Please return to this by default:

$ node -p "const {blah} = null"
[eval]:1
const {blah} = null
       ^

TypeError: Cannot destructure property `blah` of 'undefined' or 'null'.
    at [eval]:1:7
    at Script.runInThisContext (vm.js:116:20)
    at Object.runInThisContext (vm.js:306:38)
    at Object.<anonymous> ([eval]-wrapper:9:26)
    at Module._compile (internal/modules/cjs/loader.js:956:30)
    at evalScript (internal/process/execution.js:80:25)
    at internal/main/eval_string.js:23:3
$ echo $?
1

Which we can now have by providing --unhandled-rejections=strict, but just not by default.

$ node -p "(async () => await new Promise(() => {const {blah} = null}))()" --unhandled-rejections=strict
Promise { <pending> }
[eval]:1
(async () => await new Promise(() => {const {blah} = null}))()
                                            ^

TypeError: Cannot destructure property `blah` of 'undefined' or 'null'.
    at [eval]:1:45
    at new Promise (<anonymous>)
    at [eval]:1:20
    at [eval]:1:61
    at Script.runInThisContext (vm.js:116:20)
    at Object.runInThisContext (vm.js:306:38)
    at Object.<anonymous> ([eval]-wrapper:9:26)
    at Module._compile (internal/modules/cjs/loader.js:956:30)
    at evalScript (internal/process/execution.js:80:25)
    at internal/main/eval_string.js:23:3
$ echo $?
1

That Promises even allow hiding such errors is truly unfortunate: at best it promotes sloppy code, and more commonly it is the cause of significant problems.

This just shouldn't be possible (and neither should the default of only emitting a warning):

$ node -p "(async () => await new Promise(() => {const {blah} = null}))()" --unhandled-rejections=none
Promise { <pending> }
$ echo $?
0

There should be a clear delineation between an explicit rejection/throw of a Promise and code "crashing" as such. Treating syntax, type, reference errors etc as rejections is wrong.

If I want to wrap all of my code in try{}catch(){} I can stupidly do so, but with Promises I'm essentially forced into that by default. It is wrong.

Treating syntax, type, reference errors etc as rejections is wrong.

UNIX philosophy: if everything goes well, say nothing; if something goes bad, crash as soon and as noisily as possible. I prefer to have an error crash my code and force me to do something about it rather than have some kind of unexpected behaviour.

If I want to wrap all of my code in try{}catch(){} I can stupidly do so, but with Promises I'm essentially forced into that by default. It is wrong.

No, it's a problem of async functions. A lot of people confuse async/await with Promises; the more you use async functions, the more you hate them.

Treating syntax, type, reference errors etc as rejections is wrong.

UNIX philosophy: if everything goes well, say nothing; if something goes bad, crash as soon and as noisily as possible. I prefer to have an error crash my code and force me to do something about it rather than have some kind of unexpected behaviour.

I think we are meaning to agree. Things should crash. It should not be possible by default that such errors exist and the code keeps executing.

If I want to wrap all of my code in try{}catch(){} I can stupidly do so, but with Promises I'm essentially forced into that by default. It is wrong.

No, it's a problem of async functions. A lot of people confuse async/await with Promises; the more you use async functions, the more you hate them.

The same issue exists without using async/await. The examples given could be rewritten without using them.

No, it's a problem of async functions. A lot of people confuse async/await with Promises; the more you use async functions, the more you hate them.

This has not been our experience or our data for what it's worth. Async/await is widely popular with callbacks accounting for over 90% of usage 6 years ago but less than 5% nowadays.

current situation seems fine to me. we have a default of warn so valid programs aren't terminated, and you can use a flag to make it fatal, silent, etc.

For unhandled rejections, having the flag is fine. However syntax, type, reference errors and similar should not be a rejection but always be a fatal error. Outside of a promise those are fatal errors. Just because such broken code is executed inside of a promise should not make it non fatal.

syntax errors won't let your code run. if you want to change the language you should check out https://github.com/tc39/ecma262/blob/master/CONTRIBUTING.md#new-feature-proposals.

@devsnek Node.js is primarily a server-side language and has every reason to be stricter than ecmascript. In a browser (where you're already sandboxed, typically only worrying about the security of a single user, where crashing isn't an option, etc) I can understand why you'd want a reference error like this to only kill the promise chain:

;(async function example () {
  nope
})()

But on a server, I want this to crash loudly.

then you can set this flag: https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode

also, nodejs isn't a language at all, it's a javascript runtime.

also, nodejs isn't a language at all, it's a javascript runtime.

Yes. Rephrased: crashing is the best default for a server-side javascript runtime.

syntax errors won't let your code run. if you want to change the language you should check out https://github.com/tc39/ecma262/blob/master/CONTRIBUTING.md#new-feature-proposals.

Which is exactly the point - until the language can be made less nonsensical, the nodejs runtime needs to crash loudly by default for unhandled promise rejections. A view that the maintainers of nodejs agree with, given the warning currently emitted. It's time to move forward with what is stated in the warning.

i don't think it is node's job to patch javascript. we've added flags and runtime apis you can use to bend promises to your will, but node itself is still a js runtime, and i think by default it should conform to js's design.

I don’t think that argument stands well. The Node runtime runs in a fundamentally different environment and should have runtime functionality that suits it.

We already diverged on regular errors, the community is used to it, and at least the majority of people I’ve ever worked on node projects with liked that.

(Also I’ve seen even the most experienced node engineers repeatedly make mistakes with promises. I personally feel our continued lack of action on this problem is inexcusable.)

Since we added the warnings by default, I haven't seen a single person complain that they're missing errors (people are only here in this issue because of the enormous warning we print out). The issue is now about production system failure behaviour. I'd argue that's literally the perfect case of using a flag, not changing a default.

Maybe we could make a "strict" mode of node, where everything is "use strict", double equals is a syntax error, promises kill the process, etc.

The issue is now about production system failure behaviour.

Not for me. I want node to crash by default everywhere.

Since we added the warnings by default, I haven't seen a single person complain that they're missing errors

I see complaints and confusion about this literally every week - but I am too tired after a few years of debating this and honestly I am not sure that the strict default is good enough right now to enable.

This is mostly blocked on someone pushing for this. The current default with the warning is infinitely better than what we had a few years ago anyway.

I think we are meaning to agree

I think so, I believe it should crash.

This has not been our experience or our data for what it's worth. Async/await is widely popular with callbacks accounting for over 90% of usage 6 years ago but less than 5% nowadays.

Async functions are obviously better than callbacks, there's no doubt about that, but I find a lot of people make mistakes and have performance problems by misusing them, and using raw Promises is often better in many cases. This can be just opinion or personal experience, though...

Async functions are obviously better than callbacks,

That is an opinion, I also hold this opinion but it's important to remember that it's just that - an opinion :]

I find a lot of people make mistakes and have performance problems by misusing them,

Well, any such case you can list and we can improve on would be appreciated. We definitely can and should make the async experience in Node significantly better :]

That is an opinion, I also hold this opinion but it's important to remember that it's just that - an opinion :]

Touché ;-)

Well, any such case you can list and we can improve on would be appreciated. We definitely can and should make the async experience in Node significantly better :]

The misconception I've found the most BY FAR is that people think that await makes code magically sync; they don't understand the code is still async and that await wraps it in a Promise under the hood.

Another problem I've found is people overusing them and preventing code from executing in parallel, for example by using several await fetch() calls to download several files instead of combining them with Promise.all(). In fact, unluckily I've seen almost no usage of Promise.all() in others' code, at least at the places I have been working, but await is being (badly) used everywhere, also by newbies, just because they are used to sequential programming.

@piranna

The misconception I've found the most BY FAR is that people think that await makes code magically sync; they don't understand the code is still async and that await wraps it in a Promise under the hood.

I think that's a bit of why this is beautiful :] With callbacks, people would have never made that mistake because their code would _look_ async. The fact async/await makes asynchronous code look synchronous and writing asynchronous code as easy as writing synchronous code is part of why it's so appealing in my opinion.

Anything we can do to help with this in core in terms of developer ergonomics?

Another problem I've found is people overusing them and preventing code from executing in parallel, for example by using several await fetch() calls to download several files instead of combining them with Promise.all()

Sadly, that echoes what I've also heard, and we have known about these issues since before promises were introduced to Node (here is a question I opened and answered about it on Stack Overflow in 2014). If you have any creative ideas about how we can help here other than education, I'm all ears.

Worse, often people will do things in parallel without Promise.all like here which is one of the reasons I am still interested in eventually landing this or a similar change (terminating on rejections).

I feel like this issue is getting a bit off track; people have different opinions on what the default should be, and I doubt any consensus will come out of this conversation. That being said, I still believe we have room for improvement.

Currently, the command node -e "setImmediate(async () => unknownVariable)" exits with code 0 and outputs the following output:

(node:18528) UnhandledPromiseRejectionWarning: ReferenceError: unknownVariable is not defined
    at Immediate.<anonymous> ([eval]:1:26)
    at processImmediate (internal/timers.js:439:21)
(node:18528) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This
error originated either by throwing inside of an async function without a catch
block, or by rejecting a promise which was not handled with .catch(). (rejection
id: 1)
(node:18528) [DEP0018] DeprecationWarning: Unhandled promise rejections are
deprecated. In the future, promise rejections that are not handled will
terminate the Node.js process with a non-zero exit code.

My proposal would be to add information on the CLI flag in the warning message, so devs that want node to crash on async errors know what to do. Something like:

(node:18528) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This
error originated either by throwing inside of an async function without a catch
block, or by rejecting a promise which was not handled with .catch(). If you
want node process to terminate on unhandled promise rejection, use the CLI flag
`--unhandled-rejections=strict`. (rejection id: 1)

That would indicate to the users how to change the default behavior, without disrupting users that rely on the current default behavior.


Looking at this thread, it seems that not everybody likes having to use a CLI flag. I reckon it could be handy to have a way to say in the code "I want my async functions to crash the process", a bit like we have "use strict" since ES5; but I agree with @devsnek that's out of the scope of Node.js. If you feel strongly about that, you probably want to push the decorator proposal - maybe I'm just daydreaming, but I imagine we could have something like:

@ThrowOnUnhandledRejection
async function functionName() {
  // in an ideal world, calling this function would crash the process
  unknownVariable;
}

If that's something you are interested in, I'm afraid this repo is not the right place to discuss that; let's try to keep a constructive discussion on how to improve node while sticking to the current ES specs.

@benjamingr

I see complaints and confusion about this literally every week - but I am too tired after a few years of debating this and honestly I am not sure that the strict default is good enough right now to enable.

This is mostly blocked on someone pushing for this. The current default with the warning is infinitely better than what we had a few years ago anyway.

I really want process termination on by default too. For now, I’m writing some boilerplate in every project that exits the process on unhandled rejection instead. Is there a recommended npmjs package which I could just import that does this for me?
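One possible shape of such boilerplate (an assumption on my part, not any specific package): escalate unhandled rejections to uncaught exceptions, so the process exits with a non-zero code — roughly what --unhandled-rejections=strict does.

```javascript
// Re-throwing from the handler turns the rejection into an uncaughtException,
// which by default terminates the process with a non-zero exit code.
process.on('unhandledRejection', (reason) => {
  throw reason;
});
```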

Unfortunately, the die-on-unhandled behavior wasn’t available when native Promise support was introduced to nodejs. So this will be a breaking change. And it will also break some “pure JavaScript” code (which doesn’t use any nodejs-specific APIs) which behaves correctly but just doesn’t explicitly handle all faults. It’s not the same thing as changing/removing a core nodejs API. So nodejs turning this into opt-out will mean some codebases managed by people who don’t directly care about nodejs support will have to make QA changes—which I do think is a good thing, just unfortunate. People value being able to continue running unaltered ancient code, and for these people adding a CLI flag (to opt out) might be considered a burdensome code change. We likely won’t hear from some people who care a lot until the behavior is actually changed to be opt-out.

So, can this somehow be put on a roadmap for some known version of nodejs? nodejs 15?

@aduh95
I think it unlikely to work to have decorator/per-function/per-file scoping of this behavior. It gets hard to tell if a thrown error should be considered belonging to the function/file you marked. Also, the decorator is kind of the same thing as having a CLI flag. People who want the default changed want to write clear, concise code and have the system do the “least wrong” thing when something bad happens (assuming you write your code to handle a power outage correctly (e.g., using journals, transactions), a process crash with a non-zero exit code should at least not itself cause data corruption).

Is there a recommended npmjs package which I could just import that does this for me?

You can run node with --unhandled-rejections=strict

For now, I’m writing some boilerplate in every project that exits the process on unhandled rejection instead. Is there a recommended npmjs package which I could just import that does this for me?

@binki make-promises-safe

  1. I agree that many do not understand promises when adopting async/await, and expect that they are now getting sync-style coding ability for free when moving off of callbacks; in fact they don't even want to think about Promises. Without understanding Promises, or that an async function returns a promise, yes, async/await gets misused. Which is a major reason why unhandled rejections need to terminate the runtime by default: many developers don't know better.

  2. A significant out-of-the-box gap with Promises is the inability to wait for all promises to resolve or reject before moving on. With Promise.all, it short-circuits on the first rejection. So it is useless if you need to process all results even if one or more reject. Between developers not thinking about Promises and/or needing to deal with all async results, Promise.all goes unused. (Yes, there are ways to deal with this, but again, those that are writing the bad code obviously are not going to do so without education) _(Edit: undo my edit, Promise.allSettled didn't show up in node until v12.9 apparently, so I'm glad to learn about it)_

  3. The recommendation in this thread to just use .catch() all over the place is also unfortunate, and it highlights the problem I attempted to convey earlier in the thread. Promise based code puts everything essentially in try{}catch(){} wrappers that hide all of the types of bad code being written, not just legitimate rejections.

  4. --unhandled-rejections=strict should be default, as those that do not know better, need to have the process terminate to assist in highlighting a significant problem. For those that have legitimate use of unhandled rejections, they know better and will know enough to set --unhandled-rejections=warn or none.

It ain't just about bad code. Crashing is the simplest possible way out. That makes for good code.

rejection isn't inherently exceptional though.

@aduh95

My proposal would be to add information on the CLI flag in the warning message, so devs that want node to crash on async errors know what to do. Something like:

I think this is acceptable and feel free to make such a PR and I will review it. If you are not sure how please feel free to mail me (my email is in the project home page).


@devsnek I don't think that this discussion is new or I will convince you but the fact awaiting a rejected promise _literally throws_ - a promise rejection isn't inherently exceptional exactly the same way a thrown error isn't inherently exceptional.


@binki we can raise this again at the next summit. We've already had consensus to terminate on unhandled rejection twice in the past in summits. We can bring this up for a vote next time as well. Maybe @mcollina or @BridgeAR can carry this the extra mile if they have time.

I am honestly exhausted by this particular issue - I am content with the warning in the meantime and just work around it by running with NODE_FLAGS setting strict.

@benjamingr you definitely won't like my opinions about thrown errors :P

In any case, I'm also content with warning.

@benjamingr I think we just need a PR that flips the default and follow the process. We are likely going to need a tsc vote anyway.

@mcollina sure - although the code changes would need to also probably change all our test defaults and the docs it would be a pretty small change and if you make it I can happily review it. I am honestly a bit terrified to make the change but I don't mind doing the actual changes if you make the PR :D

I don't vote - but you know where my vote and probably the majority of the TSC is :]


@devsnek

you definitely won't like my opinions about thrown errors :P

Contrary to what impression some people might have I am not concerned with not liking opinions and I have always welcomed you and your opinion and contributions into the discussion.

This is a hard question where we don't really have a clear answer and having more opinions and discussion - as exhausting as it has been - is definitely not a bad thing especially given the defaults.

And... if you manage to make errors safe in Node servers I will be the first person to embrace the new error model. To be clear: I don't think anyone _likes_ the fact errors terminate Node and we are all painfully aware of the DoS attack it invites if you find an endpoint that does a JSON.parse with an error and terminates your server. Our error handling model is above all a leaky abstraction and technical debt - however it is what it is and it should be consistent between synchronous and asynchronous code. The alternative (inviting leaks and cascading failures) is a lot worse in my opinion.

if you have any creative ideas about how we can help here other than education I'm all ears.

Trace when returned promises are actually needed, so it's possible to know when an await can be bypassed and executed in parallel.

Trace when returned promises are actually needed, so it's possible to know when an await can be bypassed and executed in parallel.

Can you elaborate on that? We have async_hooks that allow tracking the dependencies between promises and we can detect when they explicitly wait for each other but state dependencies can be implicit

let name = '';
await p1.then(() => name += 'benjamin');
await p2.then(() => name += ' gruenbaum');

In this case p1 and p2 don't depend on each other but cause side effects that depend on each other in a way that is hard to observe.

@benjamingr, my idea is exactly as you have shown in your example. Problem is, as you have shown, that there could be some issues when mixing sync and async code, mostly due to badly written code... :-/ Anyway, in your example, await only applies to the Promises returned by the .then() calls at the end of the chain, so all the intermediary Promises (in this case p1 and p2) can be evaluated in parallel. This code could be optimized to run in parallel, something like

let name = ''

await Promise.all([p1, p2])
.then(() => name += 'benjamin')
.then(() => name += ' gruenbaum')

In your code, p2 can resolve before p1, but it doesn't get evaluated until then. With the new one, it doesn't matter which one finishes first. There's an issue, though: in your code 'benjamin' is appended just when p1 is resolved, so a closer transform would be

let name = ''

await Promise.all([
  p1.then(() => name += 'benjamin'),
  p2
])
.then(() => name += ' gruenbaum')

This way, execution is exactly the same. Anyway, I think the advantage could only be seen in bigger and more complex cases...

The fact the state change was in a then was just to make it simple to see. In the real world this can come from inside a regular async function (or mutating a global variable, or state that isn't even in the Node server, like state in a database) :]

The fact the state change was in a then was just to make it simple to see. In the real world this can come from inside a regular async function (or mutating a global variable, or state that isn't even in the Node server, like state in a database) :]

What I wanted to say is that

await a.then(b).then(c)
await d.then(e).then(f)

since Promises are async in nature and can be resolved at any moment and in any order, it's the same as

const g = a.then(b)

await g.then(c)
await d.then(e).then(f)

and so, it can be optimized to run in parallel more-or-less like

await Promise.all([
  a.then(b).then(c),
  d
])
  .then(e)
  .then(f)

Do you want it more complex? Ok

await a.then(b).then(c)
await d.then(e).then(f)
await g.then(h).then(i)

await Promise.all([
  a.then(b).then(c),
  d,
  g
])
  .then(e)
  .then(f)
  .then(h)
  .then(i)

I again just want to add a voice of dissent here. While I adamantly believe that a program should crash in the presence of undefined behavior, the scenario in question is NOT undefined behavior. There is nothing "undefined" about the application state after a promise is rejected. In fact, crashing at the point of garbage collection _introduces undefined behavior_ since garbage collection is completely outside the purview of the program, and the application state at the point of a crash is by definition undefined.

The goal here is to get the attention of the developer, because we know that unhandled promise rejections are often a mistake, especially among new JS developers. A warning is an appropriate strategy for this; crashing is not. Crashing is a really good hammer, but the problem here is not a nail.

Just consider how a promise can be rejected, but retained in memory because it is referenced by another object. Minutes or even hours later, this referencing object can go out of scope and crash the program. This is unfathomably poor design, and if this behavior becomes the default, I predict that the decision will eventually be regarded with disdain.

fwiw, crashing on gc is not really on the table at this point.

@devsnek has crashing at a _different_ point been put back on the table? I thought all those other options were thoroughly refuted too?

the flag options are none, warn, and strict. strict turns unhandled rejections into uncaught exceptions as soon as they reject.

@devsnek that behavior was described as a bug right here in this issue by @benjamingr.

This behavior is _at least as bad_ as crashing on GC. While it's not technically introducing undefined behavior, it is explicitly breaking a feature that drove the design of the promise spec: the ability to attach handlers (callbacks) asynchronously.

This will result in somePromise.catch(() => {}) being written just about anywhere a promise is created. This is not only hideous, but has a runtime performance cost and completely defeats Node's ability to even detect such a scenario and issue a warning.

This is a terribly misguided proposal IMO.

I would discourage personal attacks. We are dealing with actual problems here. The disagreement is more about how to solve those problems.

@devsnek I've edited my comment to address unfair assumptions about comment authors. Thanks for calling that out.

I think I've quite thoroughly expressed my points in this issue, and I don't really have the time or energy to keep up with this any longer. I'm going to unsubscribe from these notifications and let those in charge use their best judgement.

If the committee wants to hear a differing perspective, I'd be more than happy to share my thoughts – just @ me – but I think I've probably exhausted my usefulness here.

This will result in somePromise.catch(() => {}) being written just about anywhere [..]

I don't think it will. A common approach is: let an error bubble up to a caller that knows how to handle it. If there isn't one, crash. This approach works for callbacks, error events, and should work the same for promises.

I want to note, for servers that should not be the end of the story. From there, pm2, systemd, docker, k8s, ec2 or - some form of orchestration - either restarts the server or launches a replacement. Crashing alone isn't the holy grail, it fits a larger vision where servers are cattle and [going off topic].

fwiw, crashing on gc is not really on the table at this point.

It (and everything else) is on the table in my opinion. I am not for it - but the fact I am not for it does not mean it is not on the table any more :]

the scenario in question is NOT undefined behavior.

@mike-marcacci the undefined part is Node during cleanup - the reason Node crashes on uncaught exceptions is that the Node codebase does not clean up _internally_ in some cases. This is also true for unhandled rejections. By continuing on unhandled rejections that originate from uncaught exceptions core throws you are introducing undefined behavior in the form of a resource leak that can lead to a cascading failure.

See my comment here.

+1 on the cli option:

  • for most of the server-side apps it really makes sense to fail (and get restarted by k8s or pm2 or whatever to a consistent state)
  • for some of the apps (using some external lib for scraping) it may really make sense to keep running (and show a warn)
  • in some special (but legitimate) cases it's even desired to keep running (running external code)
  • for some of the apps (using some external lib for scraping) it may really make sense to keep running (and show a warn)

Why do you think it makes sense? If the library crashes, it's a bug in the library, or in your code for not catching it. I think in that case it should crash too.

  • in some special (but legitimate) cases it's even desired to keep running (running external code)

I could agree with adding a flag to enable the current behaviour of showing just a warning, but in my opinion the current default behaviour should be deprecated and replaced with crashing ASAP.

Simply put, life is not always sunshine and rainbows.

Moreover, node is not just a toy anymore, it's now a trusted platform and it should make pragmatic choices. You don't want to get into a situation where something is not possible just because somebody had a different opinion than what you actually need.

About defaults - I don't care as long as it's possible to change it via CLI/ENV flag.

Simply put, life is not always sunshine and rainbows.

Moreover, node is not just a toy anymore, it's now a trusted platform and it should make pragmatic choices. You don't want to get into a situation where something is not possible just because somebody had a different opinion than what you actually need.

Swallowing errors instead of reporting them, making as much noise and giving as much info as possible, seems more like a problem to me...

About defaults - I don't care as long as it's possible to change it via CLI/ENV flag.

Agreed on that; I just want the default to be the "secure" one, that is, to crash.

I think the example at https://twitter.com/isntitvacant/status/1199039107084611589 is relevant here:

oh! 1 last thing: in rare cases you may have a promise you can't handle immediately – leaving room for a temporarily unhandled rejection! All is not lost: you can mark it "temporarily unhandled" like so:

p = doSomething()
p.catch(() => {}) // temporary err handler
...
await p

With that, we can safely crash the app on unhandled rejected promises; if you want to handle them later, you just need to explicitly add a no-op handler first.
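A runnable version of that pattern (`doSomething` is a stand-in that rejects after a short delay):

```javascript
// Rejects after 10 ms, simulating work whose failure we handle later.
const doSomething = () =>
  new Promise((_, reject) => setTimeout(() => reject(new Error('boom')), 10));

async function run() {
  const p = doSomething();
  p.catch(() => {}); // temporary no-op handler: marks p as handled for now
  // ... other work happens here; p may reject in the meantime ...
  await new Promise(resolve => setTimeout(resolve, 50));
  try {
    await p; // the real handler attaches later, after p already rejected
  } catch (err) {
    console.log('handled later:', err.message); // prints "handled later: boom"
  }
}
run();
```

The no-op catch creates a derived promise that always fulfills, so the rejection never counts as unhandled, while `p` itself stays rejected and can still be awaited later.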

I am a little late to the discussion, but can someone explain to me how a garbage-collected rejected Promise is even an error or an indication of a bug? If the Promise is gc'd then, by definition, the code does not care about it anymore, and thus there should be no difference whether it is resolved or rejected.
I have a real use-case where crashing the process on unhandled rejection just does not make any sense, and would be a real pain in the ass.
I am using a 3rd party package: an API call, that I have no control over, which spits out a structure that may (or may not) contain a bunch of Promises, some of which would resolve, and some of which would be rejected.
I am only interested in one of those promises, and I properly attach handlers to that one. But now, in order to avoid the 'unhandled rejection' festival, do I have to deep-search the returned structure (that may potentially contain millions of elements) and attach catch handlers to all Promises I find?

let resultOfAnAPIcall = {
    interestingStuff: Promise.resolve('interesting'),
    boringStuff: [...Array(1000000)].map(_ => Promise.reject(1))
}
resultOfAnAPIcall.interestingStuff
    .then(msg => console.log(msg))
    .catch(() => console.log('oops'));

I am using a 3rd party package: an API call, that I have no control over, which spits out a structure that may (or may not) contain a bunch of Promises, some of which would resolve, and some of which would be rejected.
I am only interested in one of those promises, and I properly attach handlers to that one. But now, in order to avoid the 'unhandled rejection' festival, do I have to deep-search the returned structure (that may potentially contain millions of elements) and attach catch handlers to all Promises I find?

let resultOfAnAPIcall = {
    interestingStuff: Promise.resolve('interesting'),
    boringStuff: [...Array(1000000)].map(_ => Promise.reject(1))
}

To me, this is a problem of the API having a bad architecture design, and you would not be the only one having that problem. In fact, it's a waste of resources (both yours and the company's) if it really works that way. In any case, you can also use Promise.allSettled() to wait for all of them to finish without worrying about their statuses...
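A minimal sketch of the Promise.allSettled() suggestion (available since Node.js 12.9): it never rejects and attaches handlers to every input, so none of them trigger an unhandled-rejection event:

```javascript
// Promise.allSettled resolves once every input settles, reporting each
// outcome instead of rejecting on the first failure.
(async () => {
  const results = await Promise.allSettled([
    Promise.resolve('ok'),
    Promise.reject(new Error('boom')),
  ]);
  for (const r of results) {
    // prints: "fulfilled ok" then "rejected boom"
    console.log(r.status, r.status === 'rejected' ? r.reason.message : r.value);
  }
})();
```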

Promise.allSettled(...) is not different from just Promise.all(...).catch(()=>{}) here. The problem is I do not necessarily even know the structure of resultOfAnAPIcall in advance, so to collect all boringStuff in one iterable in order to Promise.all() over it, I need to deeply traverse it collecting all Promises I encounter, perhaps, even relying on hacks like if (typeof value.then === 'function'). It is not impossible, it is just very conceptually wrong and computationally inefficient!
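For what it's worth, the traversal being objected to would look something like this (a hedged sketch; `silenceRejections` is a name invented here, and the duck-typing caveat from the comment applies):

```javascript
// Walk an arbitrary structure and attach a no-op catch handler to anything
// thenable, so no rejection inside it goes unhandled. As the comment argues,
// this is conceptually wrong and costs a full traversal of the structure.
function silenceRejections(value, seen = new Set()) {
  if (value === null || typeof value !== 'object' || seen.has(value)) return;
  seen.add(value);
  if (typeof value.then === 'function') {
    // The duck-typing hack mentioned above: treat any thenable as a promise.
    Promise.resolve(value).catch(() => {});
    return;
  }
  for (const key of Object.keys(value)) silenceRejections(value[key], seen);
}

// Against the earlier hypothetical API result (smaller array for brevity):
const resultOfAnAPIcall = {
  interestingStuff: Promise.resolve('interesting'),
  boringStuff: [...Array(1000)].map(() => Promise.reject(new Error('ignored'))),
};
silenceRejections(resultOfAnAPIcall);
resultOfAnAPIcall.interestingStuff.then(msg => console.log(msg));
```

Note the handlers must be attached in the same tick the rejections are created, otherwise the unhandled-rejection machinery may fire first.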

And this is not a result of a bad API design. Lots of valid use-cases have already been mentioned by @mike-marcacci.

I understand the intention of this proposal to crash on beginner errors; however, the fact that putting your code inside an async function de facto wraps it in try-catch is a feature of the language.
The fact that undefined[0] and throw new TypeError('oops') produce the same exception type is also a (definitely unfortunate) feature of JavaScript.

And no amount of hand holding would prevent beginner bugs anyway :)

I spent a bit of time to come up with a real life use-case which wouldn't look as contrived as my previous example.
Imagine you have a swarm of a million small low-cost, low-energy sensors deployed over some terrain. They are designed to sleep most of the time to conserve energy, and only listen for a global radio "ping" signal. Once the signal is received, they wake up and reply via another (secure) channel with their status and the ids of other sensors deployed in the vicinity.
Now, the device that's sending the "ping" is actually a drone, running NodeJS, that relies on sensor responses to navigate.
It is not unreasonable to assume that the swarm vendor would provide you with an API that, after sending a "ping" signal, returns a structure with a million Promises, each corresponding to an individual sensor reply.
Most of these Promises are obviously expected to reject if the corresponding sensor did not respond to (or even receive) the "ping".
The drone, however, needs to make real-time decisions on its flight trajectory, and is only interested in responses from sensors in its vicinity (ids known from responses to the previous "ping").

So here we go. I'm flying over California, and my software crashes because I haven't received a response from a sensor in Wyoming. And I physically don't have enough extra CPU cycles to go and attach a million no-op handlers to Promises whose resolution I don't care about after each "ping" that I'm sending every 10 seconds or so...

What do I do?

looks at javascript. looks at above use case. looks at javascript. walks away mind blown.

It is not unreasonable to assume that the swarm vendor would provide you with an API that, after sending a "ping" signal, returns a structure with a million Promises, each corresponding to an individual sensor reply.
Most of these Promises are obviously expected to reject if the corresponding sensor did not respond to (or even receive) the "ping".
The drone, however, needs to make real-time decisions on its flight trajectory, and is only interested in responses from sensors in its vicinity (ids known from responses to the previous "ping").

Network timeouts are usually in the two-minute range, too much for a real-time system like flight navigation (and in fact totally useless over a network; you would use radio emissions directly from the drone, but that's off topic). What I would do instead is send the unicast ping (that's ok) and wait for answers from the beacons in an async way, worrying only about the ones that gave me an answer and their delay, to discard the far-away ones. For navigation you only need one more beacon than the number of dimensions: 2 beacons for 1D (for example trains), 3 beacons for 2D, or 4 beacons for 3D (like GPS), though you can use more to improve precision.

Definitely a bad API; when working with so much data it's better to use an async API, either streaming or event-based.

@piranna My point is I only design one part of the system, which is the drone. How the swarm is designed is completely off topic, since I don't have control over it anyway. The API the swarm vendor has provided is perhaps not ideal, but definitely not unreasonable, so I designed the whole example just to illustrate the point that there are reasonable situations where a Promise in a rejected state does not represent an exceptional situation worthy of a crash, while forcing the end user of the API to attach no-op handlers can be costly and sometimes even prohibitive.

@dmaevsky in that case, you can start Node.js with --unhandled-rejections=none and unhandled rejections won't crash the process.

@dmaevsky in that case, you can start Node.js with --unhandled-rejections=none and unhandled rejections won't crash the process.

I agree with @targos, this seems to be a special use case that deviates from what regular developers would expect; a flag to opt in to this special mode is the correct way.
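For completeness: the same override is available programmatically through the long-standing process-level 'unhandledRejection' event (a documented Node.js API), no CLI flag required:

```javascript
// Installing an 'unhandledRejection' listener replaces Node's default
// handling: with a listener present, Node neither warns nor exits.
process.on('unhandledRejection', (reason) => {
  // The application decides: log and continue, or rethrow to crash.
  console.error('unhandled rejection:',
    reason instanceof Error ? reason.message : reason);
});

// This rejection is reported by the listener instead of the default warning.
Promise.reject(new Error('ignored by design'));
```

This is why changing the default is less drastic than it sounds: applications that genuinely want the old behavior can opt back in with one listener.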

@targos now imagine that the example I described has already been deployed. Currently Node only warns about unhandled rejections, and the drone flies with no issues, as designed.
Now if we start terminating the process by default, the drone developers might not even notice for a long, long time, since most of the Promises we are ignoring will be garbage collected in the pending state before even having had a chance to reject. So all post-upgrade tests would pass.
And one day, when one of those actually rejects before having been gc'd, the drone will fall from the sky, possibly injuring someone on the ground.

Sorry for being overdramatic here. My point is that by changing the default behavior you will potentially break a lot of well designed and well tested applications in possibly very subtle and hard to detect ways...

That can be true for many changes, even non-breaking ones that introduce bugs undetected by Node.js' own tests. If the default is changed, it will be done in a major release and well documented in the release notes.

@targos This is a valid argument.
Well, I just really hope the true reason behind the push for the change is not the fact that Node just does not clean up its internals properly upon an unhandled rejection, as was mentioned in one of the comments above.
Personally I still feel that changing the default would just introduce new possibilities for the Node servers to crash in subtle and hard to detect ways, but I can of course live with a flag...

@dmaevsky this PR https://github.com/nodejs/node/pull/27867 goes in that direction, i.e. it makes Node clean up its internals in case of errors.

I think I've figured out a pretty good compromise solution in PR #33021. I invite even the most cantankerous among you to review it.

It defines a new default mode for --unhandled-rejections that throws exceptions on unhandled rejections by default, but allows userland code to set an unhandledRejections hook to override that behavior, so you don't have to mess around with CLI flags if you don't want to.

I have an idea: trigger a BSOD on unhandled promise rejection... just to teach users

TSC will be voting on the intended default behavior for unhandled rejections in v15, where we also intend to remove the deprecation warning. We want to hear Node.js users' experiences with unhandled promises and what you think the default should be before voting, so we prepared a survey: https://www.surveymonkey.com/r/FTJM7YD

We also wrote an accompanying blog post for extra context: https://medium.com/@nodejs/node-js-promise-reject-use-case-survey-98e3328340c9

The survey will run for at least two weeks, at which point we'll evaluate if the number of replies is enough for us to move forward, otherwise we might extend it for a week or two. Please fill out the survey as it will help us decide the future of unhandled promise rejections on Node.js!

@mmarchini I'm not sure if you have the power to fix this, but the Medium post has weird line breaks (presumably from being copied and pasted from plain text).

