Consider the following script:
const cluster = require('cluster');
const childProcess = require('child_process');

const useCluster = false;
const isMaster = useCluster ? cluster.isMaster : !process.argv.includes('worker');

if (isMaster) {
  if (useCluster) {
    // cluster.fork() re-runs this file; it takes an optional env object, not a filename
    cluster.fork();
  } else {
    childProcess.fork(__filename, ['worker']);
  }
} else {
  setTimeout(() => console.log('hi from worker'), 1000);
  process.disconnect();
}
When useCluster is true, node exits immediately and nothing is printed to the console. When useCluster is false, "hi from worker" is logged after 1s and then node exits, which is the expected behavior. The documentation makes it sound like calling process.disconnect() should only close the IPC channel between the master and worker process, and should not cause either to exit early while there is still work (such as a pending timer) to do.
Sorry, I ran out of time to look at this, but what I suspect is happening is that the master does not expect to see the IPC pipe closed directly (without the worker first sending a message saying it is about to do so), considers this unexpected, and kills the worker. I had trouble proving that, though; more research would be required.
Note that using process.worker.disconnect() instead of process.disconnect() gives the behaviour you expect. If you just want a workaround, doing (process.worker ? process.worker.disconnect : process.disconnect)() would allow more symmetry between cluster workers and forked child processes.
@sam-github process.worker is undefined for me when using both cluster and child_process. I tried this with Node 10.15.3 as well as 12.2.0 on macOS Mojave, if that makes a difference. I've switched to child_process in my code for now, since we were really only using the cluster module for its automatic debug-port incrementing.
Apologies, I meant cluster.worker.disconnect().
The cluster module is not very flexible; if it's not being used for exactly the intended use case (identical net or http servers), its features easily become misfeatures, so using child_process is probably a better idea for you.
Just ran into a variation of this issue: in our case the master process exits and we want to perform a proper graceful shutdown in the child processes. Happy to contribute, but I want to validate that I have the correct approach first. I believe the root cause of both issues (the master process going away, and process.disconnect) is in this listener added during cluster's worker setup.
The approach that I think would solve this issue would be to add an option (similar to exitedAfterDisconnect) that controls whether the child process should exit on disconnect. Can someone please chime in on whether this is the right approach?
Is anyone working on this? I would be interested in looking into it. I may need some guidance, but I am willing to give it a shot.