I've been trying to reproduce an issue where the server process appears to be orphaned and continues running despite being killed.
I'm using tsc-watch to watch for code changes; when a file change is detected, it sends SIGTERM to the process and then recreates it by invoking node ./build again.
However, in this scenario some simple code in my application is unexpectedly causing the process to be orphaned and never exit, so multiple processes pile up, all vying for the same port.
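For context, the shutdown path follows the usual pattern of calling server.stop() from a SIGTERM handler. A minimal sketch of that wiring (the port and handler details here are illustrative, not the exact repro code):

import { ApolloServer, gql } from 'apollo-server'

const server = new ApolloServer({
  typeDefs: gql`type Query { ok: Boolean }`,
  resolvers: { Query: { ok: () => true } },
})

server.listen({ port: 4000 }).then(({ url }) => {
  console.log(`Server ready at ${url}`)
})

// tsc-watch sends SIGTERM on every rebuild; stop the server, then exit.
process.on('SIGTERM', async () => {
  await server.stop() // this is the call that hangs
  process.exit(0)
})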
apollo-server: 2.13.0
Expected behavior: the server stops when await server.stop() is called.
Actual behavior: the server does not stop, the call to server.stop() never finishes, and the process never exits.
https://github.com/justinmchase/apollo-lerna-repro
npm i
npm start
# open http://localhost:3000 in browser
This will cause a simple react page to continually call the api. If you open this file:
https://github.com/justinmchase/apollo-lerna-repro/blob/master/packages/api/src/index.ts
And simply save it you will see the server restart seamlessly.
Now go and uncomment this line:
https://github.com/justinmchase/apollo-lerna-repro/blob/master/packages/api/src/index.ts#L61
And observe that the call to await server.stop() does not stop the server, never returns, and the process never exits.
Correction: I'm seeing the call to stop take 30s-2m to complete. I wasn't patient enough, but it does seem to stop eventually.
That is still surprisingly long and I feel like there is something wrong but it does eventually stop. I don't mind it actually waiting for current connections to complete but it does appear to keep accepting new connections which seems bad.
EDIT: Correction to the correction: sometimes it stops; sometimes, even after 5m, it's still running.
Same behavior here - just for the record
After testing this extensively, I think all the complexity of this sample can be ignored: simply starting the server and calling await server.stop() is the only problem. The server does not always stop, the time it takes to stop is highly variable, and it can continue to accept and process new incoming connections while hung in the stop method.
If I simply close the process then the server does stop (of course) but it will abruptly end all ongoing requests mid stream.
What I expect to happen is for the server to immediately stop accepting incoming connections and once all ongoing connections are completed then the call to stop resolves. If there are no currently processing requests then it would end immediately.
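For reference, those semantics (refuse new connections immediately, drain in-flight requests, then resolve) can be approximated on a bare http.Server by tracking sockets by hand. A rough sketch of the idea, not anything apollo-server does today:

import http from 'http'
import type { Socket } from 'net'

let stopping = false
const idle = new Map<Socket, boolean>()

const server = http.createServer((req, res) => {
  setTimeout(() => res.end('ok'), 100) // simulate a slow in-flight request
})

server.on('connection', (socket) => {
  idle.set(socket, true)
  socket.on('close', () => idle.delete(socket))
})

server.on('request', (req, res) => {
  idle.set(req.socket, false)
  res.on('finish', () => {
    idle.set(req.socket, true)
    if (stopping) req.socket.end() // drain: close once the response is done
  })
})

server.listen(4000)

function stopGracefully(): Promise<void> {
  stopping = true
  return new Promise((resolve) => {
    server.close(() => resolve()) // stop accepting; resolves once all sockets close
    for (const [socket, isIdle] of idle) {
      if (isIdle) socket.end() // end idle keep-alive sockets right away
    }
  })
}

This is essentially what packages like stoppable implement, with more edge cases handled.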
I can reproduce the error too - just for the record
May be related to https://github.com/nodejs/node/issues/34830
I have tests for an Apollo server that exit cleanly when run on the host (macOS) but fail to exit in a Docker container. I've narrowed the issue down to just the server starting and not exiting in Docker, which maybe supports the relation.
Running into the same issue. Any good workarounds?
I've got the same behavior in my repo. Is anyone from the team aware of this issue?
No known workarounds or traction from the team.
Well, the only known workaround is to just not try to stop the server and kill the entire process. That will work, of course any sockets connected will be abruptly terminated as well.
I have the same problem in testing, any updates?
I think you have to go and 👍 the main issue; once enough people interact with it, it will get noticed by the team. As it is, they have 469 open issues, so I'm assuming they're not seeing these items or using this issue tracker actively.
Still running into this issue, noticed it when using ts-node-dev.
[watch:server ] [DEBUG] 16:34:52 Removing all watchers from files
[watch:server ] [DEBUG] 16:34:52 Child is still running, restart upon exit
[watch:server ] [DEBUG] 16:34:52 Disconnecting from child
[watch:server ] [DEBUG] 16:34:52 Sending SIGTERM kill to child pid 31733
[watch:server ] [DEBUG] 16:34:59 Child exited with code 0
[watch:server ] [DEBUG] 16:34:59 Starting child process -r /tmp/ts-node-dev-hook-7320061111056044.js /home/chance/test/api/node_modules/ts-node-dev/lib/wrap.js src/index.ts
[watch:server ] [DEBUG] 16:34:59 /home/chance/test/api/src/index.ts added to watcher
[watch:server ] [DEBUG] 16:34:59 /home/chance/test/api/src/schema.ts added to watcher
[watch:server ] 🚀 Server ready at http://localhost:4000/
It sometimes immediately restarts, and sometimes there is (like above) a random delay between sending SIGTERM and it actually restarting.
This jest (TypeScript) integration test reproduces the issue.
The Apollo server (with express, though that may not be relevant) doesn't stop properly and jest hangs.
import { ApolloServer, gql } from 'apollo-server-express'
import express from 'express'

const typeDefs = gql`
  type Health {
    ok: Boolean!
    version: String!
  }
  type Query {
    health: Health!
  }
`

const resolvers = {
  Query: {
    health: (_root: unknown, _args: unknown) => {
      return {
        ok: true,
        version: process.env['npm_package_version']
      }
    }
  }
}

let server: ApolloServer

beforeAll(async () => {
  server = new ApolloServer({
    typeDefs,
    resolvers
  })
  const app = express()
  server.applyMiddleware({ app })
  await app.listen({ port: 4000 })
}, 20000)

afterAll(async () => {
  // FIXME: The jest process is not exiting naturally. Something in apollo-server is holding it up
  await server.stop()
}, 20000)

describe('test server', () => {
  it('should be healthy', async () => {
    // TODO: Execute GraphQL query on the server
  }, 5000)
})
output:
PASS tests/integration/service2.test.ts
test server
✓ should be healthy
Test Suites: 1 passed, 1 total
Tests: 1 passed, 1 total
Snapshots: 0 total
Time: 1.147 s, estimated 5 s
Ran all test suites matching /\/Users\/SOME_FOLDER/tests\/integration\/service2.test.ts/i with tests matching "test server".
Jest did not exit one second after the test run has completed.
This usually means that there are asynchronous operations that weren't stopped in your tests. Consider running Jest with `--detectOpenHandles` to troubleshoot this issue.
The problem is not with jest; removing apollo makes the hang go away.
@bigman73 ApolloServer.stop() only stops the lifecycle of code that's specific to the apollo/graphql server itself: eg, any plugins, signal handlers, etc. But the way apollo-server-express works is that it installs middleware on your separately-created Express app and you tell Express to listen with app.listen. The connection between the ApolloServer object and the Express app is pretty minimal: AS just adds some middleware (request handlers). It's still your job to stop the HTTP server started by app.listen.
I think this is kind of confusing! One problem is that ApolloServer doesn't have a start() method that corresponds to the stop() method. This leads to other issues too; for example, some errors that can occur asynchronously during startup (eg, errors from plugins or from loading a federated schema) aren't easily handled by your program. I am planning to very soon introduce an (optional for now) start API, so basic Express usage would look like:
const apolloServer = new ApolloServer({
  typeDefs,
  resolvers
});
const app = express();
apolloServer.applyMiddleware({ app });
await apolloServer.start();
const httpServer = app.listen({ port: 4000 });
// later
await new Promise(resolve => httpServer.close(resolve)); // ... though look a few comments down for some caveats!
await apolloServer.stop();
I think adding a start() method (which does not start the HTTP server!) will make it more clear that the stop() method does not stop the HTTP server.
(Right now, the equivalent of start() gets called automatically from the ApolloServer constructor and there's no easy way to await its successful result and handle any errors.)
My previous comment is specific to apollo-server-express as mentioned by @bigman73. On the other hand, apollo-server (the batteries-included not-super-configurable built-in-Express ApolloServer)'s stop method is supposed to stop the HTTP server! That I will look into today, since it relates to some other stop-related work I'm doing.
Thanks @glasser for the detailed explanation
I followed your idea of stopping the http server and it works. Please note that it seems the (http) Server class has no stop() method; it does have a close() method.
This revised code doesn't hang jest:
import { Server as HttpServer } from 'http'

let server: ApolloServer
let httpServer: HttpServer

beforeAll(async () => {
  server = new ApolloServer({
    typeDefs,
    resolvers
  })
  const app = express()
  server.applyMiddleware({ app })
  httpServer = await app.listen({ port: 4000 })
}, 20000)

afterAll(async () => {
  httpServer.close()
  await server.stop()
}, 20000)
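One refinement: httpServer.close() above is fire-and-forget, so afterAll can resolve before the listener has actually closed. Awaiting it, in the style shown a few comments up, makes the teardown deterministic:

afterAll(async () => {
  await new Promise<void>((resolve) => httpServer.close(() => resolve()))
  await server.stop()
}, 20000)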
I tried to reproduce with the original lerna reproduction, which uses apollo-server.
With apollo-server, ApolloServer.stop() first calls close() on the http.Server and waits for its callback to be invoked. We're talking the Node core http.Server close method here — actually, it's the net.Server close method.
What does this method do? It stops the server from accepting new connections and waits to invoke its callback until all existing connections are done. But it does nothing to proactively ensure existing connections will ever finish.
So yes, require('apollo-server').ApolloServer.stop() works exactly like Node's net.Server.close: it waits until all connections naturally die.
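This is easy to see in isolation: open a keep-alive connection to a plain http.Server, call close(), and the callback won't fire until that socket goes away. A self-contained sketch (ports and timings arbitrary):

import http from 'http'
import net from 'net'

const server = http.createServer((req, res) => res.end('ok'))

server.listen(4000, () => {
  // Open a raw connection and leave it idle, like a browser keep-alive socket.
  const socket = net.connect(4000, 'localhost', () => {
    server.close(() => console.log('close callback fired')) // ...not yet
    console.log('close() called; new connections are now refused')
    // Only when the lingering connection ends does close() complete.
    setTimeout(() => socket.end(), 5000)
  })
})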
On the one hand, this is the most "generic" approach — it doesn't enforce the policy of breaking existing connections. On the other hand, it's not super easy to use! It looks like there are a bunch of npm packages out there that try to fix this. http-terminator looks pretty compelling to me. (Its README links to four others; notably, stoppable has more npm downloads, but this comment by the http-terminator author on stoppable seems believable.)
As an immediate workaround: I think people running into this problem should switch from apollo-server to apollo-server-express and install something like http-terminator themselves. This isn't really an Apollo Server problem — it's an http.Server problem.
More broadly: I do think that apollo-server's "out of the box" experience should work better here. Even though it's just an http.Server problem, apollo-server hides the http.Server from you, and so if you choose to use that particular package, you can't work around this issue. I think it probably does make sense that the "batteries included" apollo-server's server.stop() should use something like http-terminator with a sensibly chosen gracefulTerminationTimeout (10 seconds?); if you want more control (like a different timeout or not installing http-terminator at all), then you're welcome to switch to apollo-server-express and manage the http.Server yourself.
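A sketch of that workaround, combining apollo-server-express with http-terminator (the createHttpTerminator/terminate API is from that package's README; the wiring around it is illustrative):

import express from 'express'
import { ApolloServer, gql } from 'apollo-server-express'
import { createHttpTerminator } from 'http-terminator'

const server = new ApolloServer({
  typeDefs: gql`type Query { ok: Boolean }`,
  resolvers: { Query: { ok: () => true } },
})
const app = express()
server.applyMiddleware({ app })

const httpServer = app.listen({ port: 4000 })
const terminator = createHttpTerminator({
  server: httpServer,
  gracefulTerminationTimeout: 10000, // give in-flight requests up to 10s
})

process.on('SIGTERM', async () => {
  await terminator.terminate() // refuses new connections, force-closes stragglers
  await server.stop()
  process.exit(0)
})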
@bigman73 Thanks for the correction on the function name. I updated my previous comment.
Note that with your code, when httpServer.close returns, your HTTP server won't answer new incoming connections, but your Express server can still try to process requests on existing connections. If any of those rely on the assumption that your Apollo Server hasn't been stopped (eg, if you expect those final requests to be reflected in usage reports), that assumption won't be true! You'll want to use something like http-terminator, as suggested in my previous comment, for this.
(After a bit more research, I think I like stoppable a bit better than http-terminator due to https://github.com/gajus/http-terminator/issues/22 and https://github.com/gajus/http-terminator/issues/16.)
@glasser I understand the risk, but in automatic integration testing (e.g. using Jest) the chances of someone unexpectedly using apollo-server are zero. The server is created on the fly and needs to die on the fly.
Yes, I do think the experience could be better.
Even with apollo-server-express I have my own helper function createGraphQLServer that takes care of the entire stack (express, optional voyager, etc.)
The stopGraphQLServer function takes care of the tear down and now includes httpServer.close()
I think a future version should provide this sugar-coated layer, which would make it safe for users and easier to integrate with apollo-server: one line for creation and one line for destruction.
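Something along these lines, as a guess at the shape such a layer could take (createGraphQLServer and stopGraphQLServer are this commenter's own helper names; the bodies below are hypothetical):

import express from 'express'
import { Server } from 'http'
import { ApolloServer } from 'apollo-server-express'

interface RunningServer {
  apollo: ApolloServer
  http: Server
}

async function createGraphQLServer(
  options: ConstructorParameters<typeof ApolloServer>[0],
  port: number
): Promise<RunningServer> {
  const apollo = new ApolloServer(options)
  const app = express()
  apollo.applyMiddleware({ app })
  const http = await new Promise<Server>((resolve) => {
    const s = app.listen({ port }, () => resolve(s))
  })
  return { apollo, http }
}

async function stopGraphQLServer({ apollo, http }: RunningServer): Promise<void> {
  // Close the HTTP listener first, then tear down Apollo's own lifecycle.
  await new Promise<void>((resolve) => http.close(() => resolve()))
  await apollo.stop()
}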
Check out #4908.
Released in v2.20.0.