I am running an RDS MySQL server on a t2.micro, which allows 65 concurrent connections. When I run my Lambda functions with invoke local, or actually deploy them, the connections close immediately after the function finishes. With serverless-offline, however, each function invocation opens a new connection, and these connections are never closed until serverless-offline itself exits.
Here is my RDS monitor - the large ramp-ups are my uses of serverless-offline. The sharp drops are every time I stop serverless-offline with Ctrl-C. The relevant insight from this graph is that the number of connections never goes down until serverless-offline is exited.

hey @PiedPieper
which version of serverless-offline are you using? and with which flags? with which language? where are you opening your sql connection? do you have some sample handler code focusing on the sql connection?
generally speaking, this plugin is not responsible for your code, it essentially just runs it. that being said, there might be some issues with certain patterns regarding db connections and pooling, e.g. I think I remember seeing some issues with require.cache reloading on file change causing problems like this.
like I said, it would be good to get to the root of this once and for all if you could provide some sample handler code - or just some pseudo code, I just need to know where and how you are opening your db connections.
Version 5.10.1, no flags, node.js. Opening the SQL connection using knex:
const Knex = require("knex");

const knex = Knex({
  client: "mysql",
  connection: {
    host: Envvar.string("AWS_MYSQL_HOST"),
    user: Envvar.string("AWS_MYSQL_USERNAME"),
    password: Envvar.string("AWS_MYSQL_PASSWORD"),
    database: Envvar.string("AWS_MYSQL_DBNAME")
  },
  pool: { min: 1, max: 1 }
});
thank you! one more question: are you running the above inside of the handler function or outside (in module scope)?
one more thing I forgot to ask: does the pool deplete just by hitting the endpoint (without changing the handler file), or only when you change the handler file, hit the endpoint, change the file again, hit the endpoint, and so on?
I'm running it outside, in module scope. I import the connection into my handler.
It depletes with just hitting the endpoint, without changing the handler file
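For reference, the pattern being described (a pool created once in module scope, then imported and reused by the handler) can be sketched like this. The names and the stand-in pool are hypothetical, since a real Knex pool needs a live database:

```javascript
// Self-contained sketch: the pool lives in module scope, so it is created
// once per module load, and every handler invocation reuses it.
let poolsCreated = 0;

// Module scope: runs once when the module is first loaded (like a db.js
// that does module.exports = Knex({...})).
function createPool() {
  poolsCreated += 1;
  return { query: async (sql) => `ran: ${sql}` };
}
const pool = createPool();

// Handler: the imported pool is shared across invocations; no new
// connection pool is opened per call.
async function handler() {
  return pool.query("select 1");
}

// Simulate several invocations; only one pool is ever created.
Promise.all([handler(), handler(), handler()]).then(() => {
  console.log(poolsCreated); // → 1
});
```

This is also why the reported behavior is surprising: with the pool in module scope, repeated invocations should not open new connections unless the module itself is being reloaded.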
thanks @PiedPieper I'll have a look and try to reproduce. in the meanwhile, could you try the latest v6 alpha to see if it fixes your issue?
@PiedPieper
I found the culprit, although I couldn't quite reproduce it completely: when hammering an endpoint, a LOT of connections were established, BUT (in my case) they were also eventually released.
I also used pg with Postgres as opposed to knex with mysql. it seems that in v5.10 the cache invalidation mechanism causes this: every request essentially destroys the handler module cache and reloads it, which destroys the pool and forces it to be re-established on each request.
for now, use --skipCacheInvalidation (set as cli flag or in your serverless.yml). when I used this flag, I saw only 1 open connection.
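For reference, a sketch of the serverless.yml form of that flag, assuming the usual custom-block layout for serverless-offline v5 options:

```yaml
# serverless.yml (layout assumed from serverless-offline v5 conventions)
custom:
  serverless-offline:
    skipCacheInvalidation: true
```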
Now, when I tried the same in v6 alpha, I also saw only 1 open connection. The reason is that we now keep the handler running, similar to what Lambda does (and the cache invalidator is currently not being used, I believe, although I have to double-check, as v6 is still a work in progress).
my plan is to remove --skipCacheInvalidation in v6 and make its behavior the default, as cache invalidation caused endless problems for people unaware of this behavior, me included when I started using this plugin.
could you let me know if --skipCacheInvalidation works for you? or better, try v6 alpha, which should definitely have a closer AWS emulation than what's currently in v5.
@dnalborczyk --skipCacheInvalidation works for me !