Since updating to 0.9, our Ghost instances are serving a lot more traffic:

Is this the expected behavior?
Hey @gergelyke, welcome to our little space on GitHub 👋. This is a really interesting observation. Could you please provide more information about what traffic you are measuring?
The Response Time graph indicates to me that most of the requests got faster (median) while slow requests got slower (95th percentile). This leads me to the question of how the throughput is calculated: if the requests got faster, you are able to transmit more data per second, and therefore throughput goes up?
Is there a chance that you switched to a newer node version during the upgrade?
Hello @sebgie,
It is measured by Trace, our Node.js monitoring platform. What you see there in green is the number of HTTP requests with 2xx status codes; the black ones are 3xx. The blog served around 30 rpm, and now it has jumped to 60+, while the user count did not grow.
Node.js was not updated; we are using the Docker image risingstack/alpine:3.3-v4.3.2-3.1.1 (with Node.js 4.3.2).
Hello @sebgie,
What's even worse is the memory usage and event loop lag I just saw:


Everything changed in a very negative way. Any idea why this could happen?
Hey, yeah, this all looks very concerning. I have not yet seen a problem on Ghost(Pro), which is running on Node 4.4.7, though. Would you mind swinging by our Slack community (https://ghost.org/slack/) to have a discussion about how to replicate and measure what you see?
The changelog from 0.8 to 0.9 is quite long (https://gist.github.com/ErisDS/9adeb395d3e8caec192622270600c8b1), so it is possible that a problem was introduced.
A 2x increase in requests still seems very strange and could hint at a problem with the theme. Are you using a custom/3rd-party theme by any chance?
We are using https://github.com/RisingStack/StayPuft as the base of our theme - but nothing changed there.
@sebgie just updated to 4.4.7 - will see what happens
the same happens with 4.4.7
The 0.8 -> 0.9 changelog is actually here: https://gist.github.com/kevinansfield/ac440cd1433f33df0bc358b786608812 - the link above was the 0.7.x -> 0.8 changelog.
A 2x increase in requests suggests that there may be a change causing a redirect, e.g. from non-trailing-slash to trailing-slash URLs (which may also explain the drop in the median - redirects should be quick!). Are you able to get any request logs from your Ghost instances?
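If full request logs are hard to come by, a quick curl spot check can reveal whether a redirect is in play. A minimal sketch, assuming a local install on Ghost's default port 2368 and a placeholder post slug:

# Compare the non-trailing-slash and trailing-slash forms of a post URL.
# "some-post" is a placeholder slug; adjust host/port to your setup.
curl -s -o /dev/null -w "%{http_code} -> %{redirect_url}\n" http://localhost:2368/some-post
curl -s -o /dev/null -w "%{http_code} -> %{redirect_url}\n" http://localhost:2368/some-post/

A 301 on the first request pointing at the trailing-slash URL would mean every page view is effectively being counted twice.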
Wooow, that's a lot of change - any chance we can do releases more often?
Checking the request logs in the meantime
I'm also seeing continuously growing memory usage since upgrading to 0.9:

I can confirm that Ghost 0.9 has introduced memory leaks. That's frustrating.
OK, so if it's not just us, I will dive deeper tomorrow!
@gergelyke Thank you!
What I can tell you so far: I've tested whether scheduling (https://github.com/TryGhost/Ghost/blob/master/core/server/scheduling/SchedulingDefault.js) could cause memory leaks, but found nothing. The memory always jumps back to around 90 MB. Before we released scheduling, I inserted thousands of jobs and watched the memory - it always jumped back.
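For anyone who wants to repeat that kind of check, here is a rough way to watch the process memory from the outside - the pgrep pattern is just an assumption about how Ghost was started:

# Sample Ghost's resident set size (RSS) every 10 seconds.
# Adjust the pgrep pattern to match your own Ghost process.
GHOST_PID=$(pgrep -f "node index.js" | head -n 1)
while true; do
  ps -o rss= -p "$GHOST_PID" | awk '{printf "%.1f MB\n", $1 / 1024}'
  sleep 10
done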
We've noticed doubled memory usage since upgrading to 0.9. It went up from around 120 MB to 250 MB, which is quite a jump. Updating Node.js from 4.4.2 to 4.4.7 didn't change anything.
@gergelyke, @wezm, @msamoylov, @marcuspoehls Please please please, can you provide enough information for us to reproduce or find a pattern in what's causing the issue?
@ErisDS Ubuntu 14.04.5 running on Digital Ocean, Node 4.4.7, npm 3.10.6. I've had a look at the current memory consumption and Ghost jumped back to "normal" usage. Didn't check the number of requests, and no idea why it peaked in the last few days.
Noticing this also. TTFB appears to have increased after updating from 0.8 to 0.9 as well (by ~80-100 ms). I upgraded Node at the same time though, so perhaps that's the culprit.
After a few attempts to reproduce, I think I have found how to track down what is going on here. An idle blog has no leak; it hovers between 80 and 120 MB of memory, depending on your theme.
BUT if you add some traffic, the whole situation looks quite different. The graphs below are from an internal blog that I put some load on (5 concurrent users, every 100 seconds); it grew from around 70 MB to 260 MB within 24 hours. Notably, the memory didn't grow any more once the requests stopped.


I only sent random requests to the frontend and /ghost routes of the blog (GET /...).
It should now be possible to recreate this behaviour locally with a local dev blog and a curl command in a while loop:
while true; do curl -I http://localhost; sleep 30; done
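To get closer to the mixed frontend and /ghost traffic described above, a slightly extended variant can cycle through a few routes - the paths here are only examples, so swap in real slugs from your blog:

# Cycle through a handful of example routes to simulate mixed traffic.
while true; do
  for path in / /rss/ /ghost/ /welcome-to-ghost/; do
    curl -sI "http://localhost${path}" > /dev/null
  done
  sleep 5
done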
So far, what I am seeing in the retainers does not make a lot of sense - you can quickly spot that arrays are growing, but I can only see lodash as a retainer. All input is welcome!
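For anyone who wants to dig into the retainers themselves, one possible workflow is to diff heap snapshots taken before and after load. A sketch assuming the third-party heapdump module, which writes a .heapsnapshot file on SIGUSR2 that Chrome DevTools can load:

# Start Ghost with heapdump preloaded, then snapshot before and after load.
npm install heapdump
node -r heapdump index.js &
GHOST_PID=$!
kill -USR2 "$GHOST_PID"   # baseline snapshot, written to the working directory
# ...run the curl loop above for a while...
kill -USR2 "$GHOST_PID"   # second snapshot; compare the two in DevTools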

Hey, we have found the reason for the memory leak.
We updated bookshelf and knex in our last Ghost release, 0.9.0. I've tested for a couple of hours this morning, and with the PR I've created, the memory usage should be back to normal.
Still, we will investigate further why the update of bookshelf and knex caused a memory leak. Sorry you had trouble!
Wow, thanks a lot @kirrg001!
So basically you downgraded or upgraded bookshelf to solve the issue?
@gergelyke Yeah, exactly - for now we downgraded bookshelf and knex back to the versions we used before 0.9.0; see https://github.com/TryGhost/Ghost/issues/7291
We've shipped a temporary solution for this in 0.10.1.
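If you want to confirm that your install actually picked up the rollback, an npm-based install can show the resolved versions of the two dependencies:

# From the Ghost install directory: list the versions actually installed.
npm ls bookshelf knex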
I can verify that the memory leaks stopped!

Same here. Good job!