Parse-server: Logger - running out of disk space.

Created on 19 Jan 2017 · 6 comments · Source: parse-community/parse-server

I kept running out of space on my AWS EC2 instance until I figured out that our code deployer was setting VERBOSE to 1. This kept crashing our server almost daily with ENOSPC, until it was redeployed. Now that I've disabled verbose, and set logLevel to "error", I still think that we'll hit ENOSPC eventually. Is there a way to remove older logs?

To be honest, I'd like to disable logging completely until I'm sure this issue is no longer a problem, but I haven't figured out a simple way to do that yet.
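For reference, here's roughly what the deployer was doing versus what we run now. This is only a sketch: APP_ID and MASTER_KEY are placeholders, and I'm assuming the --logLevel CLI flag maps to the logLevel option.

$ VERBOSE=1 parse-server --appId APP_ID --masterKey MASTER_KEY                 # verbose file logging (what was filling the disk)
$ parse-server --appId APP_ID --masterKey MASTER_KEY --logLevel error          # errors only (what we run now)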

All 6 comments

With the default logging it seems like you could simply clean up the ./logs folder yourself with a cronjob on the server. You could either delete old log files, compress older ones, or do a mix of both. I didn't find a documented config for the Winston logger that would take care of this, but maybe someone else can help on that front: https://github.com/winstonjs/winston#usage
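If you want that mix, here's a sketch of two cron entries, assuming your logs live under /var/app/current/logs/ (adjust the path and the retention windows): the first gzips anything that hasn't been written to for over a day, the second deletes anything older than a week.

30 2 * * * /usr/bin/find /var/app/current/logs/ -type f ! -name '*.gz' -mtime +1 -exec gzip -q {} \;
45 2 * * * /usr/bin/find /var/app/current/logs/ -type f -mtime +7 -delete

The -mtime +1 filter also keeps gzip away from the file the logger is currently writing to.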

For those interested, I'm also using AWS. I'm deleting my log files using cronjobs. First I SSH into my parse-server instance and run the following commands.

Run cron on your box
$ crontab -e

Copy and Paste the following line
55 2 * * * sudo /usr/bin/find /var/app/current/logs/ -type f -mtime +2 -delete;

Replace /var/app/current/logs/ with the location of your logs. The command runs at 2:55AM and only keeps the last 3 days of logs.
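You can check that the entry was saved with:

$ crontab -l

Note that sudo only works from cron if the user can run it without a password (and without a tty), which is usually the case for the default EC2 user; otherwise use a root-owned file in /etc/cron.d as in the comment below.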

Also, if you're running in Docker and logging to stdout as well as to log files, you might need to clear the Docker log file for the container. The easiest way to clear it is like this:

echo '' | sudo tee "/var/lib/docker/containers/${CONTAINER_ID}/${CONTAINER_ID}-json.log"
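A more permanent fix, sketched here on the assumption that you're using the default json-file logging driver, is to cap and rotate container logs in /etc/docker/daemon.json, then restart the daemon (on systemd hosts, sudo systemctl restart docker) and recreate the container so it picks the options up:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "3"
  }
}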

If you are using AWS Elastic Beanstalk you can create a config file that will install the cronjob when an instance is created.

.ebextensions/cron.config

files:
  "/etc/cron.d/parse-log-cleanup":
      mode: "000644"
      owner: root
      group: root
      content: |
        55 2 * * * root /usr/bin/find /var/app/current/logs/ -type f -mtime +2 -delete;

commands:
  remove_old_cron:
    command: "rm -f /etc/cron.d/*.bak"

Notice that, unlike my last comment, this uses root instead of sudo.
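After the environment deploys, you can SSH in and confirm the file landed:

$ cat /etc/cron.d/parse-log-cleanup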

Hi guys. I'm having trouble with a logs/parse.log file that's over 14 GB. I'm running Parse Server on a Bitnami deployment on 1&1. Is there any way to make this file split by day? Or perhaps it's a Bitnami-specific file? (I've been asking in their community but it's not resolved yet.)

My $.02 would be to configure logrotate to process your Parse log files. This is done by adding a file in /etc/logrotate.d; there are tons of articles on how to configure logrotate, with examples.
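A minimal sketch for /etc/logrotate.d/parse, assuming the file you want rotated is logs/parse.log under your Parse Server directory (replace the path with the real location):

/path/to/parse-server/logs/parse.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}

copytruncate matters here: the server keeps the log file open, so logrotate copies it and truncates it in place instead of renaming it out from under the process.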
