This is partially related to #1578.
I'm looking at backing up my mailcow using duplicati to backblaze b2. I'm using duplicati so I can get incremental backups and compression instead of gzipping my entire email archive all the time and paying for the extra storage. Plus, duplicati is pretty lightweight and easy for non-techies to administer in my absence (nice GUI). In my tests, restores work properly and everything is all good.
The issue: I am taking docker down (docker-compose down OR stop) so that the state is consistent while duplicati is doing its backups. I dump SQL and save redis (redis-cli save), then back up the dump file from the host along with the redis and entire vmail volumes (actual directories on the host). I'm running on a small VM (2GB RAM). The act of stopping docker and restarting it when I'm done backing up seems to overwhelm the system. It eats the RAM and locks up, basically going into a 'failed' state or just taking FOREVER to restart. A few containers exit with code 137 (out of memory) and have to be re-issued a 'docker-compose up -d'. In one test, the whole thing got corrupted after a VM 'fail' and I had to reinstall and restore data.
Why does restarting docker kill available memory? Am I doing this wrong? It has NO problem just running, it's been working continuously on this VM for months without issue. I've tried docker-compose down and docker-compose stop to see if there's any difference... none.
I suppose I could just 'pause' postfix & dovecot to prevent any new mail processing during the backup and then unpause those containers so that the load of restarting would be less? Is there another way to approach this? I don't like the idea of new mail entering and being processed during a backup since I assume that would lead to an inconsistent backup state between SQL and the stored mail in the vmail directory?
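Roughly what I have in mind (untested sketch; I'm assuming the default mailcow service names postfix-mailcow and dovecot-mailcow here):

docker-compose pause postfix-mailcow dovecot-mailcow      # freeze mail processing
# dump SQL, save redis, run the duplicati job here
docker-compose unpause postfix-mailcow dovecot-mailcow    # resume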
Any help would be appreciated. Once I get this sorted out, I'll post my process for anyone interested in using this as a backup option... assuming it works :-P Thanks in advance.
You might want to take a look at https://github.com/mailcow/mailcow-dockerized/issues/1575
Thx @ntimo I'll take a look at how they put that together, really appreciate your quick reply! I've been going nuts trying to get this figured out for the last week. However, it doesn't really address why taking docker down and putting it back up would cause a memory issue when there doesn't seem to be any problems with it just running. I'm wondering if docker-compose isn't getting enough time to 'release' used memory... in which case I could introduce a delay in my scripts.
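If it does turn out to be a timing thing, the delay I have in mind would just be something like this (pure speculation at this point):

docker-compose stop
sleep 60     # give the VM time to settle and free memory
free -m      # log what's actually available before restarting
docker-compose start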
I should note the process, in case that helps someone guide me:
docker-compose stop (all containers)
docker-compose start mysql-mailcow (start sql)
docker-compose exec mysql-mailcow mysqldump --default-character-set=utf8mb4 -u${DBUSER} -p${DBPASS} ${DBNAME} > "$sqlDumpDir/$sqlDumpFile" (dump sql to host)
docker-compose stop mysql-mailcow (stop sql)
docker-compose start redis-mailcow (start redis)
docker-compose exec redis-mailcow redis-cli save (save redis info in container volume)
docker-compose stop redis-mailcow (stop redis)
docker-compose start (start all containers)
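Strung together, the whole thing is roughly this script (a sketch of what I'm testing; the dump path is a placeholder and the DB credentials are assumed to come from mailcow.conf):

#!/bin/bash
set -e
cd /opt/mailcow-dockerized                  # adjust to wherever mailcow lives
source mailcow.conf                         # provides DBNAME, DBUSER, DBPASS

sqlDumpDir="/backup/sql"                    # placeholder dump location
sqlDumpFile="mailcow-$(date +%F).sql"
mkdir -p "$sqlDumpDir"

docker-compose stop                         # stop all containers for a consistent state

docker-compose start mysql-mailcow
docker-compose exec -T mysql-mailcow mysqldump --default-character-set=utf8mb4 \
  -u"${DBUSER}" -p"${DBPASS}" "${DBNAME}" > "$sqlDumpDir/$sqlDumpFile"   # -T since output is redirected
docker-compose stop mysql-mailcow

docker-compose start redis-mailcow
docker-compose exec -T redis-mailcow redis-cli save
docker-compose stop redis-mailcow

# duplicati backs up $sqlDumpDir plus the redis and vmail volumes at this point

docker-compose start                        # bring everything back up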
I chose to use docker start/stop since I don't need the network to go down and thought it would be less intense than a restart... please correct me if I'm wrong and up/down is better.
I'm purposely avoiding putting my backup solution itself inside docker because I feel it adds needless complexity. Really, the memory issue is the only thing stopping me from getting things set up.
I appreciate any other input.
Also, looks like that referenced solution does not stop mailcow at all, it just does backups after dumping sql... does anyone know if mailcow has to be stopped at all? My worry is in the consistency of data. I don't want to be receiving emails halfway through the backup process. I'd prefer the mailserver was just 'down' and let the mail be queued for re-delivery from the sender.
Is there a way to just suspend mail processing for a bit without going through a whole container up/down?
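The other thing I can think of is stopping just postfix, so no new mail is accepted and the sending servers queue and retry on their end, which is what I'd want anyway (again just a guess, assuming the postfix-mailcow service name):

docker-compose stop postfix-mailcow     # stop accepting mail; senders will queue and retry
# run the backup
docker-compose start postfix-mailcow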
Hey, I would strongly suggest not using duplicati. It's come a long way from where it was, but there are still some issues; for instance, restoring is extremely slow, as in days or weeks.
I went down that route and now I use borg backup, which is a fork of attic. It is absolutely fantastic and very fast.
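To give you an idea, a basic borg run is just something like this (generic example, not mailcow-specific; the repo path and source directories are placeholders):

borg init --encryption=repokey /backup/mailcow-borg       # one-time repository setup
borg create --stats --compression lz4 \
    /backup/mailcow-borg::mailcow-{now} \
    /var/lib/docker/volumes/mailcowdockerized_vmail-vol-1 \
    /backup/sql
borg prune --keep-daily 7 --keep-weekly 4 /backup/mailcow-borg

It deduplicates between runs, so it covers the incremental + compression side just like duplicati does.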
@kilo42L Thanks for the suggestion! I haven't ever tried LARGE restores using Duplicati, but small ones like the size of my personal email server have only ever taken 15 minutes with Duplicati and backblaze b2. With other storage providers it's taken a long time. However, I have been interested in borg backup for a while... Can you recommend a good storage provider with ssh access? I've been having a hard time finding one with good reviews. Thanks :-)
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
For anyone searching for this or the issues raised in this thread:
HTH someone in the future.
Sorry... just saw this. I actually run a server at a datacenter and also use unraid for home use. I do backups from another business to both my home server and the colo computer, so I don't need an external service.
Also, that is great about your script... I will definitely check that out.