Mastodon: Documented process for updating Mastodon with Docker does not work

Created on 10 Apr 2017 · 24 comments · Source: tootsuite/mastodon

Based on README.md, it seems we should be able to rebuild a running instance to pick up updates. I have not been able to do this: 'docker-compose build' only succeeds if I have first executed 'docker-compose down'. I decided to try this just to get the build working. Due to #1376, 'docker-compose down' killed my DB and I had to start over. Luckily I'm the only user on my instance so far.

I am guessing I missed a crucial step that would have enabled 'docker-compose build' to succeed without taking Mastodon down, since other instances are getting updates without wiping all user data. Documentation needs more detail on the correct procedure.


  • [x] I searched or browsed the repo’s other issues to ensure this is not a duplicate.
bug

All 24 comments

I learned this too the hard way. Except it was while I was running an instance with 500 users, lol. The database isn't killed, it's under /var/lib/docker/volumes in one of the subdirectories. If you can figure out which one you can copy its contents into the new volume (find the current volume with docker inspect).

I'm not sure the right way to do this. Maybe use a named volume for the database?
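For reference, one way to find which anonymous volume the database container is using (a sketch; it assumes the default compose project name, which yields a container called mastodon_db_1, as used later in this thread):

```shell
# Print each volume the db container has mounted and where it is mounted.
docker inspect -f '{{ range .Mounts }}{{ .Name }} => {{ .Destination }}{{ "\n" }}{{ end }}' mastodon_db_1

# The data itself lives under /var/lib/docker/volumes/<name>/_data
```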

argh, so it's docker-compose down that has killed my database a couple of times now? ugh. I thought the database was stored in a permanent location on the docker host by default.

so much for trying to shut everything down the "proper" way when the storms rolled in last night :-S

I'm not able to locate old copies in Windows, so far. any ideas how to find and restore?

I've done several rebuilds on them using docker-compose, but have ctrl-c stopped it while running attached .. and that worked. docker-compose down seems to have killed it, though. :|

Per the comments above, it seems the 'correct' method for future restarts is docker-compose stop/start instead of down/up?

To restart your containers you can use docker-compose restart.

If you rebuild an image and then use docker-compose up -d it will detect that the image has changed and upgrade seamlessly to it.

docker-compose down really got me because I was trying to debug an issue that I thought destroying the containers would help, but when they come back they are assigned new data volumes dynamically. Naming the volume here would help, but has the drawback of not letting you run multiple Mastodon instances on the same host without a separate compose file. Probably worth it right now to prevent data loss.

I'm considering moving back to Dokku deployment. It takes some initial setup, but its concept of "apps" keeps things more tightly linked together and avoids issues like this.

My problem was I couldn't get docker-compose build to complete while the server was running. It fails and aborts during part of the process - but succeeds if I execute docker-compose down first. Sounds like that's too big a hammer - but I'm still not sure what I should do instead. docker-compose stop?

How do you even run multiple instances? That's a trick I wanted to learn :)

One could set different database names for the different instances?

docker-compose down is a badly named function imo, but it's not recommended in any official Mastodon documentation as far as I know. down is actually "destroy". up is more like "create, or re-create outdated containers". What you wanted was probably stop, but the thing is, you don't actually need to stop anything while upgrading from one Mastodon version to another...
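A rough cheat sheet of those semantics (a sketch distilled from this thread, not official documentation):

```shell
docker-compose stop    # stop containers; the containers and their volumes survive
docker-compose start   # start the same containers again
docker-compose up -d   # create containers, re-creating any that are outdated
docker-compose down    # STOP AND REMOVE containers and networks;
                       # anonymous data volumes are left behind, orphaned
```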

Correct me if I'm wrong, but with the default configuration, wouldn't a simple version change of the postgresql image cause a total database loss, as it would build a new container? (I'm still a docker newbie.) It seems the default should be to persist data externally to the containers, as normally one expects to be able to nuke a docker container and re-create it without anything really harmful happening.

As @alexgleason mentioned even without an explicitly assigned host path the data is stored on (anonymous) volumes, but yes it's a headache to recover it.

The biggest issue right now is if we update the default docker-compose.yml file with a new default volume path, everyone's database volumes will become orphaned and will require that bothersome recovery process.
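For the record, pinning the database to an explicit host path looks roughly like this in docker-compose.yml (a sketch; the service name and ./postgres path follow the stock Mastodon compose file, but verify against your copy):

```yaml
services:
  db:
    restart: always
    image: postgres
    volumes:
      - ./postgres:/var/lib/postgresql/data   # survives docker-compose down
```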

Is there a link to this bothersome recovery process, so I might have a chance at recovering one of my previous database volumes? :-) Although I'm not too terribly worried about it, it was just a few users. I'm more worried about why my instance won't come back up functionally at all .. but, back to the topic at hand..

I did try creating a system using the existing file, bringing it up, and then adding a named volume for the postgresql data to docker-compose.yml. When I ran docker-compose up again, it threw a warning that it could not mount the external volume and would continue using the existing one. I did not test this extensively, but it was a warning, not a fatal error or a loss of data tables.

@ericblade

  1. Bring up the containers again docker-compose up -d
  2. Search in /var/lib/docker/volumes for the directory that contains the old postgres volume. You will probably need to guess and check, but ls -alt helps because it will sort by date. Look inside of each directory for a _data directory and make sure it contains postgres stuff. Save the fingerprint of the directory.
  3. Run docker inspect mastodon_db_1 and scroll through until you find the mounted volumes portion. Note the name of the volume that's currently mounted.
  4. docker-compose stop db
  5. Copy the contents of the old postgres volume into the one currently mounted by mastodon_db_1:
     rm -r /var/lib/docker/volumes/<step3fingerprint>/* && cp -r /var/lib/docker/volumes/<step2fingerprint>/* /var/lib/docker/volumes/<step3fingerprint>
  6. docker-compose start db
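The destructive copy in step 5 can be wrapped in a small guard so it is harder to fat-finger (a sketch; the volume paths are the fingerprints you found in steps 2 and 3):

```shell
# replace_volume_data SRC DST: empty DST and copy SRC's contents into it,
# preserving postgres file ownership and permissions (what step 5 does by hand).
replace_volume_data() {
  src=$1; dst=$2
  rm -rf "${dst:?dst not set}"/*   # ${dst:?} aborts if dst is empty -- avoids rm -rf /*
  cp -a "$src"/. "$dst"/
}

# Usage (stop the db container around the copy, as in steps 4 and 6):
#   docker-compose stop db
#   replace_volume_data /var/lib/docker/volumes/<step2fingerprint>/_data \
#                       /var/lib/docker/volumes/<step3fingerprint>/_data
#   docker-compose start db
```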

Probably worth noting for those of us that are Windows users :-) that Docker for Windows stores its volumes inside the virtual machine that runs the containers. You can get into that VM in several ways; one that worked for me was: docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -i sh

However, I don't have the old volumes anymore, just more recent ones. Oh well :(

@Gargron - Thanks for the explanation.

I've found if I run docker-compose build while the server is up and changes need to be made, it partially completes, then hangs. It eventually prints 'Killed' in big red letters and aborts. I wish I'd saved the console output from when I tried this earlier. It doesn't happen if there are no changes needed.

I had a similar experience when initially setting up the instance, too - I'd brought up the server, ran my rake secret tasks, and updated my env.production file. Reported 'Killed' every time I tried to rebuild until I ran docker-compose down. Then the build worked and I was able to bring the instance up again. In that case it was no big deal because I hadn't updated anything in the DB yet. I'm running on Ubuntu 16.04 if that helps.

@wolfteeth it sounds like you're running out of RAM, not a docker problem. Building after docker-compose down probably works because you have more free RAM. Are you on the DigitalOcean $5/mo? You could add swap space.
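If low memory is indeed the culprit, adding swap is a common workaround on small VPSes. A sketch for Ubuntu (run as root; the 1G size is an arbitrary assumption, scale to taste):

```shell
fallocate -l 1G /swapfile              # reserve 1 GiB for the swap file
chmod 600 /swapfile                    # swap files must not be world-readable
mkswap /swapfile                       # format it as swap
swapon /swapfile                       # enable it immediately
echo '/swapfile none swap sw 0 0' >> /etc/fstab   # persist across reboots
```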

@alexgleason I'm on AWS and my server is supposed to have 1 GiB RAM. I think that's comparable to DO's $10/mo tier. I wasn't expecting resource issues at that level but it sounds plausible.

It would be great if we could suggest that admins move docker storage to another path (dockerd -g $PATH) instead of the default /var/lib/docker. This can help prevent the server from accidentally running out of disk space.

The following systemd conf might help.

  • /etc/systemd/system/docker.service.d/docker-storage.conf
[Service]
ExecStart= 
ExecStart=/usr/bin/dockerd -g /path/to/new/location/docker -H fd://
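To apply a drop-in like this, reload systemd and restart the Docker daemon (a sketch; note that -g was deprecated in favour of --data-root in later Docker releases):

```shell
systemctl daemon-reload
systemctl restart docker
docker info | grep 'Docker Root Dir'   # confirm the daemon picked up the new path
```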

I did find adding swap space helped with my issues. It'd be nice to know more about the resource requirements. 1GB got very tight when I added even a couple of users.

so there's no way to update Mastodon running within Docker, without also losing data?

Do not use "docker-compose down", as that destroys the containers. Use "docker-compose stop" to shut down the instance. If you're initially configuring it, you can also configure the database to be preserved on a volume (see docker-compose.yml). If you do that, then you can destroy all the containers you want, and recreating them will reattach your data. If you're already running without it, though, then you'll have to figure out how to copy the data from the existing container's volume into the named volume before destroying any containers; I'm not sure exactly how to do that.

I haven't uncommented the lines..... :(

Is there any way to stop the container, update Mastodon, enable DB persistence all while keeping the current data?

Yes, but I don't know what is involved with locating the existing database and copying it to the volume. Sorry.

If you do not "docker-compose down" or otherwise destroy your database container, you should be alright.

I run a quick cmd file that does:

docker-compose stop
docker-compose build
docker-compose run --rm web rails db:migrate
docker-compose run --rm web rails assets:precompile
docker-compose up -d
seems to work well.

In your situation, I'd still try to figure out how to get at the existing database, so you can back it up though.

Just to clarify, do you use that to update Mastodon?

And I assume you did enable DB persistence?


I do, and I did, but since that doesn't destroy the DB container it should be safe.. try it on a backup before taking my word though :)

So has anyone ever found a solution for actually fixing this, outside of copying data from volumes?

