I'm curious how people are running their migrations in production. I haven't seen any documentation about best practices there.
I'd be interested in having my application at start up run the migrations as well, are there any code samples for that.
Thank you
> I'd be interested in having my application at start up run the migrations as well, are there any code samples for that.
We provide the run_pending_migrations function which you can use to run your migrations at app startup. I don't think there have been any best practices established yet. Certainly at minimum calling that function from main is a pretty reasonable practice.
We also have the embed_migrations! macro, which compiles the migration SQL into your binary so the migration files don't need to be on disk at runtime (and which is much more poorly documented than I thought... For the record, the generated code is used as embedded_migrations::run(&conn).) I'm not sure whether using that macro should be considered a best practice for typical server deployments.
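To make that concrete, here's a minimal startup sketch using run_pending_migrations (diesel 1.x). It assumes migrations live on disk in the usual migrations/ directory and that a DATABASE_URL environment variable is set; both are assumptions for illustration, not part of the thread:

```rust
extern crate diesel;
extern crate diesel_migrations;

use diesel::prelude::*;
use diesel::pg::PgConnection;

fn main() {
    // Assumed: DATABASE_URL points at the target database.
    let database_url = std::env::var("DATABASE_URL")
        .expect("DATABASE_URL must be set");
    let conn = PgConnection::establish(&database_url)
        .expect("failed to connect to database");

    // Apply any migrations from the migrations/ directory that
    // haven't been run yet, before the app starts serving traffic.
    diesel_migrations::run_pending_migrations(&conn)
        .expect("failed to run pending migrations");

    // ... start the server here ...
}
```

This requires a running database, so it's a sketch rather than something you can run standalone.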
Interested in other people's thoughts/stories around this.
I believe that in order to safely run migrations you need to be using the table! macro, rather than infer_schema!, right?
I would love to see more docs around migrating from infer_schema!, which the getting started guide uses, to table!, which is what (I assume) you should be using in production. Even just linking to the table! macro from infer_schema! would have helped me when I was getting started.
infer_schema! is entirely evaluated at compile time. There is no distinction between "production" and "not production" for those purposes. It's fine to run your migrations in main if you're using infer_schema!. The migrations must have been run on the compilation host prior to compilation (which can include running them from build.rs).
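A rough sketch of what "running migrations from build.rs" might look like, shelling out to the diesel CLI. It assumes diesel_cli is installed on the build host and DATABASE_URL is set in the build environment; this is an illustration, not an official recipe:

```rust
// build.rs — assumed setup: diesel_cli installed and DATABASE_URL
// available in the build environment.
use std::process::Command;

fn main() {
    // Run pending migrations before the crate is compiled, so that
    // infer_schema! sees an up-to-date schema.
    let status = Command::new("diesel")
        .args(&["migration", "run"])
        .status()
        .expect("failed to invoke the diesel CLI");
    if !status.success() {
        panic!("`diesel migration run` failed");
    }

    // Re-run this build script whenever the migrations change.
    println!("cargo:rerun-if-changed=migrations");
}
```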
I'll add a data point here - we're using embed_migrations! to ship our migrations to production, as we used docker as a deployment mechanism, and having everything in a single binary makes things easier.
We're not yet deploying multiple instances of the container, so haven't run into any potential problems with multiple instances running migrations at once, but a bit of thought says that they should be okay, and if not, the migrations could easily be wrapped with a PG advisory lock or similar.
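For the multi-instance case, here's a rough sketch of the advisory-lock idea (Postgres-specific). The lock key 42 is an arbitrary application-chosen constant, and the use of embed_migrations! mirrors the setup described above; both are illustrative assumptions:

```rust
#[macro_use]
extern crate diesel_migrations;
extern crate diesel;

use diesel::prelude::*;
use diesel::pg::PgConnection;
use diesel::sql_query;

// Embeds the SQL under migrations/ into the binary at compile time.
embed_migrations!();

fn run_migrations_with_lock(conn: &PgConnection) {
    // pg_advisory_lock blocks until the lock is free, so concurrent
    // instances will run migrations one at a time. 42 is an arbitrary
    // key reserved by the application for this purpose.
    sql_query("SELECT pg_advisory_lock(42)")
        .execute(conn)
        .expect("failed to take advisory lock");

    embedded_migrations::run(conn)
        .expect("failed to run embedded migrations");

    sql_query("SELECT pg_advisory_unlock(42)")
        .execute(conn)
        .expect("failed to release advisory lock");
}
```

Since the lock is session-scoped, it's also released automatically if the connection drops mid-migration.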
So far this approach works very well for us.
I'm clearing out old issues, and I'm going to close this as there's not much discussion still happening here.
To add a final datapoint:
diesel print-schema is now the recommended way to use Diesel. infer_schema! is still around and is useful in development when you're just getting started and your schema churns a lot, but diesel print-schema significantly simplifies deployment, since you no longer need a database running to compile/deploy the application.
The way we run migrations on crates.io is literally by having our "run" task be diesel migration run && start-server
For people coming here looking for an up-to-date way of embedding migrations (with diesel 1.x), the current way of doing it is:
1. Add diesel_migrations to your dependencies
2. Add extern crate diesel_migrations to your crate, and make sure to decorate it with #[macro_use]
3. Call embed_migrations!()
4. Run embedded_migrations::run(&db_conn)

In case anyone finds this useful, I've recently added a Diesel CLI container with some documentation for running things the Docker way: willsquire/diesel-cli.
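The embedding steps above can be put together as a complete program (diesel 1.x). The connection setup via DATABASE_URL is an assumed example:

```rust
// Cargo.toml: diesel_migrations = "1"
#[macro_use]
extern crate diesel_migrations;
extern crate diesel;

use diesel::prelude::*;
use diesel::pg::PgConnection;

// Reads the SQL under migrations/ at compile time and generates
// an `embedded_migrations` module in this crate.
embed_migrations!();

fn main() {
    // Assumed: DATABASE_URL points at the target database.
    let url = std::env::var("DATABASE_URL")
        .expect("DATABASE_URL must be set");
    let db_conn = PgConnection::establish(&url)
        .expect("failed to connect to database");

    // Applies any embedded migrations that haven't been run yet.
    embedded_migrations::run(&db_conn)
        .expect("failed to run embedded migrations");
}
```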
I like the idea of doing it on app startup, but in a cluster it might be helpful to have some separation.
Question for y'all: let's say you run embed_migrations!() from the code in Docker, but then you find you need to roll back a specific migration in production. How would you handle this?
Currently if this happened to me, I'd just create another "fix" migration, since I don't know how to do rollbacks from a docker instance without the diesel_cli.
Edit: thought of a simpler way to put this.
How do you roll back migrations in production?
Once migrations are already in production, AFAIK you have two choices:
1. Roll forward: write a new "fix" migration that corrects the problem.
2. Do a manual rollback.
When I say "manual rollback" I mean setting up diesel-cli against your production DB URL and running revert on it.
The way I see it, option 1 is by far the preferable way to handle this situation. This is both because the revert/fix/upgrade dance is very risky to run against a production DB, and because down migrations are rarely given proper thought and testing and are often broken. I've even seen several incidents in which the down migrations were just blank, because developers didn't believe they would ever need them. Finding that out all of a sudden on a production DB is not an ideal situation to be in.