The current migration system is quite limiting: you often run into situations where plain SQL is not enough to migrate the data the way you need to.
A lot of similar libraries have a way of declaring migrations as code, and I'd love to see a similar design for diesel.
Something like:
```rust
trait Migration<B: Backend, C: Connection<Backend = B>> {
    const KEY: &'static str;
    fn up(con: &C) -> Result<(), Box<Error>>;
    fn down(con: &C) -> Result<(), Box<Error>>;
}

struct CreateUserTable;

impl Migration<Sqlite, SqliteConnection> for CreateUserTable {
    const KEY: &'static str = "2017-12-12-21323-create-user-table";

    fn up(con: &SqliteConnection) -> Result<(), Box<Error>> {
        let sql = "CREATE TABLE ... ;
                   CREATE INDEX ... ;";
        con.batch_execute(sql)?;
        Ok(())
    }

    fn down(con: &SqliteConnection) -> Result<(), Box<Error>> {
        // ...
    }
}

migration_manager![
    CreateUserTable,
    // ...
];

fn main() {
    let con = /* ... */;
    let m = MigrationManager::migrate(con).unwrap();
}
```
The cli could have an auto-generator for migrations that would just put them inside a specified module to make it just as convenient as it is now.
```
src/
  migrations/
    mod.rs                                   -- contains the manager
    m2017_12_12_123123_create_user_table.rs  -- each migration gets its own file
```
This could easily be implemented out of tree, of course, and I'll probably do so if necessary.
But having it in diesel-cli would be very nice, so I wanted to see if this is something the maintainers could see being included.
In theory this should already be possible with the current code.
Take a look at the docs of the Migration trait.
Also embed_migrations! already creates the code needed to embed the diesel-cli written migrations in the binary. What is missing is some kind of cli tool to actually print that code.
Oh I didn't realize there already was a Migration trait, thanks for pointing that out.
The point of making Migration generic over the Backend is to make it possible to use regular diesel code for changes that are not plain SQL (like updating all values in a table by mapping over them within a transaction, or whatever).
This also would make it possible to implement backend-specific migrations to support multiple backends.
This could of course also be done just by structuring the migrations differently, but it would be nice to implement Migration on the same struct for different backends.
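To sketch what "the same struct for different backends" could look like: below, `PgConn` and `SqliteConn` are hypothetical marker types standing in for diesel's connection types, and the `Migration` trait and `AddLoginCount` struct are illustrative, not diesel API. The point is only that Rust lets one migration struct carry a different implementation per backend.

```rust
use std::error::Error;

// Hypothetical stand-ins for backend-specific connection types.
struct PgConn;
struct SqliteConn;

// Illustrative trait, generic over the connection type for brevity.
trait Migration<C> {
    fn up(&self, con: &C) -> Result<(), Box<dyn Error>>;
}

struct AddLoginCount; // hypothetical migration

// The Postgres flavour could use backend-specific SQL or diesel code...
impl Migration<PgConn> for AddLoginCount {
    fn up(&self, _con: &PgConn) -> Result<(), Box<dyn Error>> {
        // e.g. a single ALTER TABLE using a Postgres-only feature
        Ok(())
    }
}

// ...while the SQLite flavour takes a different route entirely.
impl Migration<SqliteConn> for AddLoginCount {
    fn up(&self, _con: &SqliteConn) -> Result<(), Box<dyn Error>> {
        // e.g. rebuild the table, since SQLite's ALTER TABLE is limited
        Ok(())
    }
}
```

The caller then picks the impl simply by the connection type it passes in, so no separate per-backend migration files are needed.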
But otherwise, yeah, all that would be needed is a macro that takes an array of migrations and generates the code for running them, and potentially an extension to diesel-cli for generating migrations.
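The macro could expand to something like the following runner, sketched here in plain Rust with an in-memory stand-in for the connection (the `Conn` struct, `run_pending`, and the key bookkeeping are all illustrative, not diesel API):

```rust
use std::collections::HashSet;
use std::error::Error;

// Stand-in for a real connection; a real version would track applied
// migrations in a __diesel_schema_migrations-style table.
struct Conn {
    applied: HashSet<String>,
}

trait Migration {
    fn key(&self) -> &'static str;
    fn up(&self, con: &mut Conn) -> Result<(), Box<dyn Error>>;
}

struct CreateUserTable;

impl Migration for CreateUserTable {
    fn key(&self) -> &'static str {
        "2017-12-12-21323-create-user-table"
    }
    fn up(&self, con: &mut Conn) -> Result<(), Box<dyn Error>> {
        // A real migration would run SQL or regular diesel code here.
        con.applied.insert(self.key().to_owned());
        Ok(())
    }
}

/// Runs every migration whose key has not been recorded yet, in order,
/// and returns the keys that were run.
fn run_pending(
    con: &mut Conn,
    migrations: &[Box<dyn Migration>],
) -> Result<Vec<&'static str>, Box<dyn Error>> {
    let mut ran = Vec::new();
    for m in migrations {
        if !con.applied.contains(m.key()) {
            m.up(con)?;
            ran.push(m.key());
        }
    }
    Ok(ran)
}
```

Running `run_pending` twice with the same list would apply each migration only once, which is the behaviour the generated code would need to guarantee.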
@weiznich: https://docs.rs/migrations_internals/1.0.0-beta1/macro.embed_migrations.html on https://docs.rs/migrations_internals/1.0.0-beta1/migrations_internals/ does not work. https://docs.diesel.rs/diesel/macro.embed_migrations.html, found on Google, also doesn't. Do you know the correct docs link?
This has come up before, and I don't think this issue brings up anything new.
I've said in the past that for us to consider Rust migrations, there would need to be a really compelling use case. So far the only real use case that's been raised is "you can sometimes get down for free", which isn't a huge win. Writing database agnostic code is an explicit non-goal of the project. This is doubly true for schema management.
I have made sure that the code is structured so alternative migration forms can be added in the future, and we will continue to make that the case. Since this doesn't raise any new use cases that haven't been discussed, and there's nothing actionable here, I'm going to close this issue.
See #980, #336, #10, and gitter history for past discussions on this topic.
This is the one feature I look for that weaker ORMs do not implement: programmatic migrations.
@sgrif just for the record, to me this request was not about migrations declared as code.
I much prefer to write them as actual sql statements. I don't see much value in the "down for free" thing either.
BUT there are situations where you need to migrate the actual data, not the database schema, and which can't easily be done with SQL. Potentially with scripting the database a la pl/pgsql but it's usually much easier with regular application code.
This would be the prime use case for me.
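As a concrete example of that use case, here is a data migration that is awkward in plain SQL but trivial in application code: splitting a legacy combined-name column. The `User` struct and in-memory slice are stand-ins; a real version would load and update the rows through diesel inside a transaction.

```rust
// Hypothetical row type; a real version would derive Queryable etc.
struct User {
    id: i32,
    full_name: String, // legacy "Last, First" format
    first_name: String,
    last_name: String,
}

/// Data migration: split the legacy `full_name` column into first/last.
/// String handling like this is where regular Rust code beats SQL.
fn migrate_names(users: &mut [User]) {
    for user in users.iter_mut() {
        if let Some((last, first)) = user.full_name.split_once(',') {
            user.last_name = last.trim().to_owned();
            user.first_name = first.trim().to_owned();
        }
    }
}
```

Expressing the same split-and-trim logic portably in SQL, or in pl/pgsql, takes noticeably more effort than the loop above.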
@theduke I agree with you that it is a reasonable use case. I just don't think it's one that fits into Diesel. I've architected our migration infrastructure to make it easy for third party crates to come in with new functionality (see the barrel feature for an example).
@sgrif Out of curiosity, have you done any data migrations in a Rust project, eg. maybe using the barrel feature?
If we already have a table in the database, why should we dump the sql for the "migration"?
Coming from Node and .NET with EntityFramework, I wasn't even thinking that code-to-auto-migration generation would be a question.
This essentially just makes Diesel a query builder plus more or less bash scripts to run SQL files.
I'm sure this is a fine option for actual SQL pros, but as a newbie I can't write SQL well enough to manage production-level entities with complex relations, along with the up and down migrations for those entities.
This is disappointing enough for me to abandon Rust as a backend option for now.
@ahmad2smile I'm not sure what you are expecting by commenting on a semi-related issue.
Besides that: as stated above, it is/was an explicit design goal not to have an auto-migration feature in diesel itself, so if you need such a feature, diesel is probably not the library you want to use.
For anyone else interested in such a feature: I think it may be possible to implement something like this via a proc macro attribute that collects all table! definitions in a certain crate and infers changes from them. I do not think that this is a feature that belongs in diesel itself, for the reasons stated above by other core team members, but I do not see why it should be impossible for a third party to develop something like this on top of diesel. We would also likely be open to supporting such a solution, similarly to how barrel is currently supported in the cli.