Hello! Recently, knex moved to mssql v4, which dropped support for queueing queries within transactions (and it honestly looks like they aren't willing to bring it back, either). This change means that only sequential operations should be performed within a transaction. Now, onto the issue with Objection.js:
Here we have an explicit Promise.all which runs queries concurrently.
Now, we have two ways of solving this:
First, we can replace the Promise.all with a series implementation (bluebird made this quite counterintuitive...):

```js
return Promise.reduce(allModelClasses, function (_, ModelClass) {
  const table = builder.tableNameFor(ModelClass);

  if (columnInfo[table]) {
    return columnInfo[table];
  } else {
    columnInfo[table] = ModelClass.query()
      .childQueryOf(builder)
      .columnInfo()
      .then(info => {
        const result = {
          columns: Object.keys(info)
        };

        columnInfo[table] = result;
        return result;
      });

    return columnInfo[table];
  }
}, null);
```
(The code above works; I can open a PR if needed. I currently "solve" the problem by either patching this code or using another eager algorithm.)
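As an aside, the same one-query-at-a-time behaviour can be sketched without bluebird using a plain async function. This is a generic illustration of the serialization idea, not objection's actual code:

```js
// Generic sketch: run async tasks strictly one after another instead of
// firing them all at once with Promise.all. Each task only starts after
// the previous one has settled, so only one query is ever in flight.
async function runSequentially(tasks) {
  const results = [];

  for (const task of tasks) {
    results.push(await task());
  }

  return results;
}
```

With `Promise.all`, all tasks would start immediately; here the second task cannot start until the first has resolved, which is exactly the constraint the mssql driver now imposes inside transactions.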
Now, I haven't looked much, but I suspect Promise.all is also used by another feature I'm not currently using, graph inserts, which will probably suffer from the same problem with this transaction/mssql combo.
That's a really bizarre decision from mssql folks.
I'm not sure I want to remove all parallelism from objection because one db driver is acting weird. There are Promise.all calls all over the place.
They really need to fix this in mssql.
Maybe we could make the parallelism configurable? For example a Model.concurrency config that would default to 1 for mssql. Then we could swap all Promise.all calls to Promise.map(func, { concurrency: Model.concurrency }).
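For readers unfamiliar with bluebird, the concurrency idea can be sketched like this. `mapWithConcurrency` is a hypothetical stand-in written here for illustration, not bluebird's actual implementation:

```js
// Hypothetical helper mimicking bluebird's Promise.map with a concurrency
// option: at most `concurrency` mapper calls are in flight at any time.
// With concurrency 1 it degenerates to a serial loop (what mssql needs);
// with Infinity it behaves like Promise.all.
async function mapWithConcurrency(items, mapper, { concurrency = Infinity } = {}) {
  const results = new Array(items.length);
  let next = 0;

  // Each worker repeatedly claims the next unprocessed index.
  async function worker() {
    while (next < items.length) {
      const i = next++;
      results[i] = await mapper(items[i], i);
    }
  }

  const workers = Array.from(
    { length: Math.min(concurrency, items.length) },
    () => worker()
  );

  await Promise.all(workers);
  return results;
}
```

Result order is preserved regardless of completion order, matching what callers of Promise.all expect.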
But the first thing should be to try to convince the mssql contributors to realise their mistake. I don't believe there is any good reason for that decision; all other database drivers allow it. Could this possibly be handled in knex? @elhigu What do you think?
Configurable parallelism could work, as well as knex handling it instead.
I also find it a weird, backwards decision from the mssql devs (the author stated that the internal queue was deemed too confusing in https://github.com/patriksimek/node-mssql/issues/412#issuecomment-279710428). I could try making a PR to their project to reimplement the queue, because the change broke a lot of things on my end (and not only on my end, by the looks of https://github.com/patriksimek/node-mssql/issues/302 and https://github.com/patriksimek/node-mssql/issues/491).
@fer22f I added Model.concurrency config. It should work for you for most use cases, but there are some corner cases where I couldn't use it. You will still run into trouble if you execute queries in model hooks inside transactions.
You should now be able to do something like this:
```js
class BaseModel extends Model {
  static get concurrency() {
    return 1;
  }
}
```
and then inherit your models from BaseModel.
c17054289f should fix this for the most part, but I'd still urge you to get mssql or knex developers to fix this.
Thank you! I appreciate the fast response time from the team maintaining this project. Transitioning to it from raw knex has been awesome, and bookshelf wouldn't cut it with its restrictive primary key model.
> You will still run into trouble if you execute queries in model hooks inside transactions.
Unfortunately I have run into exactly this: using a transaction with an insert graph, where all of the models have various hooks that require different things being patched on after I get the inserts back (because I need the ids...).
Is there an alternative solution that I might be able to hack together? The only thing I can think of at the moment is nixing all of my query hooks, and then, when I get the returned graph back, running all of those queries in series... it's really inconvenient and messy though :(
There's nothing to be done in objection. Convince mssql or knex people to fix this.
Hi @koskimas,
I've encountered this issue again, using the insertGraph method, when inserting arrays of things:
```js
const user = await User.insertGraph({
  name: 'My Name',
  posts: [
    { title: 'Post1', ... },
    { title: 'Post2', ... }
  ]
});
```
I know this is a knex or mssql problem, but I'm looking for a workaround in the meantime. I've added the `static get concurrency() { return 1; }` getter to both the User and Post models, but it doesn't work.
Any idea of how I can implement this behaviour?
Thanks a lot!
@danigb I can't reproduce that. Could you write a simple script I can run that reproduces the issue?
My exact code is this one:
```js
const workflow = await Workflow.query(trx).insertGraph({
  name: "Workflow",
  permissions: { level: "admin", permission_code: "AAA" },
  steps: [
    {
      name: "A",
      participants: [
        { user_id: 21, role: "viewer" },
        { user_id: 22, role: "manager" }
      ]
    }
  ]
});
```
Where Workflow, Steps and Participants are actual models, each related one-to-many to the next.
EDIT: if I remove the second participant, it works.
I can't run that. Create a standalone file I _can easily run_. The problem is not in the snippet you posted, but somewhere in your setup or other code.
What do you need exactly? Migrations, model definitions and a query example? How can I write a good script for you to test?
I need a file I can run that reproduces the problem. Simple as that. Remember to actually run the file yourself to make sure it works and does reproduce the problem. You can use the reproduction_template.js file in the repo root if you want.
I have a test case for insertGraph with concurrency = 1 and it works as expected. There is something you are doing different from my test case and I need to figure out what it is.
Just to confirm @koskimas, did c170542 remove concurrency across the board as the default? So
```js
let movies = await Movie
  .query()
  .where('name', 'like', '%Terminator%')
  .eager('[actors, reviews]');
```
will run the three queries essentially in serial no matter which db driver I'm using?
And I need to update the concurrency prop on my base model to get the old behavior of having the actors and reviews queries run in parallel?
@mastermatt Yes, internal concurrency is disabled for now, but does that really matter? It increases the latency a little bit when your server is seeing little to no traffic, but with more traffic it should actually even out latency between similar requests because the connection pool isn't hogged by a single request doing a huge eager query. If you need the minimum latency and have relatively little traffic, you can get some speedup by setting concurrency to a bigger number.