Node version:
Sails version _(sails)_:
ORM hook version _(sails-hook-orm)_: "sails-hook-orm": "2.1.1",
Sockets hook version _(sails-hook-sockets)_:
Organics hook version _(sails-hook-organics)_:
Grunt hook version _(sails-hook-grunt)_:
Uploads hook version _(sails-hook-uploads)_:
DB adapter & version _(e.g. [email protected])_: "sails-postgresql": "1.0.1"
Skipper adapter & version _(e.g. [email protected])_:
I just had a serious issue using the `destroy` method and wanted to share it with you and see what can be done.
I know that the `destroy` method drops the table if passed an empty object, and that this is documented. I think this is a bad thing in and of itself.
The bigger issue, and what caused my problem, is that the object doesn't have to be passed as a literal `{}` to cause the drop; it can come in as `Table.destroy({ id: undefined })`. I'm pretty sure I even tested this case with one of the previous versions of the library, and it didn't use to work this way.
My suggestion is, at the very least, to document this case, if not to separate the `destroy` and `drop` methods.
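To make the failure mode concrete, here's a small sketch in plain Node (no Sails required) of why `{ id: undefined }` ends up behaving like `{}`: if keys with `undefined` values are effectively dropped from the criteria, nothing is left to constrain the query. The pruning step below is an illustration of that idea, not Waterline's actual code.

```javascript
// Hypothetical illustration: stripping undefined-valued keys from a criteria
// object, which is effectively what makes { id: undefined } behave like {}.
const criteria = { id: undefined };

const pruned = Object.fromEntries(
  Object.entries(criteria).filter(([, value]) => value !== undefined)
);

console.log(pruned); // {} -> as broad as Table.destroy({})
```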
Hope this makes sense and thanks for the great work on this library. Let me know if I can help in any way.
@maximovicd Thanks for posting! We'll take a look as soon as possible.
This is intentional behaviour: https://github.com/balderdashy/sails/issues/4639#issuecomment-320369193
Things could definitely be explained more clearly, and perhaps a safety mechanism introduced to prevent this, though I feel the current state of things allows for convenient query construction (as mentioned in the linked comment), and it'd be a shame to lose that.
I recommend simply using `null` in queries, as `undefined` doesn't have the same meaning, or the meaning one might expect. Any dynamic data passed in should of course be validated.
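A quick way to see the difference in plain JavaScript: `null` is an explicit value that survives serialization, while `undefined`-valued keys simply vanish. (This illustrates general JS semantics, not Waterline internals.)

```javascript
const withNull = { id: null };
const withUndefined = { id: undefined };

// JSON.stringify drops undefined-valued keys but keeps explicit nulls.
console.log(JSON.stringify(withNull));      // {"id":null}
console.log(JSON.stringify(withUndefined)); // {}
```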
An alternative, automatic solution is simply to use something like `sails-hook-safe-orm`.
Thanks for the reply! I have nothing against the policy in and of itself, but I feel that having it apply to the `destroy` method, for consistency's sake, is wrong.
I'll gladly use `null`; the thing is, my query parameters are rarely literal. Say I have this:
```javascript
let modelID; // oops, forgot that it's not initialized

if (onceInACoupleOfMonths) {
  Model.destroy({ id: modelID }); // modelID is undefined, and there goes the table
}

modelID = 'someID';
Model.destroy({ id: modelID }); // all cool
```
And it doesn't matter how complex your query is: get an object that's undefined enough, and you can trigger the destroy.
Additionally, the workaround is not complex at all; something like this would make it all good:
```javascript
const safeDestroy = (Model, parameters) => {
  const allDefined = (ob) => {
    if (typeof ob !== 'object') return ob && true;
    if (!Object.keys(ob).every(p => allDefined(ob[p]))) {
      return false;
    }
    return true;
  };
  if (!allDefined(parameters)) throw new Error('Unsafe destroy!');
  return Model.destroy(parameters);
};
```
It might be that it's just my opinion that's wrong, but I needed to put it out there. You can take it into account if you want.
Absolutely, I understand the concern and completely respect your opinion; I'm only sharing mine here as well, perhaps playing a little Devil's advocate.
My stance is likely based on a habit of building service methods to issue queries, with aggressive input validation. We've got to make sure user input is safe, so we might as well ensure we haven't goofed up somewhere as well. Coupled with comprehensive tests, this keeps things safe.
Of course, I can easily appreciate the issue raised here. There's the potential for users to seriously shoot themselves in the foot with this, and some sort of safety option would go a long way toward keeping that from happening.
It'd be awesome if we could get a config field or `.meta()` option that adjusts how things work in this sort of scenario. While any such field would have to default to the current behaviour, it would at least allow enabling extra safety until the next major version, where breaking changes can be made. This would also allow us to opt in to the old behaviour should the default ever change.
(I very much would like to retain some means of relying on the current way of things, even though they may not represent the safest potential default)
Your workaround demonstrates that this problem can be avoided with little hassle, though there are some minor issues with it:

```javascript
if (typeof ob !== 'object') return ob && true;
```

This line will return `false` given a falsey value such as `''`, `0`, etc.

Because `null` input fields aren't caught by the above if-statement (`typeof null === 'object'`), they proceed through to:

```javascript
if (!Object.keys(ob).every(p => allDefined(ob[p]))) {
```

which will trigger an error on `Object.keys(...)`.
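For what it's worth, here's one way the helper could be adjusted to address both issues. This is only a sketch, treating `undefined` as the single unsafe value:

```javascript
const allDefined = (ob) => {
  if (ob === undefined) return false;                     // the one unsafe value
  if (ob === null || typeof ob !== 'object') return true; // '', 0, false, null are all fine
  return Object.values(ob).every(allDefined);             // recurse into nested criteria
};

console.log(allDefined({ id: 0 }));               // true
console.log(allDefined({ id: null }));            // true
console.log(allDefined({ id: undefined }));       // false
console.log(allDefined({ a: { b: undefined } })); // false
```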
Decided to whip up a little workaround helper inspired by yours, if anyone's inclined to monkey-patch things rather than going the sails-hook-safe-orm route or similar:
Gist Rua-Yuki/1058043ac644c1e21d605756e34f2038
I played around a bit with three different modes of operation, each of which I think has its merits. Configured via `sails.config.models.undefinedTolerance`:

- `null`: (default) transforms `undefined` values to `null`.
- `throw`: immediately throws an error upon encountering `undefined` values.
- `ignore`: strips the corresponding fields from the query; take caution, as this can result in less specific queries than intended.

Thanks, love the flexibility of your approach; will go with something like that.
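For anyone skimming, a rough sketch of what those three modes amount to (illustrative only; the mode names come from the gist's config, but this is not its actual code):

```javascript
// Hypothetical sketch of the three undefinedTolerance modes.
function applyTolerance(criteria, mode) {
  const out = {};
  for (const [key, value] of Object.entries(criteria)) {
    if (value !== undefined) {
      out[key] = value;
      continue;
    }
    if (mode === 'throw') throw new Error(`Undefined value for "${key}" in query`);
    if (mode === 'null') out[key] = null;
    // 'ignore': drop the key entirely
  }
  return out;
}

console.log(applyTolerance({ id: undefined }, 'null'));   // { id: null }
console.log(applyTolerance({ id: undefined }, 'ignore')); // {}
```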
I'm glad we are on the same page as far as the importance of this goes. As much as it is true I'd like to have more validated service calls, the truth is mistakes are made.
I think having some kind of way to opt-out of behavior, as you suggest, would be more than a great start. I'll have to check if a similar issue can happen with updates (overwriting everything with the same record) and fix it too.
@maximovicd @Rua-Yuki Great discussion.
> I'll have to check if a similar issue can happen with updates (overwriting everything with the same record) and fix it too.
The same thing is true for `.update()` and `.archive()`.
> I'm glad we are on the same page as far as the importance of this goes. As much as it is true I'd like to have more validated service calls, the truth is mistakes are made.
💯
@mikermcneil and I have been bitten by this twice ourselves on Sails Co's projects. We introduced `destroyOne()`, `archiveOne()`, and `updateOne()` as an incremental step, but more needs to be done. Longer term, we've discussed moving towards explicit `destroyAll()`, `updateAll()`, and `archiveAll()` methods. This, at least, makes the potential more explicit in the code.
Thanks for sharing your workaround!
@rachaelshaw @mikermcneil There have been a lot of conversations around this in https://github.com/balderdashy/sails/issues/4639, from myself, @Josebaseba, and others. Right now, this is one of my concerns about migrating from our 0.12 version to 1.x. We could monkey-patch it, or use another library, but we believe having the option to avoid that behavior is necessary.
I'm working in production with a fork of the ORM hook that throws an error if you pass an `undefined` value. I think we should have that option in the `config/models.js` file: `allowUndefinedQueries: false // defaults to true`, or something similar.
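For illustration, the suggested setting would sit alongside the other model defaults. Note that `allowUndefinedQueries` is only the commenter's proposed name; it does not exist in Sails today:

```javascript
// config/models.js — hypothetical option, as proposed above
module.exports.models = {
  migrate: 'safe',
  allowUndefinedQueries: false, // proposed: throw instead of silently broadening the query
};
```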
The `destroyAll()` and `updateAll()` methods are still dangerous: even knowing that they can update/destroy all the records, you may have a bug that you didn't see. I shot myself updating an entire collection instead of one record in prod :)
For example, if you have a `res.send()` without a `return` and after that try to access the `req.param('id')` that previously had a value, you'll find that after the `res.send()` the value is `undefined`:
```javascript
req.param('id'); // -> 1234

if (!something) {
  res.badRequest(); // oops: missing `return`, so execution continues below
}

req.param('id'); // -> undefined
await User.update({ id: req.param('id') }).set({ name: 'Joe' }); // now you can cry in silence
```