The use of the `delete` operator has negative performance effects on V8's hidden classes optimization. In general it's recommended not to use it.
Alternatively, to remove an object's own enumerable properties, we could create a new copy of the object without those properties (example using lodash):
_.omit(o, 'prop', 'prop2')
Or we could set the property value to `null` or `undefined` (the latter is implicitly ignored when serializing to JSON):
o.prop = undefined
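For illustration, here is a minimal sketch of both alternatives in plain JavaScript (no lodash); the object `o` and the property names are made up for the example:

```js
// A simple flat object with own enumerable properties
const o = { prop: 1, prop2: 2, keep: 3 };

// 1. Copy without the unwanted properties (same spirit as _.omit)
const copy = {};
for (const key of Object.keys(o)) {
  if (key !== 'prop' && key !== 'prop2') copy[key] = o[key];
}

// 2. Keep the same object, just blank the property; JSON.stringify skips
//    properties whose value is undefined (null would still be serialized)
o.prop = undefined;
console.log(JSON.stringify(o));    // {"prop2":2,"keep":3}
console.log(JSON.stringify(copy)); // {"keep":3}
```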
Sounds like a reasonable request. Here are some benchmarks: http://jsperf.com/delete-vs-undefined-vs-null/16
Setting to `undefined` seems to be the ideal option if it's not too problematic to change.
I am not against code optimisations, but I have always believed there should be a balance between code readability/clarity and the actual performance gains when deciding whether an optimisation should be made.
In this particular case, we call `delete` about 5 times (did not count exactly) before each API request, which usually takes about 200-600ms to complete. I am skeptical about the performance boost gained by ditching `delete` in favour of setting a property to `undefined`. In practice, we are probably talking about gains of 1ms/request at most. :)
That being said, `lib/apirequest.js` sure could use a rewrite! (that's where most `delete` calls are)
Yep, you make a fair point, which is why I never second-guessed using `delete` in the first place. That being said, if it's relatively easy to switch away from, it's always better to do so. Readability of `delete options.x` vs. `options.x = undefined` is equal in my opinion.
@Alaneor I agree with you if you're thinking about premature optimization, which could indeed be considered an anti-pattern, but this case is a bit particular and often misunderstood. Let me explain.
The issue with the `delete` operator, which is specifically applicable to (but not limited to) the V8 engine, is that it removes all the benefits of the hidden classes strategy, one of the powerful features that makes V8 an efficient JIT compiler. Given that the node.js/io.js runtime is based on V8, this issue should be taken into account.
On the other hand, the downsides of the `delete` operator are not limited to the specific operation/block where it's used; it also causes performance issues throughout the program's life-cycle for objects that remain in memory (stack/heap) and have not yet been flushed by the garbage collector. The jsPerf test suite just recreates an isolated micro-benchmark with no program life-cycle.
Given that this package is commonly used in production by thousands of developers and companies, this kind of minor code optimization is highly appreciated, all the more because it comes from Google, and people generally place a great deal of faith in Google and expect the best quality from them (this is not a criticism; in fact it's the opposite. People love Google, and that's evident).
Regarding code readability, in this particular case I don't think it's a concern, since the syntax noise is essentially the same and the expressiveness is equal. In any case, you can always create a helper function to provide more grammatical expressiveness, as sketched below.
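Something like this hypothetical helper (the name `unset` and its signature are purely illustrative) keeps the "remove these properties" intent readable while still just assigning `undefined` under the hood:

```js
// Blank the given own properties without using delete, preserving the hidden class
function unset(obj, ...keys) {
  for (const key of keys) {
    obj[key] = undefined;
  }
  return obj;
}

// Usage: reads closer to the intent than repeated assignments
// unset(options, 'auth', 'params');
```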
And in general I agree with following YAGNI pragmatic thinking by default.
I created the issue just after reviewing the code in a couple of modules and noticing a bit of overuse of this operator. In general, experienced JavaScript hackers rarely use this operator, especially when their code is part of a package that is widely used by the community.
Here are some articles that notice the same performance issue with a better technical explanation and endorsement:
http://www.html5rocks.com/en/tutorials/speed/v8/
http://www.future-processing.pl/blog/javascript-is-slow/
https://github.com/skeeto/resurrect-js#options
@h2non We always welcome pull requests, especially for small optimisations and fixes such as this. This would be a good first bug.
Thanks @ryanseys. Sure! I'll probably send a PR with some improvements when I have free time.
@h2non @ryanseys @jmdobry I would like to work on this and send a PR.
Before doing any sort of optimizations, we should profile the API and find out where the big perf hits are. Blindly making a change like this could help, or it could make things slower. Until we do an analysis and put a measurement framework in place, it's impossible to know.
I'm going to close this out for now.
@brandondees @tmornini you guys are great at reminding me about premature optimizations. @JustinBeckwith raises a great point. You don't know what you don't know. 👍
@JustinBeckwith @snuggs If you can't measure it, you shouldn't optimize it.
So, if you really want to optimize your code, measure everything. 😄
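In that spirit, a rough micro-benchmark sketch (mine, not from the thread) is easy to put together, though as noted earlier, isolated timings like this miss whole-program effects such as hidden-class deoptimization and GC pressure, so treat the numbers with caution:

```js
// Compare delete vs. assigning undefined over many small, identically shaped objects
function run(label, mutate) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < 1e6; i++) {
    const o = { a: 1, b: 2, c: 3 };
    mutate(o);
  }
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${label}: ${elapsedMs.toFixed(1)} ms`);
}

run('delete   ', (o) => { delete o.b; });
run('undefined', (o) => { o.b = undefined; });
```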
You can also delete with destructuring:
const obj1 = { a: 1, b: 2 }
const { a, ...obj2 } = obj1
console.log(obj2) // { b: 2 }
@justinmchase this is inefficient as it creates an additional object (and potentially wakes up the garbage collector).
@snuggs how so?
@ug02fast `delete` being slow is the platform's fault. That's a THEM problem. Spread-ing is just that: creating an extra object for the sake of performance. That's an OUR problem (introduced unnecessarily for the sake of performance).
Use `delete` and your code will get faster over time (as the platform gets patched). Using spread becomes technical debt over time: more right _(now)_ than "right". Spread has a purpose... handling an unknown object shape. Using it as an optimization diverges from that purpose. Actually it's the inverse here: you know the shape of `obj1`; you just want it to be "faster". I agree with @tmornini re benchmarking. IMHO the only time we should reinvent the platform is for polyfilling.
Hope this helps.
@ug02fast As an addendum, come to think of it, there are TWO unnecessary objects: `obj2` AND `a`. The garbage man cometh for `obj1` _(the object you actually care to work with)_ AND `a`. If you are familiar with how the GC works, not only may this be slower, but your code also becomes nondeterministically slow(er, at the times you least suspect).