import {merge} from "ramda";
console.log(
  merge(
    {age: 30, tags: ["a"]},
    {tags: ["b"]}
  )
); // {age: 30, tags: ["b"]} -- ok

console.log(
  merge(
    {model: {age: 30, tags: ["a"]}},
    {model: {tags: ["b"]}}
  )
); // {model: {tags: ["b"]}} -- ??? where is age ???
So now merge behaves like Object.assign and does not merge nested objects.
console.log(
  Object.assign(
    {},
    {model: {age: 30, tags: ["a"]}},
    {model: {tags: ["b"]}}
  )
); // {model: {tags: ["b"]}} -- this is still lame but expected from Object.assign
The note from the docs
This function will not mutate passed-in objects
is not clear. Are we talking about immutability, or about mimicking Object.assign behavior?
If this is the expected behavior, I'd recommend reconsidering it.
In my experience, business cases that require merging nested objects are much more common. And, even more important, you can emulate "shallow merge" behavior with a "deep merge" but not vice versa.
// emulate SHALLOW merge for specific field
result = assoc("model", data2.model, merge(data1, data2));
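To spell that out, assuming a hypothetical mergeDeep (the name and the deep behavior are assumptions here, not an existing Ramda function):
import {assoc} from "ramda";
let data1 = {model: {age: 30, tags: ["a"]}};
let data2 = {model: {tags: ["b"]}};
// hypothetical deep merge would give: {model: {age: 30, tags: ["b"]}}
// forcing SHALLOW behavior for "model" afterwards:
let result = assoc("model", data2.model, mergeDeep(data1, data2));
// result: {model: {tags: ["b"]}} -- same as today's shallow merge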
Just because you want mergeDeep does not mean "merge is broken".
For the short-term, I recommend we explicitly document that merge is shallow, not recursive.
For the longer term, we should decide if we want to tackle this problem. I think it is thornier than it may appear on the surface. We'd have to handle cases for constructed objects, undefined properties, cycle detection, clobbering with different types, etc.
Who wants to take a whack at it? (not it!)
Not it!
There are several requests that have come up multiple times. Some, like foldObj, are shot down for solid API reasons. Some, like a deep clone, were delayed because of implementation difficulties. This one is a little trickier. @paldepind has mostly convinced me that there are no technical issues with mergeDeep that don't also plague our current shallow merge. And yet the uses and limitations of our current merge seem clear in my mind, but I can't figure them out for a deep merge.
What happens with cyclic references? Copying the references at the root doesn't seem to be an issue, but how about if we want to walk the tree?
What happens with constructed object properties and their prototypal members? Do we duplicate all prototypal values in the merged object? Ignore them?
It's not that I don't think we can solve these issues. It's that I think these are difficult and yet uninteresting questions. So it makes it hard for me to want to work on them.
So @ivan-kleshnin, are you interested in working on an implementation?
Update:@buzzdecafe pointed out that this might sound a bit obnoxious. It's an uninteresting problem to me. That doesn't mean that others will find it so. I'm hoping that @ivan-kleshnin might find it more to his liking. Or that @paldepind might, as I believe he was requesting the same thing in Gitter.
Is the reason for not doing this that it is complicated? Is it not possible to take an existing version, shamelessly copy it and then ramdaify it if needed?
I actually intended to submit a PR with a deepMerge to put some action behind my words. But I haven't gotten around to doing it.
This is how I would implement deepMerge:
1/ Deep merge recurses into arrays and non-atomic objects. Non-atomic objects are objects that have keys in the eyes of Object.keys. It will for instance recurse into an object literal but not into a date object.
2/ Values from objects are copied into a new plain object.
3/ Values from arrays are copied into a new array. Sparse arrays are not handled.
I think the above rules are very simple both to understand and to implement. They don't promise too much. The implementation will do what users expect but not attempt to do more than that (deal with inherited properties, for instance).
What do you think?
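For what it's worth, a very rough sketch of rules 1-3 in plain JS might look like this (the names and details are mine, not an actual Ramda implementation; the array handling here is just one possible reading and is questioned below):
// Sketch only; hypothetical deepMerge following the three rules above.
function isNonAtomic(x) {
  // "has keys in the eyes of Object.keys": true for {a: 1}, false for e.g. new Date()
  return x !== null && typeof x === "object" && Object.keys(x).length > 0;
}

function deepMerge(a, b) {
  if (Array.isArray(b)) {
    // rule 3: values from arrays are copied into a new array
    return b.map(function(v) { return deepMerge(undefined, v); });
  }
  if (!Array.isArray(a) && isNonAtomic(a) && isNonAtomic(b)) {
    // rules 1 and 2: recurse and copy values into a new plain object
    var result = {};
    Object.keys(a).forEach(function(k) { result[k] = a[k]; });
    Object.keys(b).forEach(function(k) { result[k] = deepMerge(a[k], b[k]); });
    return result;
  }
  return b; // atomic values (numbers, dates, ...): the second argument wins
}

deepMerge({model: {age: 30, tags: ["a"]}}, {model: {tags: ["b"]}});
// => {model: {age: 30, tags: ["b"]}}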
@paldepind I look forward to seeing what you come up with. I think you will need cycle detection. And the name should probably be mergeDeep as we have cloneDeep
I think you will need cycle detection.
You don't think a stack error is good enough?
And the name should probably be mergeDeep as we have cloneDeep
That I have no opinion on. But where is cloneDeep?
But where is cloneDeep?
my mistake -- we dropped shallow clone and renamed cloneDeep to clone. Even I can't keep up with all the API changes
We do have eqDeep, and it makes sense that merge and mergeDeep should be next to each other.
Your approach sounds good to me, @paldepind!
Yeah "broken" is an overstatement, so I renamed thread to be neutral.
1/ Deep merge recurses into arrays and non-atomic objects. Non-atomic objects are objects that have keys in the eyes of Object.keys. It will for instance recurse into an object literal but not into a date object.
2/ Values from objects are copied into a new plain object.
3/ Values from arrays are copied into a new array. Sparse arrays are not handled.
I think the above rules are very simple both to understand and to implement. They don't promise too much. The implementation will do what users expect but not attempt to do more than that (deal with inherited properties, for instance).
Sounds correct, but it does not cover the interesting question of how to deal with arrays. Let's say we have:
a = {tags: ["x"]}
b = {tags: ["y"]}
c = merge(a, b)
Will we concat tags as Lodash does by default? Will we replace tags with the second version as ImmutableJS does? Will we provide callback arguments to customize this process like Lodash does?
Just checked, Underscore behaves the same as Ramda now.
I personally find concatenation the least expected default behavior. I asked @jdalton to change it in Lodash but my proposal was rejected. Optional callbacks are probably not in the Ramda style.
The reason I said it's "broken" is that I find this whole distinction between shallow and deep operations fairly synthetic. I'm not aware of such a classification in reference languages like Clojure or Haskell. So I would go with deep equality, merging, and cloning, and remove the shallow counterparts completely. I'm sure you had some reasons to do it the way it is (performance or something). Of course JS is broken in the first place. But I still can't force myself to like it.
my mistake -- we dropped shallow clone and renamed cloneDeep to clone. Even I can't keep up with all the API changes
That's it. How about doing the same with equality and merging :smiling_imp:?
Will we concat tags as Lodash does by default?
Just fyi, lodash doesn't concat arrays when merging.
Yes, I refreshed my memory of that old issue. It was not about concatenation.
It was about unpredictable merge behavior:
OMG, Ramda also does this:
import Underscore from "underscore";
import Lodash from "lodash";
import Immutable from "immutable";
import Mori from "mori";
import Ramda from "ramda";
let modelA = {
tags: ["foo"],
};
let modelB = {
tags: [],
};
console.log(Object.assign({}, modelA, modelB));
// {tags: []}
console.log(Underscore.extend({}, modelA, modelB));
// {tags: []}
console.log(Immutable.Map(modelA).merge(modelB).toJS());
// {tags: []}
console.log(Mori.toJs(Mori.merge(Mori.toClj(modelA), Mori.toClj(modelB))));
// {tags: []}
console.log(Lodash.merge({}, modelA, modelB));
// {tags: ['foo']} -- ^_^ wtf?
console.log(Ramda.merge({}, modelA, modelB));
// {tags: ['foo']} -- ^_^ wtf?
Well, it's quite noteworthy that I raised the same issue, with the same name, for a different lib.
There is some doom in it.
This is ridiculous. I want to revert back "broken" title.
Python:
dict({"tags": ["foo"]}, **{"tags": []})
# {"tags": []}
Clojure:
(merge {:tags ["foo"]} {:tags []})
; {:tags []}
Did you guys just copy the Lodash implementation for that func? ;)
I think you're mixing up merge and assign. In lodash, at least, merge will attempt to merge values while assign/extend/(immutable merge) will assign values (overwriting previous values).
I think you're confusing merge with something bizarre where the second argument overrides the first but sometimes the first overrides the second.
Naw ;)
Python:
dict({"tags": ["foo"]}, **{"tags": []}) # {"tags": []}Clojure:
(merge {:tags ["foo"]} {:tags []}) ; {:tags []}
Ramda:
R.merge({tags: ['foo']}, {tags: []})
// {tags: []}
This looks consistent to me. This is the behaviour I expect from a shallow merge function.
console.log(Ramda.merge({}, modelA, modelB));
Ramda's merge only takes two arguments, modelB is never considered here. There is also mergeAll.
A number of libraries, going back at least to jQuery's extend, used this function to mutate the first parameter, which led to the pattern of passing an empty object as the first parameter. I'm sure that's what R.merge({}, modelA, modelB) is all about. Ramda never mutates user data, so this is not an issue. As @kedashoe says, just use R.merge(modelA, modelB), and we should get similar behavior to most of the other libraries and languages. The array concatenation behavior is odd.
@davidchambers I made a typo
Example should look like:
R.merge({}, {tags: ['foo']}, {tags: []})
// {tags: ['foo']}
But as people have already corrected me:
Ramda's merge only takes two arguments, modelB is never considered here.
Yeah, that's right. It was late and I lost attention. Sorry about the misinformation.
Still, what do you guys think about this one:
my mistake -- we dropped shallow clone and renamed cloneDeep to clone. Even I can't keep up with all the API changes
That's it. How about making the same with equality and merging :smiling_imp: ?
My rant about it.
Equality is equality. Objects are either equal or not. All this deep vs shallow comparison is purely implementation specific. It's brittle. How should deepEq behave on scalar values? Throw? Fall back to eq? Did we make a mistake or did we want a shortcut?
IMO, functional programming is mostly about data, not even functions.
So we win when we concentrate on data, not its representation.
Simple example:
user=> (= '(1 2 3) [1 2 3])
true
What's going on here? In Clojure there are two datatypes: lists and vectors. They have fundamentally different memory representations (linked lists vs trees; that's about memory persistence, if you care).
But at comparison Clojure says they are equal. Because they contain the same data.
A smart reader will interrupt me and ask about strings. Aren't those the same data but with a different representation, he asks:
(= "123" ["1" "2" "3"])
false
Good catch. Strings are arrays of chars. Clojure fails here because it was built upon Java and inherited bad parts from it. Performance was improved, but along with that we got a ton of uncomfortable consequences: you can't even use the same functions on arrays and strings to begin with.
The best thought-out languages, like Haskell or Racket, which weren't bound by compatibility issues, chose to treat strings as arrays of chars.
Meanwhile in JS people can't even agree on whether two empty arrays are equal:
What makes two arrays equal for you? Same elements? Same order of elements? Encoding as JSON only works as long as the element of the array can be serialized to JSON. If the array can contain objects, how deep would you go? When are two objects "equal"? – Felix Kling Oct 20 '11 at 14:31 source...
This is the most ridiculous and absurd programming statement I have ever heard. I always cite it.
That guy was so blinded by the JS-specific implementation that he totally forgot about reality. About math.
We don't need "objects" and other poor abstractions. We have collections and maps, which can and should be compared by the data they "wrap", and only by that. Order is a natural data characteristic. Some data are ordered, some aren't.
So, back to original question. Is there something useful in providing both eq and deepEq?
Did you guys think about gt, lt and other useful primitives for type-safe data-oriented comparison?
gt(Number(1), 2) == false
Will you also provide deepGt, deepLt to be consistent :wink:?
I bet you won't.
I'm afraid I may need to rant about your rant. :wink:
Equality is equality. Objects are either equal or not. All this deep vs shallow comparison is purely implementation specific. It's brittle.
We live in a world of implementations. People use Ramda only inside Javascript environments... obviously. When you design an API, you have to decide what restrictions you're going to live with. Ramda has from the beginning been a library that has tried to work with the grain of the language. We certainly pick the parts of the language that interest us, but we're not trying to be Haskell in Javascript clothing. This means that we have to consider questions that deluge the Javascript ecosystem, including ones about references and whether two distinct handles refer to the same item.
IMO, functional programming is mostly about data, not even functions.
So we win when we concentrate on data, not its representation.
Simple example:
user=> (= '(1 2 3) [1 2 3]) true
You make it sound as though any container that holds the values 1, 2, and 3 in that order should compare equal. Is that really your position? For it seems absurd to me. It's quite easy to imagine a binaryTree function that works like this:
var x = binaryTree(binaryTree(1, 2), binaryTree(3));
var y = binaryTree(binaryTree(1), binaryTree(2, 3));
x.display(); //=> (1, 2, 3)
y.display(); //=> (1, 2, 3)
x.add(4); y.add(4);
x.display(); //=> (1, 2, 3, 4)
y.display(); //=> (1, 4, 2, 3)
foldl1(average, x); //=> 25/8
foldl1(average, y); //=> 21/8
Meanwhile in JS people can't even agree on whether two empty arrays are equal:
What makes two arrays equal for you? Same elements? Same order of elements? Encoding as JSON only works as long as the element of the array can be serialized to JSON. If the array can contain objects, how deep would you go? When are two objects "equal"? – Felix Kling Oct 20 '11 at 14:31 source...
This is the most ridiculous and absurd programming statement I have ever heard. I always cite it.
That guy was so blinded by the JS-specific implementation that he totally forgot about reality.
No, that guy (one of the most respected members of the StackOverflow Javascript community, BTW) was answering a question titled "Comparing two arrays in Javascript" (emphasis added) posed by someone who was unsurprised that a simple == doesn't work. His answer was on the mark, given the context of the JS language. If you answered that question with a lecture about value equality and the interesting trade-offs made by Clojure that were unnecessary in the purer Haskell and Racket, yours would be the most ridiculous and absurd answer imaginable. ("How can I clean the fuel injectors for my 2004 Ford Taurus with a 3.8L V6?" "Well, in an ideal world, no one would be driving internal combustion engine vehicles, so there would be no need for any fuel delivery systems whatsoever." "Sure, dude.")
So what, to your mind, should the result of this snippet be?
var a = [];
var b = [];
a.push(b);
b.push(a);
eq(a, b); //=> ??
So, back to the original question. Is there something useful in providing both eq and deepEq?
Well, they answer very different questions when it comes to non-primitive values. If
eqDeep(a, b); //=> true
Then for any fn,
eqDeep(fn(a), fn(b)); //=> true
var a2 = clone(a);
eqDeep(a, a2); //=> true
But if
eq(a, b); //=> true
then
eqDeep(a, b); //=> true
var a2 = clone(a);
var b2 = mutate(b);
eq(fn(a), fn(b2)); //=> true
eqDeep(fn(a), fn(a2)); //=> false
I guess whether that seems useful is in the eyes of the beholder. But for anyone who is working with any mutable data, reference equality may well turn out to be important.
Equality is equality. Objects are either equal or not. All this deep vs shallow comparison is purely implementation specific. It's brittle.
I think you're very right if we only talk about _values_ (values of course can't change, 7 is always 7). And that's all you have in a purely functional programming language. But in JavaScript we also have pointers to values. And then you suddenly have a new type of equality. Pointers can be considered equal if they point to the same value, or they can be considered equal if they point to the same location in memory. I.e. equality is not as simple as you make it when the language you're using supports pointers to mutable values.
I think you're confusing merge with something bizarre where the second argument overrides the first but sometimes the first overrides the second.
No, that is not what is going on. The array elements from the second override those from the first. Thus, when there are no elements in the array in the second object, nothing will be overridden in the resulting object. I think the lodash behavior makes sense.
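If I recall lodash's behavior correctly, the index-wise overriding looks roughly like this (illustrative):
_.merge({tags: ["foo", "bar"]}, {tags: ["baz"]});
// => {tags: ["baz", "bar"]} -- index 0 overridden, index 1 kept
_.merge({tags: ["foo"]}, {tags: []});
// => {tags: ["foo"]} -- nothing to override, so nothing changes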
@paldepind
No, that is not what is going on. The array elements from the second override those from the first. Thus, when there are no elements in the array in the second object, nothing will be overridden in the resulting object. I think the lodash behavior makes sense.
That means Lodash treats Arrays like Objects and offsets like keys in this exact case. This is understandable only from the point of view of the JS implementation, but it is weird from every other point of view. Lodash totally lost the Array (the only ordered collection!) concept with this decision. This behavior makes no sense. Can you provide any business or other use case where this behavior would be desirable? I think you never need to literally merge arrays, because offsets are not keys. You always either replace or push / concat. I've never heard of a language which has a command to "overlap" arrays based on their offsets.
overlap([], ['x']) == overlap(['x'], []) == ['x']
I think you're very right if we only talk about values (values of course can't change, 7 is always 7). And that's all you have in a purely functional programming language. But in JavaScript we also have pointers to values. And then you suddenly have a new type of equality. Pointers can be considered equal if they point to the same value, or they can be considered equal if they point to the same location in memory. I.e. equality is not as simple as you make it when the language you're using supports pointers to mutable values.
Well, of course there are cross-data pointers. But I doubt that a library which provides and promotes a functional approach to JS should be concerned with that. Do you handle cases where a developer puts something strange (like an object of his own class) into the data? No, you consider it his own problem for sending "broken" data to Ramda. Same thing with links: just ignore them, with a memory leak if it's inevitable. It's just not your business.
@CrossEye
You make it sound as though any container that holds the values 1, 2, and 3 in that order should compare equal. Is that really your position? For it seems absurd to me.
Not "any container". Any container that meant to represent plain sequential data for user eyes.
It's quite easy to imagine a binaryTree function that works like this
I think you're confusing data from the user's and the machine's points of view. If you use trees to implement sequences, they should behave like sequences. If you use trees to implement graphs, they just have a bad console.log in your case. If you're somewhere in the middle, you're doing something wrong.
No, that guy (one of the most respected members of the StackOverflow Javascript community, BTW) was answering a question titled "Comparing two arrays in Javascript" (emphasis added) posed by someone who was unsurprised that a simple == doesn't work. His answer was on the mark, given the context of the JS language. If you answered that question with a lecture about value equality and the interesting trade-offs made by Clojure that were unnecessary in the purer Haskell and Racket, yours would be the most ridiculous and absurd answer imaginable. ("How can I clean the fuel injectors for my 2004 Ford Taurus with a 3.8L V6?" "Well, in an ideal world, no one would be driving internal combustion engine vehicles, so there would be no need for any fuel delivery systems whatsoever." "Sure, dude.")
Well, I don't agree at all. I hate when people make you think there is something wrong with you, when it's definitely something wrong with their tool. He should have said something like:
Good question, pal. JS treats arrays just like objects (because Array has Object in its prototype chain, you know). So your question should really be why {} != {}.
And then explain it as heavy OOP legacy. His answer was the total opposite:
as if it's a normal and fairly logical thing that two empty arrays are different.
In languages like Scala, then, where numbers are also objects, assert(1 != 1, true)
should pass :laughing:
There are two strategies in teaching programming:
1) Making people adapt. Hides the bad parts to be _community-friendly_.
2) Making people learn. Includes _awareness_ of the bad parts.
The first one is good for a CTO. Bad for humanity.
So what, to your mind, should the result of this snippet be?
var a = []; var b = [];
a.push(b); b.push(a);
eq(a, b); //=> ??
In a perfect world: an exception saying Ramda detected a cross-data reference link. Clean up your data, dude.
In the real world: I really don't care much how Ramda handles this. I'm ok with a stack overflow. With operator overloading and other magical stuff coming to JS, I imagine there will be more and more "edge cases" like this. Let people shoot themselves in the foot if they want to.
I guess whether that seems useful is in the eyes of the beholder. But for anyone who is working with any mutable data, reference equality may well turn out to be important.
If you want Ramda to cover mutable data cases, then of course a lot of what I've said is inapplicable.
Maybe I've taken the "Immutability and side-effect free functions are at the heart of its design philosophy" declaration in the docs too _literally_ :smile:
P.S
I'm inspired by functions like is because I import them without namespacing.
import {is} from "ramda";
is(Number, 123);
is(Array, [123]);
...
That's much shorter than
import {isArray, isNumber, ...........................} from "lodash";
isNumber(123);
isArray([123]);
...
This is one more reason why the eq vs eqDeep distinction makes me unhappy.
That means Lodash treats Arrays like Objects and offsets like keys in this exact case. This is understandable only from the view of JS implementation.
... which, by itself, does not make this an invalid idea.
I agree with you here. At least from Ramda's point of view, where we mostly want to think of Arrays as lists, this behavior is fairly odd:
var a = {tags: ['foo', 'bar', 'baz']};
var b = {tags: ['qux', 'norf']};
delete b.tags[0];
_.merge({}, a, b); //=> {tags: [undefined, 'norf', 'baz']}
(Note especially the undefined first element. Even though '0' in b.tags; //=> false, its non-existent element still overrides the one from a.tags.)
I would find simply replacing the first array with the second and not descending into it to be a bit more sane. And I guess this is a response to @paldepind, too, then.
@ivan-kleshnin
You make it sound as though any container that holds the values 1, 2, and 3 in that order should compare equal. Is that really your position? For it seems absurd to me.
Not "any container". Any container that meant to represent plain sequential data for user eyes.
I guess I really don't know what that means. I can imagine two different implementations of a sequence, each with an add method. One pushes data to the beginning of the list, the other to the end. By your suggestion, they should be reported as equal if they both currently contain the sequence 1, 2, 3. But then they lose the notion of substitutability, which is really what equality should be all about. For -- with some appropriately dispatching add function -- add(4, a); //=> (4, 1, 2, 3) and add(4, b); //=> (1, 2, 3, 4). I don't understand this idea of super-type equality.
@ivan-kleshnin:
[regarding a comment on StackOverflow]:
I hate when people make you think there is something wrong with you, when it's definitely something wrong with their tool. He should have said something like:
Good question, pal. JS treats arrays just like objects (because Array has Object in its prototype chain, you know). So your question should really be why {} != {}. And then explain it as heavy OOP legacy. His answer was the total opposite:
as if it's a normal and fairly logical thing that two empty arrays are different.
I've read that comment, and the entire exchange on that question, over and over, and try as I might, I can't see Felix Kling's answer as condescending.
The original question was about testing two arrays for equality in a more efficient manner than through a JSON conversion. It sounds as though you would turn it into a debate on Javascript's language design choices. While such a debate could be interesting and fun, it certainly would be out of context as an answer to such a question, and is far off-topic for StackOverflow.
And as to your snide implication that it's crazy to imagine a situation where two empty arrays are considered different, while I have absolutely no data to back it up, I'd be willing to bet that in the history of programming languages, there have been more where such is possible than where it's impossible.
Maybe I've taken the "Immutability and side-effect free functions are at the heart of its design philosophy" declaration in the docs too literally
No, you can take that very literally. But you maybe missed the part that said "A practical functional library _for Javascript programmers._". I find it a bit odd that you initially told me that just doing compose in JavaScript was _freaky_. And now you want to be even more extreme and pretend that JavaScript doesn't have pointers and compare everything by value only. Don't get me wrong. Equality by value is important. But pointer equality is a part of JavaScript. And supporting equality across such
@CrossEye
I guess I really don't know what that means. I can imagine two different implementations of a sequence, each with an add method. One pushes data to the beginning of the list, the other to the end.
I think he is talking about different physical implementations of the same abstract data structures. For instance an ordered list is an abstract data structure. A physical implementation could be an array, a linked list or the kind of trees that Clojure uses. Providing value equality across such data structures is a solid feature IMO.
I would find simply replacing the first array with the second and not descending into it to be a bit more sane. And I guess this is a response to @paldepind, too, then.
I've been convinced that not considering the properties in the first array is sane. But we should surely descend into the second array and deep copy any arrays or objects it may contain. Right?
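To illustrate why the copying matters: if a hypothetical naiveMergeDeep just assigned the nested values from the second object by reference, mutations of the result would leak back into the input (naiveMergeDeep is a made-up name here, just for the example):
var b = {model: {tags: ["b"]}};
var merged = naiveMergeDeep({model: {age: 30}}, b); // hypothetical: assigns b.model.tags by reference
merged.model.tags.push("c");
b.model.tags; // => ["b", "c"] -- the input was changed through the output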
@ivan-kleshnin
I guess whether that seems useful is in the eyes of the beholder. But for anyone who is working with any mutable data, reference equality may well turn out to be important.
If you want Ramda to cover mutable data cases, then of course a lot of what I've said is inapplicable.
Maybe I've taken the "Immutability and side-effect free functions are at the heart of its design philosophy" declaration in the docs too _literally_ :smile:
Perhaps you do. Ramda is designed to help users work in a particular style. But it does not attempt to be proscriptive. While Ramda will never mutate your input data, it doesn't care if you do so, and it is meant to be flexible enough that nothing will break if you do. Ramda's own functions are all free of side-effects, but it certainly doesn't expect that all your functions will be.
Ramda wants to make a functional style easier in Javascript. But it's not trying to replace Javascript with a more functionally pure language. There are some great languages that do so, things like PureScript, ClojureScript, and Elm. But that's not Ramda's place.
So yes, we do want to cover the mutable case. People use Javascript to work with all sorts of mutable data, including DOM nodes. We certainly are not trying to write a library in which they have to create immutable wrappers for any such structures. Nor are we trying to insist that users work solely with FP techniques. We just want to be able to offer them useful FP functions for when they choose to do so.
@paldepind:
I would find simply replacing the first array with the second and not descending into it to be a bit more sane. And I guess this is a response to @paldepind, too, then.
I've been convinced that not considering the properties in the first array is sane. But we should surely descend into the second array and deep copy any arrays or objects it may contain. Right?
Probably. I guess we need to decide if any reference equality is acceptable, or if we simply need to clone in such cases.
@paldepind:
One other thing. I like your general approach, but I'm not sure how you mean to deal with this:
Sparse arrays are not handled.
Does this go away if you stop worrying about the first array? Or is it something that you still want to deal with for the second one? Obviously you _can_ detect sparse arrays: loop through the indices up to the array length and test idx in arr. But it seems a bit of overkill in general.
@ivan-kleshnin
That means Lodash treats Arrays like Objects and offsets like keys in this exact case. This is understandable only from the view of JS implementation. But is weird from all other point of views. Lodash just totally lost Array (the only ordered collection!) concept by this decision.
lodash iterates arrays as lists, not as objects, meaning it ignores non-index properties in the merge.
@CrossEye
(Note especially the undefined first element. Even though '0' in b.tags; //=> false, its non-existent element still overrides the one from a.tags.)
I would find simply replacing the first array with the second and not descending into it to be a bit more sane.
lodash treats sparse arrays as dense in its methods just as Ramda does in its forEach and others.
lodash treats sparse arrays as dense in its methods just as Ramda does in its forEach and others.
Yes, understood. lodash clearly has the most sophisticated merge around. I haven't actually dug into the implementation, but I have played with it a bit, and I'm very impressed, especially with how it handles complex cycle-detection. In the example I cited above, a = []; b = []; a.push(b); b.push(a), lodash handles it with aplomb, announcing a == b; //=> true, which seems like the best possible result.
But odd edge-cases like the merging of sparse arrays into others makes me suspect that for Ramda we might want a little less sophistication.
@CrossEye
I guess I really don't know what that means. I can imagine two different implementations of a sequence, each with an add method. One pushes data to the beginning of the list, the other to the end. By your suggestion, they should be reported as equal if they both currently contain the sequence 1, 2, 3. But then they lose the notion of substitutability, which is really what equality should be all about. For -- with some appropriately dispatching add function -- add(4, a); //=> (4, 1, 2, 3) and add(4, b); //=> (1, 2, 3, 4). I don't understand this idea of super-type equality.
That's a very keen observation. As I already mentioned, in Clojure Vectors ("binary" trees) and Lists (linked lists) are treated as equal, e.g.
(= `(1 2 3) [1 2 3]) ; true
There is a function conj which adds an element to the front of a List and to the end of a Vector.
And there are a bunch of operations in Clojure which change the input type (mostly they accept any sequence and always return a List). So basically, in Clojure a situation where
x == y
fn(x) != fn(y)
is very much possible. I'm not a Clojure expert, but I asked Clojurists how they deal with it and whether they see more benefits or drawbacks in it. Basically, they say there is just a tiny set of functions where you need to be sure about the exact datatype you're dealing with. There you do an explicit type conversion.
Most of the time, though, it's just mapping / filtering, and letting Clojure decide about the datatype is rather cool. They mostly start with a Vector, pass data through all the layers letting it be autoconverted to a List, and convert back to a Vector only where that's really required.
While I agree that sounds a bit scary, they assure me it's quite unobtrusive and becomes second nature very fast.
@paldepind
I find it a bit odd that you initially told me that just doing compose in JavaScript was freaky.
I didn't say that about compose. It was more about avoiding this in the first place, if you care.
I think he is talking about different physical implementations of the same abstract data structures. For instance an ordered list is an abstract data structure. A physical implementation could be an array, a linked list or the kind of trees that Clojure uses. Providing value equality across such data structures is a solid feature IMO.
Yes. Not only equality; there should be common operators.
Why does even the newer JS Map have no filter / map / reduce method combo?
Isn't that just bad?
Equality by value is important. But pointer equality is a part of JavaScript.
Ok, let me be more precise. What I can't get over is not that you have two comparison functions.
It's more about why not make eq (deep) and shallowEq
instead of eq (shallow) and eqDeep?
You make equality a real equality and provide shallowXxx for some possible cases (though I can't imagine any other than performance micro-optimizations). You can argue that, in this way, we're hiding JS's inner mechanics, but I can counter-argue that there are already enough good libs built upon JS's limitations and not-so-good design choices. I would be happier with a lib that gently overcomes them.
@CrossEye
Ramda wants to make a functional style easier in Javascript. But it's not trying to replace Javascript with a more functionally pure language. There are some great languages that do so, things like PureScript, [ClojureScript][cs], and Elm. But that's not Ramda's place.
Fair deal. I mostly want to define the borders Ramda won't cross in our discussion.
Not to push it in any direction. And the best way to make people talk is to make them angry :wink:
@ivan-kleshnin
Ok, let me be more precise. What I can't get over is not that you have two comparison functions.
It's more about why not make eq (deep) and shallowEq instead of eq (shallow) and eqDeep?
It's an interesting suggestion. I can think of one major reason not to do this: right now, add is a functional equivalent of the operator '+'; gt, of '>'; and, of '&&'; and similarly for subtract, multiply, divide, or, not, lt, modulo; so too, for now, is eq a functional equivalent of '==='. I find it likely that more users would expect this similar behavior from eq than would expect a value-equality function.
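That is, roughly (illustrative, using the current names):
var xs = [1, 2];
eq(xs, xs);             // => true,  same reference, like xs === xs
eq([1, 2], [1, 2]);     // => false, like [1, 2] === [1, 2]
eqDeep([1, 2], [1, 2]); // => true,  value equality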
I didn't say that about compose. It was more about avoiding this in the first place, if you care.
Ok. But you expressed it right after I showed a simple example with compose.
Yes. Not only an equality. There should be common operators.
Of course. Many programming languages have this.
Ok, let me be more precise. What I can't get over is not that you have two comparison functions.
It's more about why not make eq (deep) and shallowEq
instead of eq (shallow) and eqDeep?
Because there are two different types of equality in JavaScript. Pointer equality and value equality. And I agree with CrossEye that the current behavior is what people expect since it maps to the === operator.
@CrossEye
Probably. I guess we need to decide if any reference equality is acceptable, or if we simply need to clone in such cases.
I've changed my mind. If we're simply overriding arrays in the first object with arrays in the second object then just copying them over seems reasonable. In that case handling sparse arrays is trivial.
@paldepind
In that case handling sparse arrays is trivial.
If they're handled in one place, they should probably be handled in others.
If you handle them in one place, you should probably handle them in others.
A function that doesn't touch arrays but merely assigns them somewhere else obviously handles sparse arrays. R.always handles sparse arrays as well. But that's very different from functions that actually do something with the arrays.
But that's very different from functions that actually do something with the arrays.
I think merging is doing something with the arrays. Something like handling sparse arrays is easy to document and handle in a blanket way. By saying there are essentially no sparse arrays (treating them as dense) it can avoid the baggage with them and punt on them in forEach, slice, filter, contains, etc.
I think merging is doing something with the arrays.
Did you read the foregoing discussion? This is not the same as _.merge. The merge behavior mentioned is exactly _not_ touching the arrays. Unless you consider assigning an array to a property to be doing something with an array.
Ok. But you expressed it right after I showed a simple example with compose.
Yes. Because of the cumulative effect and chat limitations.
It's an interesting suggestion. I can think of one major reason not to do this: right now, add is a functional equivalent of the operator '+'; gt, of '>'; and, of '&&'; and similarly for subtract, multiply, divide, or, not, lt, modulo; so too, for now, is eq a functional equivalent of '==='. I find it likely that more users would expect this similar behavior from eq than would expect a value-equality function.
Ok, this is a strong counter-argument. I believe that correspondence deserves to be mentioned in the docs.
Because there are two different types of equality in JavaScript. Pointer equality and value equality. And I agree with CrossEye that the current behavior is what people expect since it maps to the === operator.
And (at least) four are possible. Imagine two clocks, one counting clockwise and the other counterclockwise. There are 4 possible and valid questions about the clocks which may be called "equality questions".
The first question is at the highest abstraction level: you just need to look at the numbers. The second question is at a lower level, and so on.
Which abstraction(s) should a programming language rely on? From practice we know that higher-level operators give us more productivity. In Clojure everything is fairly nice:
1. Do they show the same time right now? (==)
2. Do they share the same behavior? (values are immutable, so questions 2 and 3 are the same)
3. Do they have the same manufacturer and model? (type)
4. Are they actually one physical clock? (fallback to Java ops)
JS fails miserably to model this situation:
1. Do they show the same time right now? (nothing like that)
2. Do they share the same behavior? (== vs ===... not really)
3. Do they have the same manufacturer and model? (typeof vs instanceof... not really)
4. Are they actually one physical clock? (nothing like that)
@ivan-kleshnin
That analogy seems very arbitrary. There's a thousand ways two clocks can be "equal" (do they have the same color?). In a language that's not purely functional pointer equality is a thing. In a purely functional language only value equality makes sense. Your example only obscures that.
@paldepind
Did you read the foregoing discussion?
I've seen discussion over the behavior of assigning values of sparse arrays. What I'm saying is that if Ramda respects the holes in sparse arrays for merges then it should probably do the same for other methods like contains. For example, R.contains(undefined, Array(1)) currently returns true. If R.contains says there's an undefined value in the array, why wouldn't it be merged?
There's also baggage in older environments to consider. For example, IE < 9 will treat undefined values in array literals as holes (e.g. 0 in [undefined] is false) which isn't the case in others.
That analogy seems very arbitrary. There's a thousand ways two clocks can be "equal" (do they have the same color?).
I don't think so. They are not just _questions_ but types of questions. You may change the exact sentences while staying within the same structure.
"Color" is either an implicit characteristic if values are truly immutable (same as questions 2 and 3) or an explicit one if values are mutable (same as question 2).
In a purely functional language only value equality makes sense.
I demonstrated that at least four types of valid questions can be applied to any language.
It's just that JS fails to follow this.
Take Python. Everything fits again, just the highest level is absent:
1. Do they show the same time right now? (nothing like that)
2. Do they share the same behavior? (==)
3. Do they have the same manufacturer and model? (isinstance, type)
4. Are they actually one physical clock? (id)
I demonstrated that at least four types of valid questions can be applied to _any language_.
Just because you can ask the questions doesn't mean they make any sense to ask in any given language.
1/ That is value equality. That probably applies to all languages, yes. 2/ Values don't have behavior and == certainly doesn't test for behavior. Duck typing only applies to objects, so it does not make sense in any language. 3/ Not all languages have a concept of type. 4/ In a purely functional language you don't have pointer equality, and thus your 4th type doesn't apply.
@jdalton:
Did you read the foregoing discussion?
I've seen discussion over the behavior of assigning values of sparse arrays. What I'm saying is that if Ramda respects the holes in sparse arrays for merges then it should probably do the same for others methods...
What @paldepind probably meant was, "Did you notice that we were discussing not merging arrays, sparse or not, in this process?" The suggestion was to simply overwrite them with the new one.
Ramda's (totally unenforceable) point of view is mostly that we work with _lists_. Since there is no actual list type in JS, we lean on dense arrays. While we don't actually test to see that an array you pass us is dense, all bets are off if it's not. You've pried off the cover and voided the warranty. In this case, a property of an object to be merged, there's less of an issue, especially now that we seem to be leaning toward not iterating its properties.
1/ That is value equality. That probably applies to all languages, yes.
That is data equality in the sense of a business look at the data.
2/ Values don't have behavior and == certainly doesn't test for behavior.
Properties are at the same logical level as Behaviors.
Which is more fundamental is an open question.
3/ Not all languages have a concept of type.
And what does that change?
Just because you can ask the questions doesn't mean they make any sense to ask in any given language. 1/ That is value equality. That probably applies to all languages, yes. 2/ Values don't have behavior and == certainly doesn't test for behavior. Duck typing only applies to objects, so it does not make sense in any language. 3/ Not all languages have a concept of type. 4/ In a purely functional language you don't have pointer equality, and thus your 4th type doesn't apply.
Ok, I surrender. You're trying to look at reality inside-out, through programming language experience. I proposed to do the opposite: to see the language outside-in, i.e. _from_ reality and real-world abstractions down to our PL primitives. It would probably take too many words.
@CrossEye
What @paldepind probably meant was, "Did you notice that we were discussing not merging arrays, sparse or not, in this process?" The suggestion was to simply overwrite them with the new one.
Ah, I missed that in the walls of text.
Ramda's (totally unenforceable) point of view is mostly that we work with lists. Since there is no actual list type in JS, we lean on dense arrays. While we don't actually test to see that an array you pass us is dense, all bets are off if it's not.
I've found that sparse array issues pop up at a higher rate than object iteration order issues. I dunno, maybe that will help weigh it in terms of something Ramda is concerned with. If you were to tackle them you can look towards ES6, which is treating them as dense too, e.g. 0 in Array.from(Array(1)) is true.
In this case, a property of an object to be merged, there's less of an issue, especially now that we seem to be leaning toward not iterating its properties.
If assignment is the goal of the method maybe naming it something closer to the lang, like assignDeep, is a better route :bike: :house:.
If assignment is the goal of the method maybe naming it something closer to the lang, like assignDeep, is a better route :bike: :house:.
Assign is somewhat misplaced in a functional library, I think. It merges properties from two objects into one. I think the name makes sense.
@jdalton:
I've found that sparse array issues pop up at a higher rate than object iteration order issues.
I'm guessing you're speaking mostly in the context of lodash here, correct? If so, then I think Ramda has an advantage in that it's not trying to be as general-purpose as lodash is. If a use-case strays too far from our expected ones, we feel more comfortable simply saying, "Then don't do that." We've simply not found any reason to work with sparse arrays, which do not map well to lists. In the context of something like merge, it's a little different, since we're not working with the object _as a list_ but only as a collection of properties. (It doesn't fit into the standard list-processing model of, "Process the first, then process the rest.") In this case, we _could_ decide to do something different. But I haven't seen anything convincing. lodash's version is extremely impressive overall, but the handling of sparse arrays here still feels off. And there's nothing else close. That's why I'm more than happy to punt on it. If @ivan-kleshnin's reports above are accurate, many other languages also choose this route.
If you were to tackle them you can look towards ES6, which is treating them as dense too, e.g.
0 in Array.from(Array(1)) is true.
I think that's a bit of an exaggeration. It's simply that Array.from is based on iterators. There is nothing that removes sparse arrays or automatically assigns the indices to them. Iterators will act as though there are undefined values there, just as an ES3/5 for loop that simply addresses arr[idx] for unassigned indices will act as though there are undefined values there.
In general, I feel no reason to consider tackling them. I don't know if others feel any differently.
If assignment is the goal of the method maybe naming it something closer to the lang, like assignDeep, is a better route :bike: :house:.
Assign is somewhat misplaced in a functional library, I think. It merges properties from two objects into one. I think the name makes sense.
And I really wish we'd just stuck with mixin back when... Anyone for extend?
For the sake of completeness, I will mention another Lodash design flaw that tripped me up today.
It's related to what was discussed, so I'll just leave it here for history.
Lodash does not see the difference between a missing key and a value of undefined, and does not treat undefined the same as other values.
import Ramda from "ramda";
import Lodash from "lodash";

console.log(
  Ramda.merge({}, {name: undefined}),            // {name: undefined} -- expected
  Ramda.merge({name: "foo"}, {name: undefined})  // {name: undefined} -- expected
);
console.log(
  Lodash.merge({}, {name: undefined}),           // {} -- why?!
  Lodash.merge({name: "foo"}, {name: undefined}) // {name: "foo"} -- why?!
);
Again, we can speculate about _design decisions_ and _opinionated, does not mean broken_ but for me it's getting clear that Lodash does not follow established conventions about Arrays and Objects which means my personal farewell to this lib.
Lodash.merge({}, {name: undefined}) // {} -- why?!
Lodash.merge({name: "foo"}, {name: undefined}) // {name: "foo"} -- why?!
By default lodash skips undefined values of objects, much like a defaults function would.
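i.e. roughly the same rule that _.defaults uses (illustrative):
_.defaults({name: undefined}, {name: "foo"});
// => {name: "foo"} -- the undefined slot is treated as "missing" and filled in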
Lodash does not see the difference between a missing key and a value of undefined, and does not treat undefined the same as other values.
File a bug if you think it's an issue. Lodash treats arrays and objects differently and with a stance of treating arrays as dense and producing dense arrays.
Lodash does not follow established conventions about Arrays and Objects which means my personal farewell to this lib.
You could always customize the merge behavior to taste with a customizer callback.
You could always customize the merge behavior to taste with a customizer callback
Not in this case :disappointed: If you return undefined from the callback, the default behavior occurs:
import Lodash from "lodash";
Lodash.merge({name: "xxx"}, {name: undefined}, function(a, b) {
return b;
}); // {name: "xxx"}
This aspect is done right in deep-merge by @Raynos, though its default behavior is the same...
import DeepMerge from "deep-merge";

let merge = DeepMerge(function mergeStrategy(a, b, key) {
  return b;
});
merge({name: "xxx"}, {name: undefined}); // {name: undefined}
Not in this case :disappointed: If you return undefined from the callback, the default behavior occurs
Ah, yap :P
Btw, patched 882d84f1. So
_.merge([1], [undefined]);
// => [1]
Besides aligning with defaults and deep-merge, the merge behavior of undefined aligns with jQuery:
$.extend(true, [1], [undefined])
// => [1]
$.extend(true, { 'a': 1 }, { 'a': undefined })
// => {a: 1}
This aspect is done right in deep-merge, though its default behavior is the same.
I like having the option of being able to defer to the default behavior in a customizer via undefined. It also allows working with cyclic structures too. There's a balance for sure.
I'm satisfied with lodash's merge behavior. It goes above and beyond in terms of environment support and trying to accommodate developers varying opinions.
@ivan-kleshnin:
While this is a fine place to describe differences between Ramda and lodash (or any other library), it's probably an inappropriate place to take on a sustained critique of any library except for Ramda.
I actually have quite mixed feelings about the behavior in this specific case, and I think it's because Javascript itself is fairly schizophrenic about the meaning of undefined. While it is a value (otherwise the language would have to throw an error on accessing it), it is specifically the value that declares that "no one has defined a value here"; it's more severe than null, which says "hey, there's nothing here _yet_". So I'm not at all certain that there should be any real difference in the handling of {foo: 1} and {foo: 1, bar: undefined}.
Several people in a FantasyLand discussion may have convinced me of the need to deal with any value, however obscure, as a value in its own right, but something still nags at me in this case.
So I'm not at all certain that there should be any real difference in the handling of
{foo: 1} and {foo: 1, bar: undefined}.
But there is a very real difference:
'bar' in {foo: 1}
> false
'bar' in {foo: 1, bar: undefined}
> true
Nice example, @paldepind! Two objects must have the same keys in order to be considered equal.
So I'm not at all certain that there should be any real difference in the handling of {foo: 1} and {foo: 1, bar: undefined}.
But there is a very real difference:
I recognize that they're distinguishable. The point was whether we decide to distinguish them. As I said, some discussion (that I'm too lazy to track down) on FantasyLand demonstrated that there are times when we cannot avoid choosing to use undefined and/or null as values no different from any other. But especially for undefined, I think there are also good arguments against it at least in some cases.
Objects in JavaScript serve two roles: they are sometimes used as faux dictionaries and other times are used as faux records.
When an object is used as a record, my mental model is as follows:
I'm not arguing that this is the _right_ way to think about record-like objects. :)
The point was whether we decide to distinguish them.
How do you think I discovered those issues in Lodash? By inspecting their code with a microscope, or by facing real trouble in my business-logic layer :smile:? I'm not a language purist; it's just that Lodash's non-standard behaviour causes a lot of practical problems. And I bring it up as proof that the smart people who gradually created best practices in other languages were right all along.
For example, my case was resetting filters. If you treat undefined values as missing keys, you just can't reset the filters by setting a key's value to undefined via merging. This means you need to create another logic branch especially for reset. Pain.
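Concretely, the kind of thing I mean (a simplified illustration, not my real code):
let filters = {category: "books", minPrice: 10};
let reset = {minPrice: undefined}; // intent: "clear this filter"
Ramda.merge(filters, reset);      // => {category: "books", minPrice: undefined} -- filter cleared
Lodash.merge({}, filters, reset); // => {category: "books", minPrice: 10} -- the reset is silently ignored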
I'm not arguing that this is the right way to think about record-like objects. :)
I would like to point out that, in a way, this is also how the JavaScript VM thinks of objects. If a group of objects has the same keys defined in the same order they will share the same hidden class. Basically, if you use objects as records the optimizer will turn them into something a bit like a C struct. Again, here it is the keys and the order of the keys that are most significant.
There seem to be three defensible possible outputs for this:
merge({foo: 1, bar: 2}, {bar: undefined});
1. {foo: 1, bar: 2}, the current lodash result
2. {foo: 1, bar: undefined}, the current Ramda result
3. {foo: 1}, not seen in the wild but certainly arguable
None seems to me to have an overwhelming advantage. While I do respect David's understanding of objects as records, Ramda has no particular reason to choose that interpretation of the objects passed to it over a dictionary one.
I'm not really suggesting any changes here. I like what Simon has done with set and don't see any real reason to revisit these things. But let's not pretend that we've made the single obvious choice in these matters.
I don't get it. If an object has a key that is undefined then that undefined has _deliberately_ been put there. The programmer has actively taken the decision to set it to undefined. If you don't want a key, you delete it.
How can merge({foo: 1, bar: 2}, {bar: undefined}) turn into {foo: 1}? I have a named key that I've deliberately set to undefined and the library just throws it away? By that logic it seems like this would be expected behavior as well:
R.keys({foo: 1, bar: undefined}); //=> ['foo']
And then you're clearly disagreeing with JavaScript.
And then you're clearly disagreeing with JavaScript.
You say that like it's a bad thing! I think what @CrossEye is driving at is, what does this _mean_: {foo: undefined} and apart from the for .. in test, how is that different from {}? It's not at all clear that the _meaning_ is different.
You say that like it's a bad thing! I think what @CrossEye is driving at is, what does this mean: {foo: undefined} and apart from the for .. in test, how is that different from {}? It's not at all clear that the meaning is different.
Guys, you're scaring the hell out of me. If such pro guys say such nonsense so easily...
It means the future of JS is even darker than I feared.
By treating undefined values in a _special way_ you do something very similar to what Brendan Eich did with null addition: you create entirely new edge cases, unnecessary complexity out of thin air. I'm pretty shocked that someone does not see the difference between missing keys and guard values, or has never met cases where it ruined a data flow... I'm really scared.
:ghost:
@ivan-kleshnin for the record, I am _not_ advocating this position. I am simply suggesting -- since @CrossEye enumerated three candidate solutions and @paldepind objected to one of them -- simply suggesting that _one could start arguing from there_. See the difference?
See the difference?
Yes. I don't want to start a new rant.
closed by #1088
The point is that a user may not have put the value there intentionally. This is what the language does, and what therefore Ramda does too in some functions, to deal with cases of "Well, I don't have the thing you're looking for, but I need to put a value here."
It is that sense of the language that makes undefined trickier than it might seem.
var overrides = {foo: 1, baar: 2, baz: 3, corge: 4};
var important = ['foo', 'bar', 'baz'];
var current = {foo: 10, bar: 20, qux: 30};
var final = R.merge(current, R.pickAll(important, overrides));
Here final in Ramda is {foo: 1, bar: undefined, baz: 3, qux: 30}. In lodash, it would presumably be {foo: 1, bar: 20, baz: 3, qux: 30}. And here it's arguable that such would be the preferred choice. I wanted the values in current, overridden by those specific values of overrides listed by name in important. The whole undefined thing becomes in this scenario simply an unfortunate accident. The delete-on-undefined possibility is certainly less likely, but is clearly supportable, as it is related to the design of the language.
Of course if there was no typo in overrides this would be {foo: 1, bar: 2, baz: 3, qux: 30}.
Still, this is all academic. We have to make a choice, and I think we've made ours for Ramda. I'm not unhappy with it. I just don't like pretending that it was a clear, unambiguous choice.
@CrossEye I actually think that the current result from Ramda is the desired one.
The point is that a user may not have put the value there intentionally.
Using R.pickAll instead of R.pick is in my opinion intentionally saying "I want keys with undefined values to appear". Getting undefined keys is the whole point of R.pickAll! With the other merge behavior this:
var final = R.merge(current, R.pick(important, overrides));
Would be exactly the same as this (pick replaced with pickAll):
var final = R.merge(current, R.pickAll(important, overrides));
Which seems bad because merge does not respect the user's choice of using pickAll, and then there's suddenly no way to get the original behavior.
My mental model of merge is something like "create a new object, copy keys from the first object to the new object and then copy keys from the second object to the new object". Nothing in that mental model considers the actual values at the keys. If you write a merge that does consider the values then you're adding a special case and thus undeniably creating a more complex merge. This would IMO also be a harder-to-understand merge.
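To make that mental model concrete, here is a minimal sketch (my own illustration, not Ramda's actual source) of a merge that copies keys without ever inspecting the values, next to the special-cased variant:
// Value-blind merge: copy keys from both objects, never look at the values.
var simpleMerge = function(a, b) {
  var result = {};
  Object.keys(a).forEach(function(k) { result[k] = a[k]; });
  Object.keys(b).forEach(function(k) { result[k] = b[k]; });
  return result;
};
simpleMerge({foo: 1, bar: 2}, {bar: undefined}); //=> {foo: 1, bar: undefined}
// The value-inspecting variant needs an extra special case per key:
//   if (b[k] !== undefined) { result[k] = b[k]; }
// which is exactly the added complexity described above.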
The point is that a user may not have put the value there intentionally
If you could show an example of that I might actually be convinced. But I'd still say that undefined keys should be respected because the user has intentionally written {foo: undefined} instead of {}, or written obj.foo = undefined instead of delete obj.foo or used pickAll instead of pick.
I understand that this is theoretical and not a discussion about how Ramda should actually implement things. I think this is an interesting discussion but I would also understand if you do not wish to discuss this further.
The point is that a user may not have put the value there intentionally.
Using R.pickAll instead of R.pick is in my opinion intentionally saying "I want keys with undefined values to appear". Getting undefined keys is the whole point of R.pickAll!
Honestly, I wrote the damn function over again, before realizing that I was duplicating a built-in Ramda function. I coded something like this:
var choose = function(names, obj) {
  return R.fromPairs(R.zip(names, R.map(R.prop(R.__, obj), names)));
};
and started writing up my previous message with this in place when I realized that it was equivalent to R.pickAll. I didn't think about the fact that the actual function was intended to specifically deal with undefined. But the point is that one can get undefined from the language in all sorts of ways.
I agree that this is an interesting discussion, even if it doesn't affect how we proceed with Ramda for now. I think the trouble is that how JavaScript works, and what its constructs mean here, simply don't play well with standard practice from the FP world. A library trying to bridge these two worlds has to occasionally deal with such issues.
Ok. Then your example makes more sense. But I could still argue that the choose function is explicitly written in a way that makes these undefined keys appear and thus they must have meaning. And if those undefined keys were not intended then the problem is in the definition of choose.
Here is an attempt at a counter example. Consider data like this (from code I wrote earlier today):
var model = {selected: undefined, items: [/* some items */]};
Here model.selected is an index into model.items when an element is selected and undefined when an element is not selected. Clearly the value undefined here has as much meaning as if model.selected had been a number. So if a library in some cases treated the undefined as not being an actual value and threw it away in favor of something else then the correctness of my data would not be preserved.
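As a concrete illustration of that correctness concern (using the current R.merge behavior discussed above; the "drops undefined" variant is hypothetical):
var stale = {selected: 1, items: ['a', 'b']};
var model = {selected: undefined, items: ['a', 'b']}; // nothing is selected now
// Current Ramda behavior: the later object wins, even when the value is undefined.
R.merge(stale, model); //=> {selected: undefined, items: ['a', 'b']}
// A merge that discarded undefined values would instead resurrect the stale
// selection: {selected: 1, items: ['a', 'b']}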
Here model.selected is an index into model.items when an element is selected and undefined when an element is not selected.
To me, that smacks of abuse of the language. null would be appropriate here. For without some obnoxious gymnastics (if ('selected' in model && ...) ) there is no way to distinguish the case where the user intentionally put an undefined value on the model and one where the model was accidentally left incomplete, or where it had a typo:
var model = {selceted: 4, items: [/* some items */]}; // TYPO
model.selected; //=> `undefined`
So I guess that's part of the problem. I feel that using undefined for anything other than "hey, no one ever told me about such a property" is abusing the semantics of the language. But I also feel that from the FP side, ever treating anything that can be assigned as a value as uniquely privileged is extremely problematic. So I end up in a quandary.
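For what it's worth, the only check I know of that tells those cases apart is the in operator (or hasOwnProperty), which is exactly the sort of gymnastics mentioned above:
var intended = {selected: undefined, items: []};
var typo     = {selceted: 4, items: []}; // note the typo in the key
intended.selected;      //=> undefined
typo.selected;          //=> undefined -- indistinguishable by value alone
'selected' in intended; //=> true
'selected' in typo;     //=> false -- only the `in` check reveals the typo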
Ok. That's probably a big part of the reason why we have different opinions on this. I never use null and I always use undefined in cases like described above. I don't know if that's abusing the semantics of the language. I've also programmed in JS that way and I've never heard that it should be "bad".
I never use null and I always use undefined in cases like described above.
Me too. null should be removed from the language (by making it strictly equal to undefined and gradually deprecating it). If I'm not wrong, the only place where you can meet null besides user libraries is the DOM. Native JS never returns it. And the DOM will be replaced with a VDOM, as we all hope. So the opportunity is real.
I don't know if that's abusing the semantics of the language. I've also programmed in JS that way and I've never heard that it should be "bad".
I believe that putting some semantics into the weird null vs undefined difference is the real abuse.
I guess my main reason for not using undefined as a value this way is that it does not allow me to distinguish between accidentally skipping a property (or having a typo) and intentionally using the undefined value for it, except with the very awkward prop in object, which I rarely see people use.
So it would not exactly be abuse in my mind if the user does choose to use in this way, but it also seems odd to me when the language has another perfectly good value in null which doesn't share this problem.
I believe that putting some semantics into the weird null vs undefined difference is the real abuse.
ISTM that the language already does so. This would simply be a matter of using the language the way it was designed. I'm not claiming that this was one of the best-designed parts of JS, but it is what we have; I've seen no great reason to work against the grain on it.
So it would not exactly be abuse in my mind if the user does choose to use in this way, but it also seems odd to me when the language has another perfectly good value in null which doesn't share this problem.
It seems to me like the design of Ramda itself disagrees with you.
R.head([]); // => `undefined`
R.find(R.lt(20), [1,2,3]) // => `undefined`
R.nth(2, ['only me']); // => `undefined`
If you assign the returned value of any of these functions to an object you have the problem you're talking about.
It seems to me like the design of Ramda itself disagrees with you.
R.head([]); // => `undefined`
R.find(R.lt(20), [1,2,3]); // => `undefined`
R.nth(2, ['only me']); // => `undefined`
If you assign the returned value of any of these functions to an object you have the problem you're talking about.
Interesting. I find that at least the first and third examples absolutely support what I'm talking about. The second one is more questionable, as Ramda has more of a choice in deciding what to report for the result of a find when the value is not there, but undefined still seems quite reasonable. The point is "_if you assign_" this to an object. Ramda does not provide that level of error-checking for you. A library like Sanctuary is a great companion for things like this.
The problem is when you make the request, "get me such-and-such", a language will return the value to you, unless such-and-such doesn't exist. Then the language can do one of several things. It can generate an exception. (PHP does this, I believe.) Or it can return a token to say "hey, that wasn't here." In LISP that token also serves as the empty list, which is incredibly convenient. In some languages, there is a single (e.g. null) token, which is then often mostly useless as a value in its own right, but which can be assigned to a reference. Javascript has two, with somewhat different meanings. null means that the reference you were looking for (say obj.foo) has been defined but not assigned to. undefined means that the reference you were looking for has not been defined. (obj has no foo property.)
Obviously one can choose to ignore this distinction and treat null and undefined as essentially identical. Or one can go the Sanctuary route and add error-checking so that no null/undefined references propagate. Ramda has mostly chosen to stick closer to the core of what Javascript does. I don't feel that this is set in stone, but it's been long-standing in the library and it would take some serious persuasion to change it, I believe.
Javascript has two, with somewhat different meanings. null means that the reference you were looking for (say obj.foo) has been defined but not assigned to. undefined means that the reference you were looking for has not been defined. (obj has no foo property.)
At this point I'd better unsubscribe.
At this point I'd better unsubscribe.
Up to you, of course. I know you're not happy with Ramda living with some of the quirks of the language. But that's how it's always worked. It is a Javascript library, and not an attempt to build Clojure, Haskell, Erlang, or some other more functional language inside JS.
null means that the reference you were looking for (say obj.foo) has been defined but not assigned to. undefined means that the reference you were looking for has not been defined. (obj has no foo property.)
I have not heard of that distinction before. But what if I put a number in obj.foo? Then there is no reference since it's a primitive. If you treat null like a null pointer then you can only use it in places where a pointer, i.e. an object, is expected.
My point with my last post was that pretty much everything returns undefined as a value meaning "nothing to see here" (and then there are the odd cases of -1 and null). Thus, if I wanted to use null in objects to mean "nothing to see here" I'd have to do a lot of converting myself.
Considering my example again:
var model = {selected: undefined, items: [/* some items */]};
If I were doing what you suggest and used null instead of undefined to represent "nothing is selected", then I, for instance, couldn't do this
model.selected = R.head(model.items); // select first item if any
I'd instead have to do this
model.selected = R.head(model.items) === undefined ? null : R.head(model.items);
That's really annoying, and I'd have to write even more if I wanted to avoid two invocations of R.head.
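(A shorter form, assuming R.defaultTo is available and falls back when the value is null or undefined, would still be a hand-written undefined-to-null conversion:)
model.selected = R.defaultTo(null, R.head(model.items)); // null when the list is empty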
FWIW (I did not come close to reading this whole thread) I routinely use null (and see it used) and avoid undefined, knowing that JavaScript by convention reserves undefined to represent nothingness at the language level, whereas APIs by convention represent nothingness with null.
I have not heard of that distinction before
@paldepind I've recently found some background to support @CrossEye's opinion.
JSON.stringify({x: null}); // '{"x":null}'
JSON.stringify({x: undefined}); // '{}'
This is very dangerous and wonky behavior of the language, but that's how it is...
My opinion about merge didn't change, though.
FWIW (I did not come close to reading this whole thread) I routinely use null (and see it used) and avoid undefined, knowing that JavaScript by convention reserves undefined to represent nothingness at the language level, whereas APIs by convention represent nothingness with null.
As you probably know, undefined was meant to represent nothingness for values (4, "44"...) while null was meant to represent nothingness for objects (Object, Array). Probably one of the worst ideas ever, given the fact that values and objects are so weakly separated in JS.
Speaking of APIs, I've met both null and undefined used there, with a frequency of about 60% for null and 40% for undefined...
@ivan-kleshnin
This is very dangerous and wonky behavior of the language, but that's how it is...
I think it has more to do with the fact that JSON has null but not undefined.
@paldepind they could convert undefined to null like they do with NaN.
JSON.stringify({x: NaN}); // '{"x":null}'
So undefined does not behave like any other value in JS, including null.
I've recently found some background to support @CrossEye's opinion.
JSON.stringify({x: null}); // '{"x":null}'
JSON.stringify({x: undefined}); // '{}'
This is very dangerous and wonky behavior of the language, but that's how it is...
While this does offer some minor support to the position I've taken, the main reason for this is that JSON was explicitly designed to be extremely easy to port to almost any language. Most languages have some notion of null (regardless of whether it's now seen as the billion-dollar mistake) and Javascript's null was the clear map for that notion. There's simply no portable idea for undefined.
The idea behind undefined can be expressed as "it's possible to use undefined variables, their value is... undefined". Most languages don't allow that.
In JS an undefined value exists and does not exist at the same time. A Schrödinger's cat.
> Object.keys({}).length == Object.keys({x: undefined}).length // exists
false
> JSON.stringify({}) == JSON.stringify({x: undefined}) // does not
true
null is a "billion dollar mistake", but undefined is something beyond that.
Most languages don't allow that.
For example:
❯ ghci
GHCi, version 7.10.2: http://www.haskell.org/ghc/ :? for help
Prelude> [1,2,3,undefined]
[1,2,3,*** Exception: Prelude.undefined
The idea behind undefined can be expressed as "it's possible to use undefined variables, their value is... undefined"
Maybe it amounts to the same thing, but I would express it very differently: "Rather than throw when you try to access a value that hasn't been set, the language returns the signal value undefined." There are two somewhat palatable alternatives to this. First, the language could offer an operator like PHP's isset and then require anyone accessing a possibly undefined value to check this or accept the risk of a thrown exception. Second, the language could do something akin to what Sanctuary does, creating a wrapper around a potentially missing value, returning the appropriate wrapped or signal (Nothing) value for further processing. Neither of these alternatives thrills me. The only other widespread possibility is the single null value that causes such a problem in Java and C#. Maybe it's better, but I'm certainly not convinced.
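A very rough sketch of that second alternative (illustrative only; this is not Sanctuary's actual API, and the names Just, Nothing, and safeProp here are just for the example):
// Wrap a property access so that "missing" is an explicit value rather than undefined.
var Nothing = {isNothing: true};
var Just = function(x) { return {isNothing: false, value: x}; };
var safeProp = function(key, obj) {
  return key in obj ? Just(obj[key]) : Nothing;
};
safeProp('foo', {foo: undefined}); //=> Just(undefined) -- the key is present
safeProp('foo', {});               //=> Nothing         -- the key is not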
Most languages don't allow that.
For example:
[ ... Haskell code ...]
There is, however, a big difference between how Haskell and JS _use_ their undefined value. The one in JS is the standard means in the language of simply reporting, "Hey, that thing you're looking for? It ain't there!" Haskell generally uses Nothing for that.
The one in JS is the standard means in the language of simply reporting, "Hey, that thing you're looking for? It ain't there!" Haskell generally uses Nothing for that.
@CrossEye I think I disagree with your definition.
undefined is a language-level concept of non-existence, e.g.:
> typeof a
'undefined'
> ({}).a
undefined
> ((a) => console.log(a))()
undefined
> ((a = 1) => console.log(a))()
1
> ((a = 1) => console.log(a))(undefined)
1
Whereas null is a user-level concept of non-existence:
> ((a = 1) => console.log(a))(null) // Default **does not apply**
null
null in JavaScript is Nothing in Haskell, and so on.
Most helpful comment
... which, by itself, does not make this an invalid idea.
I agree with you here. At least from Ramda's point of view, where we mostly want to think of Arrays as lists, this behavior is fairly odd:
(Note especially the undefined first element. Even though '0' in b.tags; //=> false, its non-existent element still overrides the one from a.tags.) I would find simply replacing the first array with the second and not descending into it to be a bit more sane. And I guess this is a response to @paldepind, too, then.
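A rough sketch of that "replace arrays wholesale, recurse only into plain objects" idea (my own illustration under those assumptions, not a proposed Ramda implementation):
// Recurse into plain objects; anything else (arrays included) is simply
// replaced by the value from the second argument.
var isPlainObject = function(x) {
  return Object.prototype.toString.call(x) === '[object Object]';
};
var mergeDeepReplacingArrays = function(a, b) {
  var result = {};
  Object.keys(a).forEach(function(k) { result[k] = a[k]; });
  Object.keys(b).forEach(function(k) {
    result[k] = isPlainObject(a[k]) && isPlainObject(b[k])
      ? mergeDeepReplacingArrays(a[k], b[k])
      : b[k];
  });
  return result;
};
mergeDeepReplacingArrays(
  {a: {x: 1, list: [1, 2]}},
  {a: {list: [3]}}
); //=> {a: {x: 1, list: [3]}} -- the array is replaced, not merged element-wise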