In Clojure one can use update-in to update a value inside a deeply nested data structure.
So I was wondering whether Ramda could include something like it? (Or does it already?)
For example, it would make it easy to merge new properties into a deeply nested object in an immutable fashion:
var coll = {foo: {bar: {a: 1}}};
R.updatePath(["foo", "bar"], R.merge({b: 2}), coll);
// -> {foo: {bar: {a: 1, b: 2}}}
Will assocPath do?
Nope. AFAIK it can only be used to set a value. You have to type the path twice to achieve the same result:
R.assocPath(["foo", "bar"], R.merge(coll.foo.bar, {b: 2}), coll)
Or just R.assocPath(["foo", "bar"], 2, coll)
Well, that's the solution for the specific example case. There are quite a lot more use cases for this.
Ah, you mean like ~~update~~ adjust but for nested property paths?
this should work:
R.assocPath(['foo', 'bar', 'b'], 2, coll); //=> {foo: {bar: {a: 1, b: 2}}}
Please read the Clojure documentation I've linked. Also the icepick module implements this:
https://github.com/aearly/icepick#updateincollection-path-callback
So in other words it's just like R.assocPath, but instead of a value you pass in a function; the existing value (if any) is passed to that function, and its return value is set as the new value.
@TheLudd Oh, if you mean adjust then yes!
I did, correction made.
evolve is close, although it uses similar structures rather than separated paths
R.evolve({foo: {bar: R.merge({b: 2})}}, coll);
//=> {foo: {bar: {a: 1, b: 2}}}
This makes me think... for each prop function there could be an analogous path function and an index function. We have adjust but not adjustProp or adjustPath right? Is there some way to solve this so that there would be no gaps between indexes, paths and props?
The implementation would be really simple in terms of assocPath and path:
var updatePath = R.curry(function updatePath(path, transform, coll) {
return R.assocPath(path, transform(R.path(path, coll)), coll);
});
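For readers without Ramda at hand, here is a dependency-free sketch of the same idea (illustrative only; the recursive spread-based copy and variable names are my own, not Ramda internals):

```javascript
// Illustrative, dependency-free sketch of the proposed updatePath.
// Immutably applies `transform` to the value found at `path` in `coll`.
function updatePath(path, transform, coll) {
  if (path.length === 0) return transform(coll);
  const [key, ...rest] = path;
  const child = coll === undefined ? undefined : coll[key];
  // copy this level, replacing only the branch along the path
  return { ...coll, [key]: updatePath(rest, transform, child) };
}

const coll = { foo: { bar: { a: 1 } } };
const result = updatePath(['foo', 'bar'], (v) => ({ ...v, b: 2 }), coll);
// result: {foo: {bar: {a: 1, b: 2}}}; coll itself is untouched
```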
Really liking Ramda :)
We would certainly welcome a PR. There's been some churn in this area recently, so there's no guarantee that it would be accepted. And the fact that this is a simplified version of evolve might work against it. But it is simpler, and that's also an advantage. The parallels discussed earlier with prop and path are fairly compelling.
Lenses (revamped in #1205) solve exactly this problem. Lenses are a general, composable mechanism for accessing a particular portion of a data structure (the focus), "setting" the value of the focus, or applying a transformation to the focus. A lens specifies the path to its focus in both directions: how to access the focus, and how to recreate the entire data structure after setting or transforming the focus.
var coll = {foo: {bar: {a: 1}}};
var fooLens = R.lensProp('foo');
var barLens = R.lensProp('bar');
var foo = {bar: R.compose(fooLens, barLens)};
R.view(foo.bar, coll);
// => {"a": 1}
R.set(foo.bar, [1, 2, 3], coll);
// => {"foo": {"bar": [1, 2, 3]}}
R.over(foo.bar, R.merge({b: 2}), coll);
// => {"foo": {"bar": {"a": 1, "b": 2}}}
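For anyone curious how view/set/over can work under the hood, here is a minimal van Laarhoven-style sketch in plain JS. This is not Ramda's actual implementation; Const, Identity, and compose2 are illustrative stand-ins:

```javascript
// A lens here is a function (a -> F a) -> (s -> F s) for some functor F.
const Const = (x) => ({ value: x, map: (_) => Const(x) });      // ignores map
const Identity = (x) => ({ value: x, map: (f) => Identity(f(x)) });

// Focus on a single property, copying the object on the way back out.
const lensProp = (k) => (toF) => (obj) =>
  toF(obj[k]).map((v) => ({ ...obj, [k]: v }));

const view = (lens, obj) => lens(Const)(obj).value;             // read the focus
const over = (lens, f, obj) => lens((x) => Identity(f(x)))(obj).value;
const set = (lens, v, obj) => over(lens, () => v, obj);

const compose2 = (l1, l2) => (toF) => l1(l2(toF));

const coll = { foo: { bar: { a: 1 } } };
const fooBar = compose2(lensProp('foo'), lensProp('bar'));
view(fooBar, coll);                           // {a: 1}
over(fooBar, (v) => ({ ...v, b: 2 }), coll);  // {foo: {bar: {a: 1, b: 2}}}
```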
Lenses obviate the need for the following functions:
R.assoc, R.assocPath, R.dissoc, R.dissocPath, R.adjust, R.update, R.path, R.pathEq, R.prop, R.nth, R.evolve
Many of these are useful shorthands, of course, but I'm not sure they're all warranted. ;)
Perhaps not, but there is certainly a matter of expressivity to consider. This:
var fooLens = R.lensProp('foo');
var barLens = R.lensProp('bar');
var foobar = R.compose(fooLens, barLens);
R.over(foobar, R.merge({b: 2}), coll);
// => {"foo": {"bar": {"a": 1, "b": 2}}}
would be fine if we are going to reuse fooLens, barLens, or foobar. But if we aren't, then this:
R.over(R.compose(R.lensProp('foo'), R.lensProp('bar')), R.merge({b: 2}), coll);
seems significantly less expressive than this:
R.evolve({foo: {bar: R.merge({b: 2})}}, coll);
There's a trade-off: if we have a dozen or more specialized functions, one is bound to be more expressive than the lens equivalent, at the cost of having to remember the differences between a number of similar functions.
> There's a trade-off
Yes, and that's all I meant. If we consider removing some or all of these, we'll have to consider these trade-offs carefully.
> the fact that this is a simplified version of evolve might work against it.
One argument for it would be variable usage in the path:
var key1 = "foo";
var key2 = "bar";
updateIn([key1, key2], R.merge({b:2}), coll);
Doing that with evolve would be extremely cumbersome, because you'd have to manually build each object and set the key variables on it: var o = {}; o[key1] = {.... Ugh. Although computed property names in ES6 would help a lot.
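For instance, with ES6 computed property names the evolve spec with variable keys can be built directly (a small sketch; `spec` is a hypothetical name):

```javascript
// Sketch: building a nested evolve-style spec with variable keys,
// using ES6 computed property names (no Ramda needed for the spec itself).
const key1 = 'foo';
const key2 = 'bar';
const spec = { [key1]: { [key2]: (v) => ({ ...v, b: 2 }) } };
// spec is now equivalent to {foo: {bar: fn}} and could be passed to R.evolve.
```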
> We would certainly welcome a PR. There's been some churn in this area recently, so there's no guarantee that it would be accepted.
I guess I can create one, but it'll take a couple of weeks since my summer vacation is just starting.
Lenses seem really cool, but to me they look like overkill for this. As said, they are verbose for one-off updates, and frankly, as a Ramda newbie it took me quite a while to figure out what was even going on.
Also I want to point out that assocPath is basically just a special case of the proposed function:
updatePath(["foo", "bar", "b"], R.always(2), coll);
Where assocPath can only set values, the proposed function can merge objects, increment numbers, append to arrays, etc.
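To make that relationship concrete, here is a dependency-free sketch (both helpers are illustrative stand-ins, not Ramda's implementations):

```javascript
// Sketch: assocPath expressed as a special case of the proposed updatePath.
const updatePath = (path, fn, coll) =>
  path.length === 0
    ? fn(coll)
    : { ...coll, [path[0]]: updatePath(path.slice(1), fn, (coll || {})[path[0]]) };

const always = (v) => () => v;
const assocPath = (path, v, coll) => updatePath(path, always(v), coll);

// updatePath handles transformations assocPath cannot express directly:
updatePath(['n'], (x) => x + 1, { n: 1 });                  // {n: 2}
updatePath(['xs'], (xs) => [...xs, 4], { xs: [1, 2, 3] });  // {xs: [1, 2, 3, 4]}
assocPath(['foo', 'bar', 'b'], 2, { foo: { bar: { a: 1 } } });
// {foo: {bar: {a: 1, b: 2}}}
```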
I'd like to point out that the current lenses in Ramda are not as compositional as one might naively assume. For this reason I recently developed a small library of Ramda compatible partial lenses that compose. See here. The library is still work in progress, but I'm already using it for production work.
Yes, you've hit on one of the primary restrictions of traditional lenses. I believe what you're creating is another type, traditionally called a Prism. Lenses deal only with a single occurrence of the focused value, Prisms deal with 0 - n occurrences.
One of the reasons that lenses have recently been split off into their own sub-project is to add types such as Prisms and Isos.
Lenses and partial lenses are related to prisms. However, I'm not convinced that partial lenses are prisms or that prisms would be preferred. Partial lenses, as I've constructed them, allow one to directly view optional (or undefined) data, insert new data, update existing data, and delete existing data. They also compose robustly when dealing with JSON data.
Here is an introduction to prisms that gives a simple type signature for lenses as
type Lens<'a,'b> =
('a -> 'b) * ('b -> 'a -> 'a)
and for prisms as:
type Prism<'a,'b> =
('a -> 'b option) * ('b -> 'a -> 'a)
Now, I'm not _intimately_ familiar with the Haskell lens library, and it is quite possible that the overly general type signatures used in that library allow non-obvious uses, but the above signature for prisms is decidedly different from the type signature that a partial lens would have. Specifically, the type signature for partial lenses would be:
type PartialLens<'a,'b> =
('a option -> 'b option) * ('b option -> 'a option -> 'a option)
which can also be expressed simply as:
type PartialLens<'a,'b> = Lens<'a option, 'b option>
which also explains the term _partial_ lens. (Edit: I fixed the types above to reflect better understanding.)
The option in the setter is what makes it possible to _insert_ and _delete_ data (rather than just view optional data or update data in case it exists in the original data structure) using partial lenses.
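As a rough illustration of that signature, a partial lens can be modelled in plain JS as a getter/setter pair where both sides accept and may return undefined in the role of None (`partialProp` is a hypothetical helper written for this comment, not part of the library):

```javascript
// Sketch: a partial lens as a [get, set] pair where both sides handle
// undefined, so viewing missing data, inserting, and deleting all work.
const partialProp = (k) => [
  // 'a option -> 'b option
  (obj) => (obj === undefined ? undefined : obj[k]),
  // 'b option -> 'a option -> 'a option
  (v, obj) => {
    if (v === undefined) {                                  // delete the key
      if (obj === undefined) return undefined;
      const { [k]: _, ...rest } = obj;
      return Object.keys(rest).length === 0 ? undefined : rest;
    }
    return { ...(obj || {}), [k]: v };                      // insert or update
  },
];

const [getFoo, setFoo] = partialProp('foo');
getFoo(undefined);                    // undefined (viewing missing data)
setFoo(1, undefined);                 // {foo: 1}  (inserting into missing data)
setFoo(undefined, { foo: 1 });        // undefined (deleting the last key)
setFoo(undefined, { foo: 1, bar: 2 }); // {bar: 2} (deleting one key)
```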
We are using partial lenses in implementations of UIs that essentially manipulate JSON data structures and so far this has seemed to work out to a very useful abstraction. Basically, following the schema of the JSON, we just define a single partial lens per JSON property or element. The partial lenses can then be used to view, insert, update, and delete properties or elements of (non-trivial) JSON objects. Whether a particular capability (e.g. to delete a particular property) is exposed to the user is not (necessarily) restricted by the partial lens definition for the property but some other part of the code.
I don't know prisms well enough to comment. My understanding of them from a distance is that they do exactly what your partial lenses do, but it is a fuzzy understanding, and from quite a distance. I look forward to fiddling with your implementation.
You might want to connect up with @DrBoolean and @scott-christopher who have big plans for the new Ramda-lens repo.
@CrossEye BTW, I bumped into Monocle recently. See More abstractions and Beyond Scala Lenses.
It would seem that the concept of partial lenses precisely corresponds to the "Optionals" in Monocle. OTOH, it would also seem that the concept is missing from the Haskell lens library.
I didn't know about Monocle when I started working on partial lenses. I did notice a few other similar, but not exactly the same ideas in various places. For example this blog post. As you can see, it is not symmetric (set operation is not using option/maybe). The way I came up with the idea of partial lenses is that I wanted a way to give subcomponents the ability to not only update but also to insert new and delete/remove items and used undefined to denote missing/to-be-removed data (or as the Nothing or None of the Maybe or Option type). From there on I simply tried to ensure that all the operations I added to partial lenses worked consistently and "symmetrically".
IMO, it doesn't seem to be extremely useful to distinguish the lens and prism concepts (from partial lenses or optionals) in a dynamically typed language such as JavaScript.
@polytypic: This is very interesting stuff. I'm still feeling my way around with this stuff. @davidchambers, @scott-christopher, @DrBoolean: Any thoughts?
Looking into this in more detail, I realized that I was probably too hasty and wrong above: Monocle's optionals do not seem to be precisely the same as what I call partial lenses. Neither do partial lenses seem to be precisely the same as prisms.
In the Monocle library it is perhaps lenses that most closely correspond to partial lenses. However, partial lenses differ from lenses in that the inputs and outputs of partial lenses are always optional aka partial. It seems that what I call partial lenses can then roughly subsume isos, lenses, prisms and optionals. For example, the equivalent of the Void optional as described here is called nothing in partial lenses and it is not at all useless, because it means that we can see partial lenses with choice as a monoid.
Partial lenses seem to be strongly related to the maybe monad, which is a monad with plus, but I do not yet fully understand the connection. A partial lens can basically perform an arbitrarily complex search during the lens execution. One can obtain a kind of bind operation by using partial lens compose and choose. See: https://github.com/calmm-js/partial.lenses/pull/21/files
I'll have to take a closer look at partial.lenses, but from a quick glance it looks interesting.
In the Haskell lens library you can achieve a similar result by composing at along with non.
at is for targeting map-like things and takes a key to produce a Lens that will focus on a Just value if the key exists, otherwise Nothing.
class Ixed m => At m where
at :: Index m -> Lens' m (Maybe (IxValue m))
non takes a default value and produces an Iso that allows converting between a Maybe a and an a, where the default value is used to convert from a Nothing. Reversing the Iso will produce a Nothing if given something equal to the default value. n.b. this makes use of Prism.only under the hood along with the more general version non' :: APrism' a () -> Iso' (Maybe a) a.
non :: Eq a => a -> Iso' (Maybe a) a
And to use the example from the lens docs:
>>> Map.empty & at "hello" . non Map.empty . at "world" ?~ "!!!"
fromList [("hello",fromList [("world","!!!")])]
We can translate that into JS using flunc-optics, starting with a LensP {a} (Maybe a) targeting an object's foo key.
const fooLens = Lens.atObject('foo');
To see how this behaves, we can attempt to get foo using view both when it does and does not exist and then set a Just and a Nothing value to update or remove the foo value.
Getter.view(fooLens, { 'foo': 1 }); // Just(1)
Getter.view(fooLens, { 'bar': 1 }); // Nothing
Setter.set(fooLens, Maybe.Just(5), { 'foo': 1 }); // {"foo": 5}
Setter.set(fooLens, Maybe.Nothing(), { 'foo': 1 }); // {}
We can then compose this Lens {a} (Maybe a) with another, using Iso.non({}) between the two, which handles constructing a default value when it receives a Nothing.
const fooLens = Lens.atObject('foo');
const barLens = Lens.atObject('bar');
const fooBarLens = R.compose(fooLens, Iso.non({}), barLens);
And we can see what this looks like for various forms of the { foo: { bar: a } } target.
Setter.set(fooBarLens, Maybe.Just('baz'), { a: 1, foo: { b: 2, bar: 'bob' } });
// {"a": 1, "foo": {"b": 2, "bar": "baz"}}
Setter.set(fooBarLens, Maybe.Just('baz'), { a: 1, foo: { b: 2 } });
// {"a": 1, "foo": {"b": 2, "bar": "baz"}}
Setter.set(fooBarLens, Maybe.Just('baz'), { a: 1 });
// {"a": 1, "foo": {"bar": "baz"}}
Setter.set(fooBarLens, Maybe.Just('baz'), {});
// {"foo": {"bar": "baz"}}
The ?~ operator used in the Haskell example above is a convenience for setting a Just value, which would be equivalent to setJust = (l, b, s) => Setter.set(l, Just(b), s) and could be used to clean up some of the above.
So yep, this is possible using a combination of Lens, Iso and Prism, but it'd be nice to hide the complexity away with some specialised functions.
And I'll be sure to set aside some time to take a proper look at partial.lenses when my work settles down a little :smile:
After playing with this a little more, it looks like we can get a decent way towards the above using mostly existing functions in Ramda (with some minor tweaks).
The one new function needed is:
// iso :: (s -> a) -> (b -> t) -> Iso s t a b
// where Iso s t a b :: Functor f => (a -> f b) -> (s -> f t)
const iso = (to, fro) => afb => s => map(fro, afb(to(s)))
Then we can create something similar to non as mentioned in the previous comment:
const defaultIso = def =>
iso(a => a === void 0 ? def : a, when(equals(def), always(void 0)));
This allows going from an undefined value to some default value. In reverse it will produce undefined if the value is equal to the default.
We then need to make a slight tweak to the existing lensProp function to remove the key from the object if the value becomes undefined (I believe this change would also make lensProp satisfy the lens laws too).
const lensProp_ = k =>
lens(prop(k), (v, obj) => v === void 0 ? dissoc(k, obj) : assoc(k, v, obj));
A simple Iso can then be defined to default to an empty object when given an undefined value. This Iso can be used in a composition of lenses to handle the possible absence of a key in an object.
const partialObj = defaultIso({});
const fooBarLens = compose(lensProp_('foo'), partialObj, lensProp_('bar'));
set(fooBarLens, 1, {}); // {"foo": {"bar": 1}}
:fireworks:
After playing with this a little more, it looks like we can get a decent way towards the above using mostly existing functions in Ramda (with some minor tweaks).
Yes, of course. :)
As implied in the partial lenses docs, it reuses the basic implementation strategy (van Laarhoven) from Ramda. What is different is that all the lens combinators are implemented with the assumption that inputs and outputs are optional. In particular (but not just), primitive lenses like prop and index essentially incorporate the mechanisms built above. Because all lenses then work with optional inputs and outputs, one can compose them without extra glue (like partialObj above) and it "just works".
Here is some more background in case it might interest.
We are using lenses with observables heavily. Basically, a single UI component may show multiple observable properties of lensed data and there can be lots of components on a single page. Memory usage, in particular, can become significant rather quickly.
We also use lenses heavily in the sense that we define (using the combinators) lots of them. To get an idea, we have source files with multiple consecutive screenfuls of definitions of lenses to manipulate (non-trivial) JSON models. Keeping lens definitions concise is definitely of value.
We also specifically deal with JSON models "as they come" (from external services). The way things are now is _very_ simple. Changing the models (which would mean transforming them on the client side) to suit a lens library would likely only add more code and make things more complex. E.g. requiring that models use wrappers like an explicit Maybe does not work for our needs/wants.
Aside from a few combinators introduced for theoretical completeness/curiosity, all the combinators in the partial lenses library have been introduced due to an actual use case for the combinator.
At the time our project started, the lenses that Ramda provided out of the box were, and still are, simply insufficient for our needs, which is why we started writing our own lens combinators (e.g. find was one of the first). That work branched off into the partial lenses library, which has served us well. Although we use the library for work, I've developed it mostly on my own time. Personally I wouldn't mind seeing a non-over-engineered, performant lens library with sufficient functionality to replace partial lenses, and I hope that mentioning partial lenses here will ultimately help towards that goal. :)
The next thing I have in mind for the partial lenses library, if I get around to it, is to see how much room there is to improve performance and reduce memory usage (the end result might be to drop the van Laarhoven implementation, or to keep it).
> We then need to make a slight tweak to the existing lensProp function to remove the key from the object if the value becomes undefined (I believe this change would also make lensProp satisfy the lens laws too).
@scott-christopher, does this assume undefined is never used as a property value? If it were wouldn't we lose track of whether the property was present, forcing us to choose {} or {x: undefined} without knowing which is correct?
> @davidchambers: does this assume undefined is never used as a property value?
Yep, it does. This is necessary if we wanted to make lensProp law abiding while sticking to the behaviour of prop(k) returning undefined if the property doesn't exist.
e.g. currently:
l = lensProp('a');
a = {};
equals(a, set(l, view(l, a), a)); // false -- should be true
> If it were wouldn't we lose track of whether the property was present, forcing us to choose {} or {x: undefined} without knowing which is correct?
It would. By just relying on undefined, the lens would have to treat fetching a key that didn't exist the same as fetching a key whose value is undefined. I don't personally have issue with this, though I can appreciate this might be a deal breaker for some.
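The ambiguity is easy to demonstrate in plain JS:

```javascript
// With undefined overloaded as "missing", these two objects are
// indistinguishable through the lens, even though `in` can tell them apart.
const obj1 = {};
const obj2 = { x: undefined };
obj1.x === obj2.x;  // true -- both property reads yield undefined
'x' in obj1;        // false
'x' in obj2;        // true
```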
However, if we introduced a Maybe here instead of overloading undefined as Nothing we could then be certain we're not confusing an undefined value with one that doesn't exist.
// pretend for a moment this exists in Sanctuary ;)
S.lensProp = k =>
lens((obj) => k in obj ? S.Just(obj[k]) : S.Nothing(),
(m, obj) => S.fromMaybe(() => dissoc(k, obj), map(v => () => assoc(k, v, obj), m))());
const maybeIso = def =>
iso(S.fromMaybe(def), ifElse(equals(def), always(S.Nothing()), S.Just));
view(S.lensProp('a'), {});
// Nothing()
view(S.lensProp('a'), { a: 42 });
// Just(42)
view(maybeIso({}), S.Just({ a: 42 }));
// {"a": 42}
view(maybeIso({}), S.Nothing());
// {}
const abL = compose(S.lensProp('a'), maybeIso({}), S.lensProp('b'));
set(abL, S.Just('foo'), {});
// {"a": {"b": "foo"}}
set(abL, S.Just('foo'), { a: { b: 'bar', c: 'baz' }, d: 'moo' });
// {"a": {"b": "foo", "c": "baz"}, "d": "moo"}
set(abL, S.Nothing(), { a: { b: 'bar', c: 'baz' }, d: 'moo' });
// {"a": {"c": "baz"}, "d": "moo"}
// The laws
const l = S.lensProp('a');
const a = {};
const b = S.Just('b');
const c = S.Just('c');
equals(a, set(l, view(l, a), a));
// true
equals(b, view(l, set(l, b, a)));
// true
equals(set(l, c, a), set(l, c, set(l, b, a)));
// true
Closing as the desired transformation can be achieved via:
const foo = R.lensProp('foo');
const bar = R.lensProp('bar');
R.over(R.compose(foo, bar), R.merge({b: 2}), {foo: {bar: {a: 1}}});
// => {foo: {bar: {a: 1, b: 2}}}
@davidchambers to get the correct merge order I would do R.merge(R.__, {b: 2})
R.over(R.lensPath(['foo','bar']), R.merge({b: 2}), {foo: {bar: {b: 1}}});
// => { foo: { bar: { b: 1 } } }
R.over(R.lensPath(['foo','bar']), R.merge(R.__, {b: 2}), {foo: {bar: {b: 1}}});
// => {foo: {bar: {b: 2}}}
Good point, @keeth. R.assoc('b', 2) is another option.
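The difference above comes down to R.merge's argument order, where the second argument's properties win on conflict. A plain-object sketch of the same semantics:

```javascript
// Object-spread merge mirroring R.merge(a, b): b's own properties win.
const merge = (a, b) => ({ ...a, ...b });

// R.over(lens, R.merge({b: 2}), coll) calls R.merge({b: 2}, existingValue),
// so the existing value wins on conflict:
merge({ b: 2 }, { b: 1 });  // {b: 1}
// R.merge(R.__, {b: 2}) calls R.merge(existingValue, {b: 2}) instead:
merge({ b: 1 }, { b: 2 });  // {b: 2}
```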
I use this to add new properties to the objects (in the list):
const assocFn = R.curry((prop, fn) => R.over(
R.lens(
fn,
R.assoc(prop),
),
R.identity,
));
// data: [{}]
const normalizedObjectList = R.pipe(
// ... here come filters,
R.map(R.pipe(
// adds deepNestedId
assocFn('deepNestedId', R.path(['deep', 'nested', 'object', 'id'])),
// sum of other properties
assocFn('sumAB', R.pipe(R.props(['a', 'b']), R.sum)),
// ...etc
)),
)(data);
I've read most of the comments but didn't find anything similar to updateIn in Clojure or update in lodash. I just want a path and a transformer function.
@ioRekz This could help: https://github.com/ramda/ramda/wiki/Cookbook#map-over-the-value-at-the-end-of-a-path
Please consider merging #2011 as it introduces the missing functionality.
Here's one idea for a solution. updateIn and various derivatives would be cool, especially for looping over sets of nested objects to update other nested objects (e.g. add this set of edges to all incident vertices):
const makePath = ({ group, id}) => [group, id] // ["vertices", 123456]
const updateObject = (object, change) => {
  const objectPath = makePath(object);
  // assocPath is curried, so this returns a function awaiting the collection
  return assocPath(objectPath, merge(object, change));
};