const also = (fn, x) => [fn(x), x]
const concurrent = (fns, x) => map(applyTo(x), fns)
const progress = (fns, x) => scan((acc, fn) => fn(acc), x, fns)
also(toUpper, 'foo') // ['FOO', 'foo']
concurrent([toUpper, toLower, concat('Bar')], 'Foo.Bar') // ['FOO.BAR', 'foo.bar', 'BarFoo.Bar']
progress([toUpper, concat(__, '.BAZ'), split('.')], 'Foo.Bar')
// ['Foo.Bar', 'FOO.BAR', 'FOO.BAR.BAZ', ['FOO', 'BAR', 'BAZ']]
I feel that a common barrier in writing pipes is losing information that one later needs. The solutions I'm aware of using Ramda's existing API feel overpowered and difficult to reason about. Adding simple convenience functions like these would provide a much more accessible alternative.
For example
pipe(
....,
chain(append, head),
....
)
pipe(
....,
ap(flip(append), head),
.....
)
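(For reference: on functions, chain(f, g)(x) = f(g(x), x) and ap(f, g)(x) = f(x, g(x)), so both of these expand to append(head(list), list).)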
To understand these, one is forced to think at an extra level of abstraction (the function monad, the S combinator), and to do it all at once. While they can be used, I feel the abstraction raises the barrier to entry for end users, and they still lack the power of concurrent/progress for this case.
pipe(
....,
list => converge(flip(append)(list), [head])(list),
....
)
Other solutions are even more unwieldy.
pipe(
...,
also(head),
apply(append),
...
)
pipe(
...,
concurrent([head, identity]),
apply(append),
...
)
pipe(
...,
progress([head]),
apply(flip(append)),
...
)
With these functions we can split this into two trivial steps and easily describe the flow of what's going on.
In general, Ramda provides a lot of ways of transforming data and converging collections of data into points, and I feel it would be nice to provide a simple way of covering the other case: diverging data into paths.
I also feel that this is a logical extension of the existing API. Ramda already provides a lot of constructs for conditionally branching into different paths (when, cond, ifElse, etc.), whereas this provides unconditional branching.
Note the symmetry that they share with existing low-level Ramda functions:
concurrent: _apply N functions concurrently to 1 variable with N results_
map: _apply 1 function concurrently to N variables with N results_
progress: _apply N functions consecutively to 1 variable with N results_
scan: _apply 1 function concurrently to 1 variable with N results_
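For instance, a quick sketch against the definitions above (inc, dec and add are ordinary Ramda functions):
concurrent([inc, dec], 10) // [11, 9]  (N functions, 1 variable, N results)
map(inc, [10, 20, 30]) // [11, 21, 31]  (1 function, N variables, N results)
progress([inc, dec], 10) // [10, 11, 10]  (N functions applied consecutively)
scan(add, 0, [1, 2, 3]) // [0, 1, 3, 6]  (1 function applied consecutively)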
That's an interesting idea. If you want to write up a pull request for it, I can promise it will get a fair hearing. No guarantees it would be accepted, of course.
I do have some concerns.
I'm not thrilled with using fork for a name here, as that seems to have too many other FP meanings. But I don't have an immediate alternative.
Another hesitation has to do with the fact that this solves the most common problem but doesn't try to address others. What happens if we want access to the previous two results? Is there some way in which it would be better to introduce some sort of do-notation?
Finally, there is something less than ergonomic about needing to use apply here. It makes me wonder if there is some pipeWith version that would clean this all up. I'm pretty sure there's not, but that I'm thinking about this means I still find something not quite satisfying in this solution.
But overall, my instinctive response is positive. I'll try to give it some deeper thought.
That's good to hear. What about branch/branchAll?
What happens if we want access to the previous two results?
Yes, I had thought about this as well. Something like:
const forkThrough = (fns, x) => scan((acc, fn) => fn(acc), x, fns)
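For instance (assuming the definition above):
forkThrough([inc, multiply(2)], 5) // [5, 6, 12]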
Though this is more abstract and covers a less common use case. I was really proposing this more based on its simplicity. Low hanging fruit that could give extra power to a novice of functional programming, without too much abstraction.
But maybe that could round out the set. branch/branchAll/branchThrough
It makes me wonder if there is some pipeWith version that would clean this all up
Something like
pipeWithBranch(..., branch(head), append, ...) ?
Maybe there are possibilities there, but I feel that adds conceptual overhead without giving any more power than composition of my proposed functions. I feel it also lacks extensibility.
On the other hand, it's easy to add further simple, easy-to-grasp functions to further augment the building of sophisticated unidirectional workflows inside an ordinary pipe, such as:
const pipeLeft = (fns, xs) => adjust(0, apply(pipe, fns), xs)
const pipeOn = (i, fns, xs) => adjust(i, apply(pipe, fns), xs)
const mapEach = (fns, xs) => zipWith(call, fns, xs)
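For example, a quick sketch using these definitions:
pipeOn(1, [toUpper, split('.')], ['a', 'foo.bar', 'c']) // ['a', ['FOO', 'BAR'], 'c']
mapEach([inc, toUpper], [1, 'a']) // [2, 'A']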
On reflection, I think the forkThrough function I suggested to address one of the shortcomings you mentioned is the exact logical complement of forkAll.
call applies a single function to a target
pipe applies multiple functions consecutively to a target with a single result
forkAll applies multiple functions concurrently to a target with multiple results
forkThrough applies multiple functions consecutively to a target with multiple results
There is inarguably a symmetry there.
I had said before that these functions would be a logical extension of Ramda's API (namely the call function), but I think more than that, they are a logical _completion_ of a part of Ramda's API.
forkThrough is to unapply(pipe) what scan is to reduce, and it is to scan what applyTo is to call.
forkAll is essentially a simplified unconditional analogue to cond.
Admittedly, the names should probably be changed. Perhaps something like also/conc (concurrent)/prog (progressive) instead of fork/forkAll/forkThrough.
I do like branch better than fork here, but it still doesn't feel great.
When I was suggesting pipeWith, it was in the (probably forlorn) hope that, just as pipeP = pipeWith(then) replaces pipe(makePromise, then(f1), then(f2), then(f3)) with pipeP(makePromise, f1, f2, f3), and pipeK = pipeWith(chain) replaces pipe(foo, chain(bar), chain(baz)) with pipeK(foo, bar, baz), we could write some function that would magically let us remove the fork and apply calls. But I really think this is a, ahem, pipe dream.
But when I was talking about solving the more general problem I meant to suggest that the declarative nature of piping is often helpful and feels more FP-ish than a solution that stores a number of temporary variables. But the latter is strictly more powerful, and I was wondering if there was any good way to bridge that gap.
I woke up overnight with a thought in mind, and I've just found a few minutes to code it up. It's still less ergonomic than I would like, but it does seem to actually offer a reasonable middle-ground. We might use it like this:
const foo = pipeline (
f1,
{as: 'b', fn: f2},
f3,
{as: 'd', with: ['b', '_'], fn: f4},
{with: ['d', 'b'], fn: f5}
)
// foo ~= (...args) => f5 (f4 (f2 (f1 (...args)), f3 (f2 (f1 (...args)))), f2 (f1 (...args)))
...which would be somewhat equivalent to
const foo = (...args) => {
const a = f1 (...args)
const b = f2 (a)
const c = f3 (b)
const d = f4 (b, c)
return f5 (d, b)
}
We can name the output of a step in the pipeline by supplying as, and we can use previously named values by supplying with. If we're doing neither, we can just supply the function. Here the '_' associated with f4 represents the (unnamed) result of the previous step. It could conceivably be replaced with Ramda's placeholder.
This demo code is simply meant to show all the possibilities. It is likely only to be an improvement on the explicit temporary-variable version when you need, say, the result of step 1 in steps 14 and 17. If used a lot, it would be too ugly.
There's a naive implementation of this on the REPL.
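A minimal sketch of such a naive implementation (my own reconstruction, not necessarily the linked REPL version; steps are plain functions or {as, with, fn} descriptors as described above):
const pipeline = (...steps) => (...args) => {
  const named = {}
  let prev = args
  steps.forEach((step, i) => {
    const {fn, as, with: deps} = typeof step === 'function' ? {fn: step} : step
    const inputs = deps
      ? deps.map(k => (k === '_' ? prev : named[k])) // named values; '_' is the previous result
      : (i === 0 ? prev : [prev]) // the first step receives the raw arguments
    prev = fn(...inputs)
    if (as) named[as] = prev
  })
  return prev
}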
This is very much a thought in flux. The minute I start to feel happy about it, I come back to my standard advice to not make a fetish out of point-free. Then, when it starts to look too overwrought, I start thinking of the times I would have used this (or a fork/apply implementation) to simplify things.
I considered other variants of this, although I didn't try to implement them:
const foo = pipeline (
f1,
as ('b') (f2),
f3,
as ('d') .with (['b', '_']) (f4),
with (['d', 'b']) (f5)
)
const foo = pipeline (
f1,
f2,
as ('b'),
f3,
with (['b', '_']),
f4,
as ('d'),
with (['d', 'b']),
f5
)
There's something more satisfying about these to my eyes, but they would probably be a bit more involved as an implementation. If nothing else, it would involve adding not just pipeline, but as and with. Really, what would seem best would be a language extension:
const foo = pipeline (
f1,
f2 -> a,
f3,
f4 <- (a, _) -> b,
f5 <- (b, a)
)
That's really clean. But I'm not even slightly interested in pursuing that.
I would appreciate your feedback on my last 3 posts. For convenience, I can refine and sum them up again here (maybe I should rewrite my OP at this point).
I would provisionally propose the names also/concurrent/progress instead of fork/forkAll/forkThrough, though the names are not really the important part right now.
The main thing I think you should consider (apart from how powerful they are) is how fundamental these functions actually are, and what perfect symmetry they have with existing low level functions.
Note the interesting symmetry of:
pipe: _apply N functions consecutively to 1 variable with 1 result_
reduce: _apply 1 function consecutively to N variables with 1 result_
Notice progress and concurrent exhibit equal symmetry with low-level Ramda functions:
concurrent: _apply N functions concurrently to 1 variable with N results_
map: _apply 1 function concurrently to N variables with N results_
progress: _apply N functions consecutively to 1 variable with N results_
scan: _apply 1 function concurrently to 1 variable with N results_
I don't know about you, but I think this is something very interesting to think about.
Ultimately, it comes down to the philosophy of Ramda. If its philosophy is to build a library of functional building blocks in an ordered manner, giving precedence to more atomic functions like map, filter, reduce and the like, then I think also/concurrent/progress should logically be added alongside them.
I must admit I do also like your as/with idea. They are quite clear and seem like an answer to the let/where constructs in functional languages.
My personal favourite variant of this would be something like:
pipe(
as('bar', foo),
with('bar')(baz)
)
Perhaps extending pipe, instead of having a second pipe function, making with variadic, with the previous value automatically passed in.
I find having an array with a __ placeholder feels a little clunky. I don't see why a function in a pipe shouldn't always receive the previous value; if it didn't, with variables being diverted or passed through, I feel it becomes a quite confusing crisscross of traffic.
Overall, I agree that it is a good answer to my original use case.
However, I don't think it should be thought of as an either/or situation. They're both elegant potential additions that would be used in different cases.
I've used the words concurrently and consecutively in describing these concepts. Perhaps independent and cumulative, respectively, are the more accurate words. On that note, something like each and accum now appeal to me more than concurrent and progress respectively. I believe they are conceptually elegant enough to warrant snappier names such as these, though sadly these particular examples clash with resident functions arguably less deserving of a place, namely the impure forEach and the cumbersome mapAccum.
Maybe even run, instead of accum, as in a running total. onto, instead of each, could also work, as it suggests a surjective relation.
Another observation: converge is essentially a combination of concurrent and apply, doing the work of concurrent as an inner step. I really believe that if a function can be broken into smaller building blocks, those blocks should have been included in the API before that function, and should certainly be added on discovery.
A quick meta-discussion comment first: Here on Ramda, and in most GitHub communities I've seen, we act differently than we might on, say, StackOverflow, where the goal is to get individual questions and answers as complete as possible. Here we usually edit a post only for minor clean-up/clarifications. We use further comments as the mechanism for discussion. I've found it disconcerting to try to respond to these very interesting ideas, because the things I think I'm commenting on keep changing underneath me and new ideas are being added to things I thought complete.
I also found it harder to follow and work with because the sample implementations provided are mostly broken. (That var is not a legal identifier is only one of several problems.) Working code makes these things easier to deal with.
Now to the main ideas:
I find this whole idea intriguing. And whereas I thought the original motivation was a simple way to enhance pipelines, it has morphed into discussion of some useful -- perhaps extremely useful -- functions not already exposed in Ramda. I like these functions a lot.
We can bikeshed more on the names later: we all know that naming is one of the two hardest problems in computer science. But you're right that the parallels are compelling.
(Note that this one seems incorrect:
scan: apply 1 function concurrently to 1 variable with N results
scan takes an array of values.)
Here are my initial implementations of these. They would need to be gone over and converted to current Ramda style, but I think they work well enough for discussion now:
const also = (fn) => (x) =>
[fn (x), x]
const progress = (fns) => (x) =>
tail (scan ((acc, fn) => fn (acc), x, fns))
const concurrent = (fns) => (x) =>
map (applyTo (x), fns)
also (toUpper) ('foo')
//=> ['FOO', 'foo']
progress ([toUpper, concat (__, 'BAZ'), split ('.')]) ('Foo.Bar')
//=> ['FOO.BAR', 'FOO.BARBAZ', ['FOO', 'BARBAZ']]
concurrent ([toUpper, toLower, concat ('Bar')]) ('Foo.Bar')
//=> ['FOO.BAR', 'foo.bar', 'BarFoo.Bar']
You can also play with these in the Ramda REPL.
I will try to respond to your as-with comments tomorrow. But mostly, I think that discussion, which was motivated by your initial pipeline-enhancement work and has less to do with how this proposal has since developed, probably belongs elsewhere.
@KayaLuken:
Sometime soon, after giving it more thought, I will consider opening a new issue regarding the as-with functions, but I would like to respond to your suggestions:
My personal favourite variant of this would be something like:
pipe( as('bar', foo), with('bar')(baz) )
My original version looked almost exactly like that. There is a problem though for functions that need both as and with; no matter what I tried with functions like this, something ended up looking ugly. This seemed the best of them: as ('d') .with (['b', '_']) (f4) or with (['b', '_']) .as ('d') (f4), which are slightly odd syntactically, but which, if my scribbled middle-of-the-night notes are correct, we could implement without too much difficulty.
Perhaps extending pipe, instead of having a second pipe function
Ramda tries very hard to avoid this. When there are several implementations (like map for lists, objects and functions), it's only because we see a common abstraction (Functor) that those types all share. pipe as it stands is simply an order-reversed version of compose, often easier to read, but not logically any different. It can be implemented simply and efficiently. Such a pipeline operator would be inherently different, maintaining a context object, type-checking each step, etc. I doubt we'd want to try to combine them.
making with variadic, with the previous value automatically passed in.
I'm hoping in v2.0 (if we ever reach 1.0!) to replace the few remaining variadic functions (pipe, compose, etc.) with ones that take an array instead. I think it would be great if Ramda had no variadic functions at all, so I'm not much interested in adding another one. (BTW, this has nothing to do with supporting users' variadic functions. I think that's one thing on which Ramda and Sanctuary will forever disagree.)
I find having an array with a __ placeholder feels a little clunky. I don't see why a function in a pipe shouldn't always receive the previous value; if it didn't, with variables being diverted or passed through, I feel it becomes a quite confusing crisscross of traffic.
While the array does feel clunky here, I stand by my no-more-variadic-functions notion and would still prefer to keep it. But as to the other point, forcing the user to always accept the previous value, and in a fixed position, makes this much less useful to my mind. Yes, the placeholder, or any token we could choose, is a bit nasty, but it would only be used in those cases where the user is already saying, "The function for this step is not a standard receive-the-previous-result one. Instead, it has the following signature: ...". In that case, allowing the user to specify if and where the previous result is to be used makes sense. One other variant would simply require her to use as on the step before with if she wants to use that result. That would avoid the clunkiness of the placeholder, at the expense of a minor inconvenience. And it would probably be easier to understand.
In any case, I will think on this a little more, and if I want to present it, I will open a new issue. As the current issue has evolved, this now seems an entirely orthogonal concern.
@ramda/ramda-core:
I'd love to hear any feedback on the ideas here -- not my as-with stuff on which I still want to stew a bit before deciding whether to even really propose it -- but the suggested functions presented best in https://github.com/ramda/ramda/issues/2930#issuecomment-562350369.
Do these seem like generally useful functions? Are there problems I'm not seeing at first glance?
Do these seem like generally useful functions? Are there problems I'm not seeing at first glance?
I've used something like concurrent already. It is very useful to kick off parallel tasks and collect all the results later:
Promise.race(concurrent([
  x => fetch(`https://httpbin.org/get?foo=${x}`, {mode: 'cors'}),
  x => fetch(`https://httpbin.org/get?bar=${x}`, {mode: 'cors'})
])("baz"))
  .then(res => res.json())
  .then(data => console.log(data.args));
// ~> e.g. {"bar":"baz"}
I'd understand a function like progress as a function which produces some chained results, or as a function pipeline returning all intermediate results. Perhaps that's useful when one needs to find a particular intermediate result...
const vendor = (litersInStock, pricePerLiter, product) => customer => {
  const withdrawnMoney = Math.min(
      customer.balance,
      Math.min(litersInStock, customer.quantityRequested) * pricePerLiter
    ),
    litersHandedOut = withdrawnMoney / pricePerLiter;
  litersInStock -= litersHandedOut;
  return over(lensProp('balance'), subtract(__, withdrawnMoney), assoc(product, litersHandedOut, customer));
}
const shopMilk = vendor(5, 0.5, 'milk'), shopVodka = vendor(1, 4, 'vodka');
const me = {balance: 2, quantityRequested: 2};
const tour1 = progress([shopMilk, shopVodka])(me);
// ->[{"balance":1,"quantityRequested":2,"milk":2},{"balance":0,"quantityRequested":2,"milk":2,"vodka":0.25}]
const tour2 = progress([shopVodka, shopMilk])(me);
// ->[{"balance":0,"quantityRequested":2,"vodka":0.5},{"balance":0,"quantityRequested":2,"vodka":0.5,"milk":0}]
find(propEq('balance', 0))(tour2)
// ->{"balance": 0, "quantityRequested": 2, "vodka": 0.5}
(Sorry for the long example, but it's very late)
I feel that a common barrier in writing pipes has been losing information that one later needs.
Yes, I totally agree. I find that idea of inner pipeline branching and joining via also/apply nice. The "as-with stuff" also looks very powerful; however, refactoring may become difficult: if I decide to rename a branch key, I have to carefully inspect every step in the pipeline.
I would also like to give attention to another low-level function closely related to concurrent and progress:
const mapEach = (fns, xs) => zipWith(call, fns, xs)
mapEach seems an essential counterpart to the original trio, being able to "carry on where they leave off".
pipe(
progress([returnFoo, returnBar, returnBaz]),
mapEach([foo, bar, baz ])
)
It's also useful in its own right, sharing these functions' ability to let you "do things in one pass". For example, consider the problem of traversing an unknown data structure and collecting data. With mapEach, this would seem to be quite easy.
until(
predicateForLeftItemOfArray,
mapEach([traverseDataStructure, getDataAtThisPoint])
)
One use for concurrent is in making multiple reads of a data structure at once, allowing a flatter, cleaner alternative to cond/ifElse/when/and/or for more involved use cases.
Looking at https://ramdajs.com/docs/#cond, everything is hardcoded into one nested array, but with concurrent and mapEach, the predicates and return functions can be extracted and acted on separately. This is more powerful and allows for handling of more complex cases, such as when the results of the predicates are interdependent. A small sketch follows below.
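For instance (my own sketch; propOr, inc, dec and identity are ordinary Ramda functions, assuming the curried concurrent from above and mapEach as just defined):
const reads = concurrent([propOr(0, 'x'), propOr(0, 'y'), propOr(0, 'z')])
reads({x: 1, y: 2}) //=> [1, 2, 0]
mapEach([inc, dec, identity], [1, 2, 0]) //=> [2, 1, 0]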
I also noticed that a lot of higher-level Ramda functions can be built quite easily from these functions, for example partition and converge. Another testament to how fundamental they are.
@semmel:
While that is a fine example of the use of concurrent, I would hesitate to actually use it in documentation, as people already think of concurrency when they see Promise (for better or worse), and I would hate to give the impression that this is only for asynchronous things.
And yes, your other example gives the flavor of progress well; I'm sure there are shorter ones, but it works for me.
I would expect the as-with construct would not be very difficult to refactor, as it has nothing to do with the functions' parameter names, only the identifiers used to pass them between steps. Unless your pipeline involves many dozens of functions, I would expect it to be easy to rename.
@KayaLuken: That mapEach is a function I keep reaching for, thinking that we must already have it and that I simply forgot its name. So yes, I definitely see the use.
As to building other functions out of these: while we could, Ramda usually chooses not to, to avoid blowing users' performance budget in library code. But that does not make the abstractions any less powerful.
@CrossEye
While that is a fine example of the use of concurrent, ... I would hate to give the impression that this is only for asynchronous things
Yes indeed, but on the other hand people might misunderstand it as something which runs functions concurrently; in concurrent ([toUpper, toLower, concat ('Bar')]), toUpper and toLower run in the main JavaScript thread sequentially, not concurrently.
With side effect-free functions this might not matter.
Perhaps it's just that, with the pervasiveness of asynchronicity in JS I may have a problem with the name concurrent.
@semmel: You're absolutely right. There needs to be a real discussion about the names. I just want to make sure that people are on board with the idea before someone spends the effort on a PR for this.
I think these are very useful functions. And I'm wondering if others do so too.
Yes, my naming was based on slightly flawed terminology in describing these functions' behaviour. I should have used _independent_ and _cumulative_ (or _interdependent_) instead of _concurrent_ and _consecutive_ respectively, but I'm not sure how to create a snappy name from that.
I would prefer simply calling it _each_ instead of _concurrent_. I think it's conceptually elegant enough to deserve such a name, although it overlaps with _forEach_ (which is not even a pure function anyway).
@KayaLuken: We'll figure out the names. I have no problem with each. forEach can just go to hell! :smiling_imp: While forEach itself is pure, its only purpose is to do impure things -- sort of like Congress! -- and it's one of the few remaining functions like that.
I'd like to give a few more days for feedback, but unless something major comes up, I think we should try this. Feel free to start a PR if you want to, or I can do so.
So any other feedback, @ramda/core, @Bradcomp, @GingerPlusPlus, anyone?
Can we see the type signatures?
Thank you for the mention; I ignored the issue due to "concurrent" in the title :P
mapEach already exists: [evolve].
BTW, evolve doesn't need to be recursive; explicit recursion seems to work equally well.
concurrent (horrible name) already exists: [juxt], [applySpec] (#1774).
I see no use for progress, but it could easily be implemented by the user if needed as scan1(applyTo), if scan1 gets added to Ramda.
I see no use for also, but it can easily be implemented by the user if needed as also(fn) = progress([fn]) = chain(pair, fn).
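(For reference, Ramda's chain on functions gives chain(f, g)(x) = f(g(x), x), so for example chain(pair, toUpper)('foo') //=> ['FOO', 'foo'].)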
Is there anything I missed?
(horrible name)
I've already said this was wrongly named and should be changed. It's just a placeholder.
concurrent already exists
No, it doesn't. applySpec returns an object. juxt is a mess that seems to return a variadic function that bundles everything together into a black box and applies everything to everything. It has nowhere near the purity or clarity of concurrent, nor does it even have the same semantics. concurrent simply returns an array of the results of the functions, which can then be used separately (kind of the point of the function). Nice, simple and clean. juxt does not do this in the general case.
if scan1 gets added to Ramda
If you're going to bring up some hypothetical function that doesn't even exist, you should at least describe the function, or supply a link. How do you expect some random reader of this to know what you are referring to?
but it could easily be implemented by user if needed as scan1(applyTo)...chain(pair, fn)
I don't see the value of this argument at all. Of course it can be implemented with other Ramda functions. Literally every possible Ramda function can be derived from other Ramda functions, often in far more trivial ways than chain(pair, fn). Should inc not exist because it's add(1)? Should we remove applyTo because it's just flip(call)?
The purpose of this library is to provide a logically organised and convenient toolset of simple, easy to use functional building blocks usable for the general public (alongside more powerful abstractions for users who have the need and ability to use them).
I see no use for progress...also
A general use case for these functions has been stated repeatedly in this thread: to provide a simple, easy-to-use (but powerful) way of preserving the results of prior or simultaneous computations when writing pipes.
mapEach already exists: evolve
Well, this at least seems to be true. evolve appears to cover the semantics of mapEach, albeit in a secret, undocumented way that probably no one knows to use.
Can we see the type signatures?
Informally, they would look something like this:
also: (a -> b) -> a -> [b, a]
concurrent: [a -> b] -> a -> [b]
mapEach: [a -> b] -> [a] -> [b]
progress: [a -> a_1, ..., a_(n-1) -> a_n] -> a -> [a_1, ..., a_n]
@CrossEye
I think we should try this
Good to hear. For me, they fill a logical gap in Ramda's space and it would be a shame if they weren't added.
I'm quite busy for the foreseeable future, but down the line I can supply a PR if no one else steps up.
I just don't want to end up with 4 (!) similar but subtly different functions (ap, applySpec, "concurrent", juxt). I wouldn't mind deprecating juxt and applySpec in favor of a cleanly implemented "concurrent".
I was wrong, scan is the right function for implementing "progress", not [scan1].
While I understand the idea of "progress", I can't imagine ever using it. Can you provide an example demonstrating its usefulness?
juxt should be deprecated in either case. It's poorly defined and unidiomatic to Ramda.
I don't think ap and concurrent (let's just call it each from now on) are that similar. In fact, they are exactly as different as xprod and zip.
I don't have a problem with applySpec per se, but I would say there appears to be a general fault line of ambiguity running through this library when it comes to how and whether functions pull double duty with objects and arrays. Neither evolve nor applySpec is documented to do anything with arrays, with only objects in their type signatures. Yet they both secretly can, with one coercing the array to an object while the other returns an array. There appears to be no real rhyme or reason to any of this.
Applying that topic back to this thread, I feel it would be a step in the right direction if applySpec and evolve stuck to doing object things (as advertised) and choked when given arrays. I find mapEach extremely useful and usable, and I believe it deserves to be its own function, rather than hidden as a side effect/hack of another, less well-behaved function. Perhaps we could even add a value constraint (à la dependent typing) and make it throw an error when given arrays of unequal length.
As regards a concrete use case for progress, admittedly I don't have one. I believe it's a pattern that becomes relevant in more complex problem spaces, ones where you're performing a series of lossy transformations and need to simulate rolling back to previous states. Perhaps, for example, if you're traversing a complex data structure that runs down a blind alley and you need to roll back and use a different strategy. I imagine if I wanted to write an entire application in a pipe, this would be a function that would empower me to do it. But this is just speculation. Does it have applications in everyday cases that each/also/mapEach don't? I don't know.
If you see no concrete use-case for "progress", then I see no reason to add it to the library.
However, feel free to add it to [Cookbook]; name idea: steps.
I prefer documenting array support in applySpec (#2396) and evolve over adding more functions.
However, I would prefer applySpec (and others) to not deal with variadic functions (#2252, #2007, #1961, #2859).
~/JS/ramda (master $=)
$ ./repl
> applySpec ([inc, dec]) (1)
[ 2, 0 ]
const each = fns => x => ap(fns, [x])
_I don't think ap and [each] are that similar. In fact, they are exactly as different as xprod and zip._
const each = fns => x => ap(fns, [x])
What I'm saying is, ap and each are functionally as distant as xprod vs zip: N -> N × M vs N -> N mappings. I don't see that ap blocks each any more than xprod blocks zip. If anything, each, like zip, is more useful than its N×M compatriot. juxt, however, should be put out to pasture.
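For instance, a quick sketch using the each above (inc and dec are ordinary Ramda functions):
ap([inc, dec], [10, 20]) //=> [11, 21, 9, 19]  (every function × every value, like xprod)
each([inc, dec])(10) //=> [11, 9]  (every function × one value, like zip)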
applySpec ([inc, dec]) (1)
applySpec ([inc, dec]) (1) returns {"0": 2, "1": 0} in the https://ramdajs.com/repl (0.26.1)
I prefer documenting array support... evolve over adding more functions
But it's still a strictly array-based functional pattern (and a useful one at that) that's randomly hidden in the _Object_ section.
Why should evolve return an array instead of coercing it into an object like the other _Object_ functions? Why should omit/remove, insert/assoc, map/mapObjIndexed, etc. exist as pairs, but evolve not be split into _List_ and _Object_ counterparts in a similar way?
It's more logical that evolve have its pathological behaviour corrected and mapEach be introduced into the _List_ section, where people are more likely to find it and actually use it.
I also don't think "no more, we're full" is a good argument. This is a functional library. If something has a logical place it should be added, and less well-defined functions culled (or corrected) if necessary.
each: That's useful regardless of whether it's a sync or async (promise- or single-value-stream-returning function) pipeline. To collect each's fanning out there are the well-known counterparts R.apply, R.apply ∘ Promise.all, R.apply ∘ Bacon.combineAsArray, R.apply ∘ RxJs.combineLatest.
Edit: mapEach = R.zipWith(R.call)
I regard also as useful, but I would not use it very much since I understand it as a specialisation of each: also = fn => each([fn, R.identity])
I cannot imagine a use case for steps.
@semmel as of master, each = applySpec or juxt, and mapEach = evolve.
@KayaLuken would you also prefer deprecating map in favor of o, mapObjIndexed, mapArray, mapFL, (future) mapIter and mapMap? If not, what's the difference?
I'd actually prefer update to be merged into assoc; the other functions you mentioned are actually different.
It's true that generic functions don't fit well into exclusive categories.
@GingerPlusPlus Yes, OK. But in my experience with Ramda I came to the conclusion not to rely on undocumented behaviour, since the implementation may change even faster than functions become deprecated. So if the documentation of evolve is updated to list arrays (mhh, I guess it's 'Lists' with Ramda) I am all for it. evolve is also a quite nice name!
Regarding juxt: apart from the fact that its documentation is insulting (Edit: OK, maybe I can wrap my head around that name) and its name is confusing ("Juxtaposition is the fact of two things being seen or placed close together with contrasting effect" WTF?), it seems that it just does not like variadic functions very much.
each([Math.min, Math.max])(4) //-> [ 4, 4 ]
R.juxt([Math.min, Math.max])(4) //-> [Function]
// or
var addBothOrIncBy10 = (a, b) => a + (b || 10);
each([addBothOrIncBy10, addBothOrIncBy10])(7) //-> [ 17, 17 ]
R.juxt([addBothOrIncBy10, addBothOrIncBy10])(7) //-> [Function]
Otherwise it seems to do the trick too.
@semmel:
Ramda generally uses the reported arities of functions supplied to it to create a function of the appropriate arity. This allows, for instance, juxt ([add, subtract]) (5) (3), auto-currying the resulting function. So this variant would work: const addBothOrIncBy10 = (a, b = 10) => a + b. If a function misreports its length, this could cause problems. But this is not limited to juxt; it could happen in many places.
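For instance, a quick check of that behaviour (assuming Ramda's arity-based currying as described):
juxt([add, subtract])(5)(3) //=> [8, 2]
const addBothOrIncBy10 = (a, b = 10) => a + b // reported length is 1
juxt([addBothOrIncBy10, addBothOrIncBy10])(7) //=> [17, 17]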
Its documentation is... let's say underwhelming, I'm afraid. Any suggestions?
I just don't see why juxt needs to be kept around at all. It's like an unhygienic, unfunctional middle ground between each and ap.
Documentation isn't the problem here; its behaviour is inherently more complex than each/ap, which are arguably _more_ powerful.
For example, I can't do something like juxt([add, applyTo])(5), because it gives me a function that always crashes. I have to pointlessly worry about how separate functions relate to each other.
With each, I just get [add(5), applyTo(5)].
With juxt, I have to add conditions to check whether it has returned an array or a function, because, for some reason, it was deemed acceptable for a function in a _functional_ library to have multiple return types. Again, what's the advantage of this?
The fact that juxt wasn't even written carefully enough to allow currying only bolsters my feeling that there was a lack of thought and quality control in allowing it in the first place.
Accepting and returning variadic functions is confusing (#2252, #2007, #1961, #2859) and doesn't really help with composing, because most Ramda functions take only 1 "input" argument. This behavior should be removed from most functions (curryN? and constructN? are obvious exceptions).
would you also prefer deprecating map in favor of o, mapObjIndexed, mapArray, mapFL, (future) mapIter and mapMap? If not, what's the difference?
I don't know what these functions are, so I can't answer you on that. If you have some point of your own you want to make, feel free.
For me, I just want to have consistent behaviour amongst the _List_ and _Object_ functions; that is, functions that can take and return lists should always be in the _List_ section, and functions that can take and return objects should always be in the _Object_ section. The Principle of Least Astonishment.
evolve, and apparently the "improved" version of applySpec, fail this simple standard, and I feel this inconsistency hurts usability. I feel that the list manipulation of a documented evolve will still not be used as much as a separate each function, as its list usage is inherently less predictable to an end user.
It doesn't make sense to me that _Object_ functions can be more general than _List_ functions or vice versa. For me, they should be kept separate.
I think generality should actually be based on some theoretical foundation, like chain, ap, and map being able to take functions or lists.
API surface area should not be the priority. API _predictability_ should be.
@KayaLuken
would you also prefer deprecating map in favor of o, mapObjIndexed, mapArray, mapFL, (future) mapIter and mapMap? If not, what's the difference?
I don't know what these functions are so I can't answer you on that
R.map works on many things:
Ramda provides suitable map implementations for Array and Object, so this function may be applied to [1, 2, 3] or {x: 1, y: 2, z: 3}. Dispatches to the map method of the second argument, if present.
The other maps would all be necessary if that were not the case and one enforced the kind of strict type siloing you suggest for List and Object.
Wouldn't you at least agree that it's useful to have functions (like concat) that work on both Lists and Strings?
Not unless those types share a general type. I don't know of _List_ and _Object_ being part of some general type.
Consistency is a key point for me. applySpec has apparently been changed to List -> List, but that doesn't seem to be part of a more general strategy, and indeed by itself makes the API _less_ consistent, with merge still coercing to an object while existing alongside concat, and so forth.
In any case, I think multiple tags should be introduced for functions that have multiple types. concat should be filterable under _List_ or _String_, mergeAll should be filterable under _Object_ and _List_, and so forth.
If applySpec and evolve were at least fully documented as equal parts _List_ and _Object_ functions (tags, examples, type signatures), then, awful names aside, that would at least be an improvement.
@KayaLuken:
For me, I just want to have consistent behaviour amongst the List and Object functions; that is, functions that can take and return lists should always be in the List section, and functions that can take and return objects should always be in the Object section. The Principle of Least Astonishment.
Agreed. If our documentation template doesn't handle multiple categories, we should update it to do so. And then we should retag with multiple categories where appropriate.
Not unless those types share a general type. I don't know of List and Object being part of some general type.
I think this is missing the point. map itself defines something like a type. In Haskell, this would be done by typeclasses, with some laws that cannot necessarily be enforced within the language, but which are what define map.
So if for some type T, parameterized over another type, we can define a function
map :: (a -> b) -> T a -> T b
such that these two things are guaranteed:
// for all x in T a
map (identity) (x) ≍ x
// for all x in T a; for all f :: b -> c; for all g :: a -> b
compose (map (f), map (g)) (x) ≍ map (compose (f, g)) (x),
then we want our map to be able to support that. Thus map works for several types Ramda supports: lists, objects, and functions, as well as for any Fantasy-Land Functor.
But that's all it takes: a type that can be parameterized over another type, and a function that fulfills the identity and composition laws. Arrays meet these criteria, as do Objects, Functions, Maybes, Eithers, Futures, and many others.
But we can't, say, define map over strings. Although they share some of the same notion of sequential values with lists, they are not parameterized over an arbitrary type; they hold only characters. (This doesn't mean that Ramda actually throws an error. But the garbage that comes in gives you back some garbage you might not want: map(x => 42, 'abc') //=> [42, 42, 42].) Note, though, that strings and lists do overlap in the use of concat; the only relevant law here is that concat (a, concat (b, c)) ≍ concat (concat (a, b), c).
The point of seeing a higher-level abstraction shared by multiple types is not that they share some sort of OOP hierarchy but that there is some useful, well-defined operation that applies to each of them. We don't always have a formalized law we can actually point to, but when we see, for instance, that evolve is much more useful, especially given its recursive nature, if it does the appropriate thing for arrays as well as for plain objects, we extend it that way.
The trick is that there is a common abstraction shared by different types. It's for this reason that Ramda has long resisted requests for a foldObject function, or worse extending our reduce to apply to plain objects. We see objects as fundamentally unordered, but a fold requires a standard iteration order. But many things can be mapped with a lawful map function. And we can evolve an array just as easily as we can an object.
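For concreteness, the two laws can be checked for Ramda's map on arrays directly in the REPL:
const xs = [1, 2, 3]
equals(map(identity, xs), xs) //=> true
equals(compose(map(inc), map(negate))(xs), map(compose(inc, negate), xs)) //=> true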
@GingerPlusPlus
It seems that as of 0.27.0, applySpec does not cover the very useful behaviour of
const each = curry((fns, x) => map(applyTo(x), fns))
Is it actually going to be changed? If not, then I would still very much like to have this function added in, preferably with the deplorable juxt going in the other direction.
@CrossEye
Is there going to be any movement on improving the documentation by adding multiple tags for functions and adding a list -> list example for evolve?
I would be happy to help, but I don't have any access for doing this.
@CrossEye
If applySpec isn't going to have its behaviour changed, then I would also like to make a PR for the following family of functions:
const each = curry((fns, x) => map(applyTo(x), fns))
and supplementary convenience functions
const also = curry((fn, x) => [fn(x), x])
const fst = curry((fn, xs) => [fn(prop(0, xs)), prop(1, xs)])
const snd = curry((fn, xs) => [prop(0, xs), fn(prop(1, xs))])
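For illustration (hypothetical usage, assuming the sketches above):
also(toUpper, 'foo') // ['FOO', 'foo']
fst(inc, [1, 'a']) // [2, 'a']
snd(toUpper, [1, 'a']) // [1, 'A']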
Please let me know.
@LukenAgile42 (and are you also @KayaLuken or is the name just coincidence?):
There is no special access needed; the best way of getting such things done is by making a Pull Request with your proposed changes. For documentation changes, it's a PR here for any changes to the information about specific functions, or a PR in the documentation project for the overall design of the documentation.
I would like to see applySpec gain that behavior. We do this for assocPath for instance, handling data structures that are made up of arbitrary nested arrays and objects. I think it would make sense here to handle arrays as well.