Original ref: #1725. I'd like to raise the question of whether there are standard terms for describing how a list of functions can be applied to a list of values. It is hard to express some kinds of function application in plain English.
Showing symbolic output for functions that perform function applications could be another way to explain behavior when plain English is not enough. I am aware of juxt, converge, ap, and scan, whose documentation could benefit from this.
See this Mathematica page for inspiring examples of what I mean.
As one example, Mathematica's FoldList is equivalent to Ramda's scan. Its symbolic output expresses how it applies the function across the list values:
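For comparison, Ramda's scan has the same shape in the notation explored below (a sketch; f, a, b, c are placeholders):

```js
// the shape FoldList expresses, written with Ramda's scan:
R.scan(f, a, [b, c]); //=> [a, f(a, b), f(f(a, b), c)]
```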
While I'm loath to add anything to our bloated comment structure (to my mind anything more than the sigs starts to feel bloated, so don't mind me), this seems like a good idea. I'm sure there are others that would benefit as well... compose, pipe, and useWith leap to mind immediately.
I would assume that we could handle this with simply another JSDoc tag and a tweak to the template. @thurt, would you be interested in creating a PR? This would actually involve a PR in the code to add the actual @symb (or whatever) tags, and a second one in the docs project to show these in some reasonable way in the documentation.
While I'm loath to add anything to our bloated comment structure (to my mind anything more than the sigs starts to feel bloated, so don't mind me)
To reduce bloat, it may be possible to reduce or eliminate some text explanations with these. I think their purpose is simply to show the internal behavior of the function applications and express how the function's name correlates with that.
I can look into getting a PR started more seriously over the weekend. Also interested in hearing any other suggestions/ideas on these topics.
This is a really good idea. I'm trying to work through some examples. Let's take compose:
# current type signature, alphabet soup:
((y → z), (x → y), …, (o → p), ((a, b, …, n) → o)) → ((a, b, …, n) → z)
# let's make it a little more clear with this new notation
> R.compose(f, g, h)(a)
< f(g(h(a)))
Or ap:
# current signature:
[f] → [a] → [f a]
# I actually think it should have the following signature:
[a -> b] -> [a] -> [b]
# new notation
> R.ap([f, g], [a, b])
< [f a, f b, g a, g b]
Or unfold:
# current signature:
(a → [b]) → * → [b]
# new notation:
> R.unfold(f, a)
< [f a, f (f a), f (f (f a)), f (f (f (f a)))]
This seems like an intuitive way to express the behavior of these functions beyond just the examples.
@benperez:
I wouldn't want it to replace the signatures. But it is a fantastic complement. And we could use several examples to easily show variants when necessary. Part of the reason compose has such a nasty signature is the polyadic return. This would help clarify those. As well as your example above, we could also include:
> R.compose(f, g, h, i)(a, b, c)
< f(g(h(i(a, b, c))))
# I actually think [ap] should have the following signature:
[a -> b] -> [a] -> [b]
Yes! It should.
@thurt: I wouldn't want to lose much of the text, with examples like this to make it clearer, there are probably times where it can be significantly simplified.
@CrossEye: I don't think we should remove the signatures either, just augment them with this cool new notation.
Here's what I came up with so far, I'll add to it as I get more time to go through all the functions:
| Function | Input | Output |
| --- | --- | --- |
| ap | R.ap([f, g], [a, b]) | [f(a), f(b), g(a), g(b)] |
| apply | R.apply(f, [a, b, c]) | f(a, b, c) |
| applySpec | R.applySpec({foo: f, bar: {baz: g}})(a, b) | {foo: f(a, b), bar: {baz: g(a, b)}} |
| call | R.call(f)(a, b) | f(a, b) |
| compose | R.compose(f, g, h)(a) | f(g(h(a))) |
| composeK | R.composeK(f, g, h)(a) | R.chain(f, R.chain(g, h(a))) |
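For concreteness, here is a quick runnable check of two of those rows (f and g are placeholder functions I picked; not from the docs):

```js
const R = require('ramda');

const f = x => x + 1;
const g = x => x * 2;

R.ap([f, g], [10, 20]); //=> [11, 21, 20, 40], i.e. [f(10), f(20), g(10), g(20)]
R.compose(f, g)(5);     //=> 11, i.e. f(g(5))
```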
This has also helped me spot a few errors in our signatures and examples.
We should write f(a) rather than f a to avoid confusion with Hindley–Milner notation. [f(a), f(b), g(a), g(b)] has the advantage of being a valid JavaScript expression. :)
Really like the way you are laying things out, @benperez.
| Function | Input | Output |
| --- | --- | --- |
| juxt | R.juxt([f, g])(a, b) | [f(a, b), g(a, b)] |
| scan | R.scan(f, a, [b, c]) | [a, f(a, b), f(f(a, b), c)] |
| zipWith | R.zipWith(f, [a1, a2], [b1, b2]) | [f(a1, b1), f(a2, b2)] |
| converge | R.converge(f, [g, h])(a, b) | f(g(a, b), h(a, b)) |
| useWith | R.useWith(f, [g, h])(a, b) | f(g(a), h(b)) |
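To make those last two rows concrete, here are two illustrative instances (average and sumOfSquares are just names I made up):

```js
const R = require('ramda');

// converge(f, [g, h])(a) == f(g(a), h(a))
const average = R.converge(R.divide, [R.sum, R.length]);
average([1, 2, 3, 4]); //=> R.divide(10, 4) = 2.5

// useWith(f, [g, h])(a, b) == f(g(a), h(b))
const sumOfSquares = R.useWith(R.add, [x => x * x, y => y * y]);
sumOfSquares(3, 4); //=> R.add(9, 16) = 25
```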
It will be good to get one exhaustive list together that everyone agrees on.
Actually, I don't think there are too many more of these types of higher-order functions :question:
An aside: I am also interested in evaluating relationships between these functions. For instance, with this notation it becomes easier to see how I might decompose a function into related functions.
Ex. converge can be related to juxt like
converge(f, [g, h])(a, b) == juxt([g, h])(a, b) + apply(f)
I try the same thinking on useWith and find that there may be a missing function:
useWith(f, [g, h])(a, b) == ??? + apply(f)
where
> ???([f, g], [a, b])
< [f(a), g(b)]
Is there a Ramda function equivalent to ??? that I am missing?
I don't think Ramda has a single such function, but zipWith(call) would fit the bill.
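For example (a small sketch; R.call(fn, ...args) simply invokes fn with the remaining arguments):

```js
const R = require('ramda');

// ???([f, g], [a, b]) == [f(a), g(b)]
R.zipWith(R.call, [Math.sqrt, Math.abs], [16, -3]); //=> [4, 3]
```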
@CrossEye thank you! great idea
I noticed that ??? is a subset of ap, so you could define it in terms of ap.
Also, ap can be represented by a matrix. In that representation, ??? is actually the diagonal.
```
     a  b  c
f    0  _  _
g    _  4  _
h    _  _  8

> ap([f, g, h], [a, b, c])
< [f(a), _, _, _, g(b), _, _, _, h(c)]
```
Although the matrix-like nature of ap for lists seems to fall out of the definition, note that ap is significantly more generic. I don't think using this feature as fundamental is likely to lead to significant insights.
As an argument for simplicity, I would rather use ap over zipWith(call). I think it is a more intuitive path to start with all possibilities and reduce from there using something like filter. A matrix is a great heuristic for many problems, and it happens to be the best way to visualize all possible crosses between two lists.
Some ideas in regards to these notations:
1) I know adding more things to the documentation is a _big_ concern.
2) In regards to the notation, is it worth writing dot, dot, dot to indicate that lists/arguments do not have to be fixed length? R.ap([f, g, ...], [a, b, ...]) I would say no.
3) I would like to be more clear about which functions we are talking about, but this has been a little difficult. A contra example could be all, which naturally depends on the specifics of the input values and the predicate function to determine output. Another contra example could be the curry functions, which depend on what function f is.
| Function | Input | Output |
| --- | --- | --- |
| adjust | R.adjust(f, x, [a, b, c]) where x = 1 | [a, f(b), c] |
| unapply | R.unapply(f)(a, b) | f([a, b]) |
| unary | R.unary(f)(a, b, c) | f(a) |
| binary | R.binary(f)(a, b, c) | f(a, b) |
That's all for now :racehorse: :dash:
Damn, lost another mostly complete entry to the bit-bucket. Let me try again...
As an argument for simplicity, I would rather use ap over zipWith(call). [ ... ] A matrix is a great heuristic for many problems and it happens to be the best way to visualize all possible crosses between two lists.
My point was that a matrix is not a good interpretation of ap. Although the matrix behavior happens to fall out of the definition when you call it with a list of functions and a list of values, that is a somewhat unusual case:
Future.of(add(5)).ap(Future.of(10)).fork(
x => console.log('rejected with ' + x),
x => console.log('resolved with ' + x)
); //=> logs "resolved with 15"
Just(multiply(6)).ap(Just(7)); //=> Just(42)
Just(multiply(6)).ap(Nothing()); //=> Nothing()
Right(add(1)).ap(Right(5)); //=> Right(6)
Right(add(1)).ap(Left('oops')); //=> Left('oops')
Although it's not as immediate, I would suggest starting from xprod if you're looking to emphasize the matrix nature of a relationship.
@thurt:
1) I know adding more things to the documentation is a _big_ concern.
I don't think it is. I think I'm the curmudgeon who doesn't really like all the JSDoc tags cluttering up the source code. Adding things to the output documentation is not a concern at all, not if it helps make things clearer for some users. Perhaps some day I'll take a stab at moving the documentation somewhere outside the source files, but it's not high on my priority list.
2) In regards to the notation, is it worth writing dot, dot, dot to indicate that lists/arguments do not have to be fixed length?
R.ap([f, g, ...], [a, b, ...]) I would say no.
I agree. It's not worth it.
3) I would like to be more clear about which functions we are talking about but this has been a little difficult.
To me, it includes any of them for which this notation would add clarity. If using this notation makes it easier to understand what the function does, then it's worth including.
Your suggestions make a lot of sense! I will keep looking through the documentation for functions to add to our list. Ramda has a lot of them that I haven't explored yet.
Also, thanks for xprod. I haven't seen that one before. One neat thing I noticed is that xprod can be considered a special form of ap http://goo.gl/uE9TeT
xprod([a, b], [c, d]) == ap(ap([concat], [[a], [b]]), [[c], [d]])
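A quick check of that equivalence with concrete values:

```js
const R = require('ramda');

R.xprod([1, 2], [3, 4]);
//=> [[1, 3], [1, 4], [2, 3], [2, 4]]

R.ap(R.ap([R.concat], [[1], [2]]), [[3], [4]]);
//=> [[1, 3], [1, 4], [2, 3], [2, 4]]
```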
So if I use xprod then I am stuck with concat. But if I use ap then I could vary the function application. Here's an example http://goo.gl/fJXuZo
```js
ap(ap([
  /* f */ pipe(String, replace(/\d+/, '$&+'), concat),
  /* g */ add
], [1, 2]))([3, 4])

/*
ap one:
         1     2
      -----------
f      f1    f2
g      g1    g2

ap two:
             3     4     3     4
      ----------------------------------
[f1, f2]   f1,3  f1,4  f2,3  f2,4
[g1, g2]   g1,3  g1,4  g2,3  g2,4
*/
```
I'm just exploring interesting relationships here, so I hope it can give some insights.
I can understand your concern about the ap behavior for lists being confused with the ap behavior for applicative functors, but isn't that the price paid for having polymorphic functions?
I'm just exploring interesting relationships here, so I hope it can give some insights.
That's fine. If you're thinking of including this in the output documentation, that would scare me. I like the simple example from above as you and @benperez have promoted.
I can understand your concern about the ap behavior for lists being confused with the ap behavior for applicative functors, but isn't that the price paid for having polymorphic functions?
It is very much the same behavior, in the same way that Ramda's map constitutes Functor's map for lists. Even there, some people who are used to Array.prototype.map don't see the connection to fmap for an arbitrary container, but that conceptual jump is a lot simpler than the one between the matrix-like "apply all functions to all values and return a flattened list" and "applies the wrapped function to the supplied wrapped value", which is at the heart of ap.
| Function | Input | Output |
| --- | --- | --- |
| bind | R.bind(f, obj) | f.bind(obj) |
| flip | R.flip(f)(a, b, c) | f(b, a, c) |
| forEach | R.forEach(f, [a, b, c]) | [a, b, c] |
| identity | R.identity(a) | a |
| invoker | R.invoker(2, 'method_name')(a, b, c) | c.method_name(a, b) |
These look good, although I think I would skip forEach, the red-headed step-child of our API, as it's really only there to help people perform side-effects. And this format doesn't really help explain it, to my mind.
| Function | Input | Output |
| --- | --- | --- |
| map | R.map(f, [a, b])<br>R.map(f, { a: _, b: _ })<br>R.map(f, functor_obj) | [f(a), f(b)]<br>{ a: f(_), b: f(_) }<br>functor_obj.map(f) |
| merge | R.merge({ a: _, b: _ }, { c: _ }) | { a: _, b: _, c: _ } |
| mergeAll | R.mergeAll([{ a: _ }, { b: _ }, { c: _ }]) | { a: _, b: _, c: _ } |
| mergeWithKey | R.mergeWithKey(f, { a: _, values: x }, { b: _, values: y }) | { a: _, b: _, values: f('values', x, y) } |
| nAry | R.nAry(x, f)(a, b, c) | f(a) when x = 1<br>f(a, b) when x = 2<br>f(a, b, c) when x = 3 |
These two don't fit nicely in a table:
// mapAccum(f, acc, [a, b])
// * f must return an array of length 2 like [acc, outputVal]
[
f(f(acc, a)[0], b)[0],
[
f(acc, a)[1],
f(f(acc, a)[0], b)[1]
]
]
// mapAccumRight(f, acc, [a, b])
// * f must return an array of length 2 like [acc, outputVal]
[
f(f(acc, b)[0], a)[0],
[
f(acc, b)[1],
f(f(acc, b)[0], a)[1]
]
]
| Function | Input | Output |
| --- | --- | --- |
| nth | R.nth(n, [a, b]) | a when n = -2<br>b when n = -1<br>a when n = 0<br>b when n = 1 |
| nthArg | R.nthArg(n)(a, b) | see nth |
| partial | R.partial(f, [a, b])(c, d) | f(a, b, c, d) |
| partialRight | R.partialRight(f, [a, b])(c, d) | f(c, d, a, b) |
| pipe | R.pipe(f, g, h)(a, b) | h(g(f(a, b))) |
| pipeK | R.pipeK(f, g, h)(a, b) | R.chain(h, R.chain(g, f(a, b))) |
| reduce | R.reduce(f, acc, [a, b]) | f(f(acc, a), b) |
| repeat | R.repeat([a, b], x) | [] when x = 0<br>[[a, b]] when x = 1<br>[[a, b], [a, b]] when x = 2 |
| tap | R.tap(f, a) | a |
Edit note: tap can represent side effects, so it could be considered a single-value version of forEach
forEach(f, [a, b]) == [tap(f, a), tap(f, b)] == map(tap(f), [a, b])
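Concretely, with console.log standing in as the side effect:

```js
const R = require('ramda');

R.forEach(console.log, [1, 2]);    // logs 1, then 2; returns [1, 2]
R.map(R.tap(console.log), [1, 2]); // logs 1, then 2; returns [1, 2]
```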
These are great. I do have a few suggestions:
- For map, an object example would also help: R.map(f, { a: 1, b: 2 }); //=> { a: f(1), b: f(2) }.
- For nth (and nAry and repeat), the examples would be clearer as R.nth(-1, [a, b, c]); //=> b, R.nth(0, [a, b, c]); //=> a, R.nth(1, [a, b, c]); //=> b.
- reduce should probably show [a, b, c], not just [a, b]. Unfortunately, as painful as the output will look, the same is probably true of mapAccum/Right.
- That's a fascinating insight on tap/forEach. I'd never considered it before.
We should figure out a tag name for the docs. Any ideas?
:+1: @CrossEye -- good suggestions. I will have to get back to you later!
Dumping some exploratory notes on mapAccum:
I find mapAccum somewhat difficult to understand, focusing especially on the iterator function, which has an awkward output requirement: [new_acc, new_x]. The documentation's description of "map + reduce" can be broken down by considering the iterator as a _coupling_ of two simpler iterator functions, each of which is "2 values in/1 value out".
A decoupled version of mapAccum could look like
mapAccum(reducingFn, mappingFn, acc, list)
where
reducingFn(acc, x)
returns new_acc
mappingFn(acc, x)
returns new_x
These "reducing-style functions" are probably more familiar/comfortable to many people. There are many existing Ramda functions that follow the same pattern 2 in/1 out.
If I recouple reducingFn and mappingFn, I can see how that would produce the current iterator
fn(acc, x)
returns [new_acc, new_x]
My decoupled version makes the output easier to represent, because you do not have to worry about selecting the right array indices.
[reduce(reducingFn, acc, [a, b]),
[
mappingFn(acc, a),
mappingFn(reducingFn(acc, a), b)
]
]
My decoupled version is mostly already defined by the implementation of mapAccum, which would determine the value that is calculated first and which values are passed into the iterator functions.
In contrast, the advantage of the coupled version is that your iterator implementation gets to choose which value to calculate first, and also whether to use the first calculated value as an input to calculate the second value. There are many possible implementations you could write for the coupled iterator.
Based on these examples, you may see how coupling can give a greater degree of freedom to implementations, while decoupling can improve a function's focus and intent.
These signatures were borrowed directly from Haskell. Your alternative is interesting. I don't have a strong feeling about keeping the existing one or changing it. I'm guessing that the reason for the original API is to keep what happens in an iteration localized to a single function. As you note, it does offer some additional flexibility as well, but the decoupled functions can be much more focused.
I guess the strongest rationale for keeping the current behavior is that users who want the current flexibility can have it, and those who like the proposed simplicity can easily wrap their separate functions into one result. But it's not a particularly compelling argument. If you want to push for a change, feel free to try to convince people. (A separate issue would probably be appropriate.)
Yea, I definitely agree. I see benefits to both versions, so I can't vouch for either one in the context of the Ramda library. I meant to say that breaking it down into the decoupled version is what really helped me, as a new user, to understand it better.
Here's a step-by-step derivation of a Ramda version of my decoupled mapAccum.
I post it to help illustrate relationships to other Ramda functions. I think it is incidentally a great illustration of scan, which shows itself as the base structure in this implementation.
_mapAccum(reducingFn, mappingFn, acc, [a, b])
// start
[reduce(reducingFn, acc, [a,b]),
[
mappingFn(acc, a),
mappingFn(reducingFn(acc, a), b)
]
]
// extract mappingFn -- apply is needed
[...,
map(apply(mappingFn), [[acc, a], [reducingFn(acc, a), b]])
]
// replace map with zipWith, this rearranges the lists -- apply no longer needed
[...,
zipWith(mappingFn, [acc, reducingFn(acc, a)], [a, b])
]
// replace list 1; it approximates scan.
// scan produces one extra element, so it is not exactly the same.
// however, zipWith is truncated by the smaller of the two lists,
// so we don't have to worry about the extra element from scan.
// list 2 is the input list
[...,
zipWith(mappingFn, scan(reducingFn, acc, [a, b]), [a, b])
]
// scan's extra element is actually = final_acc; so replace reduce with last(scan)
[ last(scan(reducingFn, acc, [a, b])),
zipWith(mappingFn, scan(reducingFn, acc, [a, b]), [a, b])
]
// scan could be stored in a variable so it wouldn't actually be calculated twice
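Putting that final step together as a runnable sketch (mapAccumDecoupled is a hypothetical name, not a Ramda function):

```js
const R = require('ramda');

const mapAccumDecoupled = R.curry((reducingFn, mappingFn, acc, list) => {
  // computed once and reused: [acc, r(acc, a), r(r(acc, a), b), ...]
  const accs = R.scan(reducingFn, acc, list);
  return [R.last(accs), R.zipWith(mappingFn, accs, list)];
});

// running sum as the accumulator; each element scaled by the acc seen so far
mapAccumDecoupled(R.add, (acc, x) => acc * x, 0, [1, 2, 3]);
//=> [6, [0, 2, 9]]
```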
| Function | Input | Output |
| --- | --- | --- |
| take | R.take(-1, [a, b])<br>R.take(0, [a, b])<br>R.take(1, [a, b])<br>R.take(2, [a, b])<br>R.take(3, [a, b]) | [a, b] (bug?)<br>[]<br>[a]<br>[a, b]<br>[a, b] |
| times | R.times(f, -1)<br>R.times(f, 0)<br>R.times(f, 1)<br>R.times(f, 2) | Error<br>[]<br>[f(0)]<br>[f(0), f(1)] |
times is equivalent to mapping f over a list of natural numbers:
times(f, 2) == map(f, [0,1])
times(f, 2) == map(f, take(2, [0,1,...Infinity]))
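The finite version of that second line can be written with R.range, which generates the natural numbers directly:

```js
const R = require('ramda');

const f = x => x * 2;
R.times(f, 3);           //=> [0, 2, 4]
R.map(f, R.range(0, 3)); //=> [0, 2, 4]
```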
| Function | Input | Output |
| --- | --- | --- |
| transpose | R.transpose([[a], [b], [c]])<br>R.transpose([[a, b], [c, d]])<br>R.transpose([[a, b], [c]]) | [[a, b, c]]<br>[[a, c], [b, d]]<br>[[a, c], [b]] |
| pluck | R.pluck(0, [[a], [b], [c]])<br>R.pluck(1, [[a], [b], [c]])<br>R.pluck(1, [[a, b], [c]]) | [a, b, c]<br>[undefined, undefined, undefined]<br>[b, undefined] |
We can interpret pluck as a _single element_ of transpose. In other words, pluck(n) returns column n of the input, i.e. element n of the transposed result.
transpose([[a, b], [c, d]]) == [
pluck(0)([[a, b], [c, d]]),
pluck(1)([[a, b], [c, d]])
]
transpose([[a, b], [c, d]]) == juxt(times(pluck, 2))([[a, b], [c, d]])
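Checking that relationship with concrete values:

```js
const R = require('ramda');

const m = [[1, 2], [3, 4]];
R.transpose(m);                 //=> [[1, 3], [2, 4]]
[R.pluck(0, m), R.pluck(1, m)]; //=> [[1, 3], [2, 4]]
R.juxt(R.times(R.pluck, 2))(m); //=> [[1, 3], [2, 4]]
```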
- R.take(-1, [a, b]); //=> [a, b]: Is that a bug? Perhaps. I'm not sure what the best result here would be.
- The take/times note is interesting.
- I don't feel as though that really captures the spirit of pluck. This is a more likely example:
pluck('a', [{a: 1, b: 10}, {a: 3, b: 22}, {a: 7, b:18}]); //=> [1, 3, 7]
That's a good question on R.take(-1, [a, b]); //=> [a, b]. Is that a bug? Perhaps. I'm not sure what the best result here would be.
My initial thought is that take(-1) should throw an error. An alternative behavior could be "if negative, will take that many elements from the end"
I have seen take being used in other FP contexts where it is taking from a lazy evaluated list. In these cases, there is probably no predetermined length property on which to base the alternative behavior, hence I suggest it should error.
I don't feel as though that really captures the spirit of pluck
You're right, I have shown a somewhat fringe comparison with transpose/pluck! I will have to add a case showing the behavior of pluck for an array of objects.
I found a nice comparison showing juxt == reduce(ap). @CrossEye, forgive me if you think this is an unnatural use of ap; however, I haven't found any more suitable way to decompose juxt with other Ramda functions.
// for one input value (a)
juxt([f, g])(a) == ap([f, g], [a])
// for two input values (a, b)
juxt([f, g])(a, b) == ap(ap([f, g], [a]), [b])
// ap is a reducing-style function 2 in/1 out
// You can see above that it begins to mimic the output form of reduce
// So we get
juxt([f, g])(a, b) == reduce(ap, [f, g], map(of, [a, b]))
I have to use map(of) at the last step because ap requires both its input values to be arrays. But that is ultimately an imperative instruction not directly related to the task.
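Here is the decomposition checked with concrete curried functions (note that it only holds under the caveat in the edit note below):

```js
const R = require('ramda');

const f = R.curry((a, b) => a + b);
const g = R.curry((a, b) => a * b);

R.juxt([f, g])(2, 3);               //=> [5, 6]
R.reduce(R.ap, [f, g], [[2], [3]]); //=> [5, 6]  ([[2], [3]] is map(of, [2, 3]) spelled out)
```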
Edit note: I see a problem with the decomposed reduce(ap): it is only valid when all the functions in the list [f, g, ...] are curried to the same arity. That means it is missing the feature whereby juxt returns a function curried to the arity of the highest-arity function in the list.
Instead of using ap as shown above, it would be nice to have a new Ramda function, alt_map. While map applies a single function to a list of values, alt_map would do somewhat the reverse: apply a list of functions to a single value.
see haskell discussion: http://stackoverflow.com/questions/27080626/haskell-apply-single-value-to-a-list-of-functions
see implementation example: http://goo.gl/Gpg7P1
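A minimal sketch of such an alt_map (hypothetical name; not part of Ramda):

```js
const R = require('ramda');

// apply a list of functions to a single value
const altMap = R.curry((fns, x) => R.map(f => f(x), fns));

altMap([Math.sqrt, R.negate], 16); //=> [4, -16]
```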
Sorry for the slow response. Been somewhat ill recently.
I found a nice comparison showing
juxt == reduce(ap). @CrossEye forgive me if you think this is an unnatural use of ap--however I haven't found any more suitable way to decompose juxt with other Ramda functions?
I find such decompositions fascinating when they come across as natural. But there is no reason to assume that every function will have such a decomposition. What would be the decomposition of identity, map, or prop?
juxt, of course, will serve the role of your alt_map, but it's more flexible.
Hey, glad you're feeling better. Thanks for your comments! I've been meaning to get back to these notations.
juxt, of course, will serve the role of your alt_map, but it's more flexible.
I don't think juxt is necessarily more flexible. But looking at this again, they are different enough that I shouldn't claim they are really interchangeable. :frowning:
What would be the decomposition of identity, map, or prop?
Some ideas:
prop is just an alias to a specific syntax, right? prop('color')(myObj) == myObj['color']
I can understand why prop is necessary for point-free style, but there is no way to further decompose it, because the language doesn't have any other syntax available to dereference a property.
I will say the same about identity: only necessary for point-free, and I can't decompose it because there is no other syntax to do so.
However, I think map is different. It represents a sequence of operations that I can express in the language syntax. So a decomposition of Array.prototype.map would look like it did before we had Array.prototype.map: a for loop and manual creation/filling of a new array.
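For example, a decomposition in that generic sense might look like:

```js
// what map looked like before Array.prototype.map:
// a for loop and manual creation/filling of a new array
function map(f, xs) {
  const out = [];
  for (let i = 0; i < xs.length; i += 1) {
    out.push(f(xs[i]));
  }
  return out;
}
```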
| Function | Input | Output |
| --- | --- | --- |
| unfold | R.unfold(f, x) | [f(x)[0], f(f(x)[1])[0], f(f(f(x)[1])[1])[0], ...] |
| update | R.update(1, 11, [0,1,2]) | [0, 11, 2] (from documentation) |
| wrap | R.wrap(f, g)(a, b) | g(f, a, b) |
| xprod | R.xprod([1, 2], [3, 4]) | [[1, 3], [1, 4], [2, 3], [2, 4]] |
| zip | R.zip([1, 2], [3, 4]) | [[1, 3], [2, 4]] |
A few observations:
zipWith is closely related to zip + map (see the sketch after this list)
zip is the diagonal of xprod
transpose also happens to be the same as zip for equal length lists. However, transpose isn't limited to two lists. So transpose of three equal length lists will be the diagonal in 3 dimensions.
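A sketch of the zipWith observation from the list above:

```js
const R = require('ramda');

const f = (a, b) => a + b;
R.zipWith(f, [1, 2], [10, 20]);             //=> [11, 22]
R.map(R.apply(f), R.zip([1, 2], [10, 20])); //=> [11, 22]
```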
@thurt:
prop is just an alias to a specific syntax, right? prop('color')(myObj) == myObj['color']
I can understand why prop is necessary for point-free style, but there is no way to further decompose it, because the language doesn't have any other syntax available to dereference a property.
My understanding is -- at least subtly -- different. The point of this function is not that it enables points-free coding. The point is that it enables _composition_. The property-access operators of the language, either . or [], do not compose under our usual composition. By creating a function that reifies this concept, we can now compose the accessing of properties along with other operations we want to do on our data. Now of course in Ramda, we choose the parameter ordering which makes this best suited to curry down to a useful single-argument function for these compositions, which also helps us make things points-free, but that's much less central than the fact that we _can_ compose this function. It's a very similar process for add and other functions which encapsulate basic bits of syntax.
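A small illustration of what that reification buys us (the data here is made up):

```js
const R = require('ramda');

// .name does not compose, but R.prop('name') does:
const people = [{ name: 'Alice' }, { name: 'Bob' }];
R.map(R.compose(R.toUpper, R.prop('name')), people); //=> ['ALICE', 'BOB']
```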
@thurt:
zip is the diagonal of xprod
I think this slightly misses the mark. And an example like this might show zip better:
R.zip([1, 2, 3], ['a', 'b', 'c']);
//=> [[1, 'a'], [2, 'b'], [3, 'c']]
Your transpose notes are close too, but you might need to talk about unapply(transpose) for the same sort of API.
The point is that it enables composition. The property-access operators of the language, either . or [], do not compose under our usual composition. By creating a function that reifies this concept, we can now compose the accessing of properties along with other operations we want to do on our data.
Ah yea I see what you mean. In other words, "lifting" operators into the function object enables us to write programs primarily as compositions of functions.
I want to clarify my previous statement:
a decomposition of Array.prototype.map would look like it did before we had Array.prototype.map. That means a for loop and manual creation/filling of a new array.
Two definitions for decomposition: One is specific to breaking down by functional decomposition. The other is generic, breaking down a system by whatever means is natural.
The imperative version of map that I described above is a natural predecessor and therefore a decomposition in the generic sense.
Now to address one of your previous comments:
there is no reason to assume that every function will have such a decomposition.
I think it is a good sign that identity/prop/map may not be further (functionally) decomposed. If I assume that the Ramda library has its own hierarchy, then these functions are at the base. Other Ramda functions which can be represented as compositions of the base functions are naturally higher up in the hierarchy.
If a natural hierarchy can be established and expressed, then it becomes a powerful approach for teaching the library to users. I think that is my approach anyways!
My second attempt at juxt...
I created two new functions which I refer to as "missing" Ramda functions.
const maxLength = (fs) => reduce(max, 0, pluck('length', fs))
const applyList = curry((fs, xs) => map(flip(apply)(xs), fs))
const juxt = (fs) => curryN(maxLength(fs), unapply(applyList(fs)))
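Usage would look roughly like this (assuming reduce, max, pluck, curry, map, flip, apply, curryN, and unapply are pulled in from Ramda):

```js
const add2 = curry((a, b) => a + b);
juxt([add2, Math.sqrt])(9, 1); //=> [10, 3] -- curried to arity 2; Math.sqrt ignores the 1
```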
With a GitHub search, I was able to find the same pattern as maxLength in allPass, anyPass, cond, converge, and applySpec. I am guessing the pattern for applyList may also be found to some extent in those functions.
Referring to my previous hierarchy suggestion, graphs like this may be used as documentation/teaching tool.
This is a very interesting, and probably very informative, approach. On the other hand, it seems like quite the overkill for code which at the moment is just implemented as converge(Array.of).
[juxt] is just implemented as converge(Array.of)
Yea I see what you mean on that point. juxt and converge are very very close, and it makes sense to define juxt like that.
At first I thought converge should be definable using juxt. But the curryN state is what really prevents that from being ideal. I can do it but there are two curryN's involved.
So that is the reason to use applyList; it performs the same looping operation of juxt/converge, but its signature expects that the arguments xs are immediately available rather than curried.
const _juxt = (fs) => curryN(maxLength(fs), applyList)
const _converge = curry((f, fs) => curryN(maxLength(fs), pipe(applyList, f)))
const alt_juxt = _converge(identity) // ok
note: I slightly modified applyList for convenience. see link http://goo.gl/2AYWGk
I have compiled and updated these notations here: https://github.com/thurt/ramda-analysis
Feel free to do a PR for any additions/changes or add more suggestions here. @CrossEye I think I got all of your suggestions added but let me know if you see anything else.
I'm sure there are still some things that are missing, but I think we are getting close to complete.
This is definitely looking great!
I don't know if you are looking to publish in this form. I would love to make this a part of our standard doc comments and have these appear on our documentation page. We'd need a new JSDoc tag name and a tweak to our template for that, but that should be easy.
I don't know if I missed these in earlier discussions, but I noticed a few for which I might suggest changes:
| Function | Input | Output |
| --- | --- | --- |
| bind | R.bind(f, o)(a, b) | f.call(o, a, b) |
| call | R.call(f, a, b) | f(a, b) |
| juxt | R.juxt([f, g, h])(a, b) | [f(a, b), g(a, b), h(a, b)] |
| merge | R.merge({ x: 1, y: 2 }, { y: 5, z: 3 }) | { x: 1, y: 5, z: 3 } |
| mergeWithKey | R.mergeWithKey(f, { x: 1, y: 2 }, { y: 5, z: 3 }) | { x: 1, y: f('y', 2, 5), z: 3 } |
| zip | R.zip([a, b, c], [d, e, f]) | [[a, d], [b, e], [c, f]] |
| zipWith | R.zipWith(fn, [a, b, c], [d, e, f]) | [fn(a, d), fn(b, e), fn(c, f)] |
For juxt, zip, and zipWith, my feeling is just that extending by one more element makes things clearer. While R.bind(f, o); //=> f.bind(o) is accurate, it doesn't seem to explain enough; I think this version explains a bit more. call and apply should be parallel; I just felt that apply's form was slightly better. merge should show how shared keys are handled. And I think mergeWithKey was missing the point a bit.
I did raise a PR for these changes.
Makes sense thanks!
And good catch on R.call. I had that one written incorrectly.
I would love to start adding these into a JSDoc tag--@symb sounds good to me. I will be more busy this week on other things but I will try to get started on that in the next few days.
This shows display and positioning of symbolic notations in two example cards. mapAccum is a complicated example whereas adjust is more of a typical case. Are there other ideas for visual presentation before I start a PR?
This looks good to me, @thurt!
Just back from vacation. This looks very good to me too!
Thanks to everyone for all the input and help with getting this merged!
Some of my final thoughts:
The symbols aren't perfect yet but now that there is a system setup in the repo I think it will be much easier for the ramda team to decide how you want to use them going forward.
There are some functions where the symbols provide nearly identical information to the example code; it's kind of like redundant info in a different form. Ideally, both the symbol and the example code would provide distinctly different pieces of information for the user: the symbols would give the generic input/output info, while the examples would give a semi-realistic use case. Admittedly, some example code seems totally sufficient at explaining what the function does, and hence a symbol may not be needed (or vice versa).
While symbols help immensely for complex functions like converge and useWith, I don't think they make as much of an impact for simple functions like unary. So mileage does vary. At the least, the symbols provide another possible option for documenting functions!