Julia: Function chaining

Created on 27 Jan 2014 · 232 Comments · Source: JuliaLang/julia

Would it be possible to allow calling any function on Any, so that the value is passed to the function as the first argument and the arguments in the call are appended after it?
ex.

sum(a::Int, b::Int) = a + b

a = 1
sum(1, 2) # = 3
a.sum(2) # = 3 or
1.sum(2) # = 3

Is it possible to indicate in a deterministic way what a function will return, in order to avoid run-time exceptions?

Most helpful comment

So here is our current list of the various efforts.
I think it is worth people checking these out (ideally before opining, but w/e);
they are all slightly different.
(I am attempting to order them chronologically.)

Packages

Nonpackage Prototypes

Related:


Perhaps this should be edited into one of the top posts.

updated: 2020-04-20

All 232 comments

The . syntax is very useful, so we aren't going to make it just a synonym for function call. I don't understand the advantage of 1.sum(2) over sum(1,2). To me it seems to confuse things.

Is the question about exceptions a separate issue? I think the answer is no, aside from wrapping a function body in try..catch.

The 1.sum(2) example is trivial (I also prefer sum(1,2)), but it's just to demonstrate that a function isn't owned per se by that type; e.g. 1 can be passed to any function whose first parameter is a Real, not just to functions that expect the first parameter to be an Int.

Edit: I might have misunderstood your comment. Dot functions will be useful when applying certain design patterns such as the builder pattern commonly used for configuration. ex.

validate_for(name).required().gt(3) 
# vs 
gt(required(validate_for(name)), 3) 

The exceptions I was referring to are due to functions returning results of non-deterministic type (which is bad practice anyway). An example would be calling a.sum(2).sum(4), where .sum(2) sometimes returns a String instead of an Int but .sum(4) expects an Int. I take it the compiler/runtime is already smart enough to evaluate such circumstances - which would be the same when nesting the function calls, as in sum(sum(1, 2), 4) - but the feature request would require extending that functionality to enforce type constraints on dot functions.
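The failure mode being described can be reproduced today with a type-unstable function; `unstable_sum` below is a made-up name for illustration:

```julia
# Sketch of a function whose return *type* depends on runtime values,
# as described above; `unstable_sum` is hypothetical.
unstable_sum(a, b) = a + b > 4 ? "big" : a + b

unstable_sum(1, 2)   # returns the Int 3
unstable_sum(3, 4)   # returns the String "big"

# Chaining the result into another arithmetic call can therefore fail at
# run time, e.g. unstable_sum(unstable_sum(3, 4), 4) throws a MethodError.
```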

One of the use cases people seem to like is the "fluent interface". It's sometimes nice in OOP APIs when methods return the object, so you can do things like some_obj.move(4, 5).scale(10).display()

For me I think that this is better expressed as function composition, but the |> doesn't work with arguments unless you use anon. functions, e.g. some_obj |> x -> move(x, 4, 5) |> x -> scale(x, 10) |> display, which is pretty ugly.

One option to support this sort of thing would be if |> shoved the LHS as the first argument to the RHS before evaluating, but then it couldn't be implemented as a simple function as it is now.

Another option would be some sort of @composed macro that would add this sort of behavior to the following expression

You could also shift responsibility for supporting this to library designers, where they could define

function move(obj, x, y)
    # move the object
end

move(x, y) = obj -> move(obj, x, y)

so when you don't supply an object it does partial function application (by returning a function of 1 argument) which you could then use inside a normal |> chain.
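The library-designer pattern described above can be sketched end to end; `Point`, `move`, and `scale` here are hypothetical names for illustration:

```julia
# A runnable sketch of the partial-application pattern described above.
struct Point
    x::Float64
    y::Float64
end

move(p::Point, dx, dy) = Point(p.x + dx, p.y + dy)
move(dx, dy) = p -> move(p, dx, dy)      # no object: return a 1-arg closure

scale(p::Point, s) = Point(p.x * s, p.y * s)
scale(s) = p -> scale(p, s)

# The closures then compose with the ordinary |> chain:
p = Point(0.0, 0.0) |> move(4, 5) |> scale(10)   # Point(40.0, 50.0)
```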

Actually, the definition of |> could probably be changed right now to the
behavior you're asking for. I'd be for it.


@ssfrr I like the way you think! I was unaware of the function-composition operator |>. I see there has recently been a similar discussion [https://github.com/JuliaLang/julia/issues/4963].

@kmsquire I like the idea of extending the current function composition to allow you to specify parameters on the calling function, e.g. some_obj |> move(4, 5) |> scale(10) |> display. Native support would mean one less closure, but what @ssfrr suggested is a viable way for now, and as an added benefit it should also be forward-compatible with the extended function-composition functionality if it gets implemented.

Thanks for the prompt responses :)

Actually, @ssfrr was correct--it isn't possible to implement this as a simple function.

What you want are threading macros (ex. http://clojuredocs.org/clojure_core/clojure.core/-%3E). Unfortunately, @->, @->>, and @-?>> are not viable syntax in Julia.

Yeah, I was thinking that infix macros would be a way to implement this. I'm not familiar enough with macros to know what the limitations are.

I think this works for @ssfrr's compose macro:

Edit: This might be a little clearer:

import Base.Meta.isexpr
_ispossiblefn(x) = isa(x, Symbol) || isexpr(x, :call)

function _compose(x)
    if !isa(x, Expr)
        x
    elseif isexpr(x, :call) &&    #
        x.args[1] == :(|>) &&     # check for `expr |> fn`
        length(x.args) == 3 &&    # ==> (|>)(expr, fn)
        _ispossiblefn(x.args[3])  #

        f = _compose(x.args[3])
        arg = _compose(x.args[2])
        if isa(f, Symbol)
            Expr(:call, f, arg) 
        else
            insert!(f.args, 2, arg)
            f
        end
    else
        Expr(x.head, [_compose(y) for y in x.args]...)
    end
end

macro compose(x)
    _compose(x)
end
julia> macroexpand(:(@compose x |> f |> g(1) |> h('a',"B",d |> c(fred |> names))))
:(h(g(f(x),1),'a',"B",c(d,names(fred))))

If we're going to have this |> syntax, I'd certainly be all for making it more useful than it is right now. Using it just to allow putting the function to apply on the right instead of the left has always seemed like a colossal waste of syntax.

+1. It's especially important when you are using Julia for data analysis, where you commonly have data transformation pipelines. In particular, Pandas in Python is convenient to use because you can write things like df.groupby("something").aggregate(sum).std().reset_index(), which is a nightmare to write with the current |> syntax.

:+1: for this.

(I'd already thought of suggesting the use of the .. infix operator for this (obj..move(4,5)..scale(10)..display), but the operator |> will be nice too.)

Another possibility is adding syntactic sugar for currying, like
f(a,~,b) translating to x->f(a,x,b). Then |> could keep its current meaning.

Oooh, that would be a really nice way to turn any expression into a function.

Possibly something like Clojure's anonymous function literals, where #(% + 5) is shorthand for x -> x + 5. This also generalizes to multiple arguments with %1, %2, etc., so #(myfunc(2, %1, 5, %2)) is shorthand for (x, y) -> myfunc(2, x, 5, y).

Aesthetically I don't think that syntax fits very well into otherwise very readable julia, but I like the general idea.

To use my example above (and switching to @malmaud's tilde instead of %), you could do

some_obj |> move(~, 4, 5) |> scale(~, 10) |> display

which looks pretty nice.

This is nice in that it doesn't give the first argument any special treatment. The downside is that used this way we're taking up a symbol.

Perhaps this is another place where you could use a macro, so the substitution only happens within the context of the macro.

We obviously can't do this with ~ since that's already a standard function in Julia. Scala does this with _, which we could also do, but there's a significant problem with figuring out what part of the expression is the anonymous function. For example:

map(f(_,a), v)

Which one does this mean?

map(f(x->x,a), v)
map(x->f(x,a), v)
x->map(f(x,a), v)

They're all valid interpretations. I seem to recall that Scala uses the type signatures of functions to determine this, which strikes me as unfortunate since it means that you can't really parse Scala without knowing the types of everything. We don't want to do that (and couldn't even if we wanted to), so there has to be a purely syntactic rule to determine which meaning is intended.

Right, I see your point on the ambiguity of how far to go out. In Clojure the whole expression is wrapped in #(...) so it's unambiguous.

In Julia, is it idiomatic to use _ as a don't-care value? Like x, _ = somefunc() if somefunc returns two values and you only want the first one?

To solve that I think we'd need macro with an interpolation-like usage:

some_obj |> @$(move($, 4, 5)) |> @$(scale($, 10)) |> display

but again, I think it's getting pretty noisy at that point, and I don't think that @$(move($, 4, 5)) gives us anything over the existing syntax x -> move(x, 4, 5), which is IMO both prettier and more explicit.

I think this would be a good application of an infix macro. As with #4498, if whatever rule defines functions as infix applied to macros as well, we could have a @-> or @|> macro that would have the threading behavior.

Ya, I like the infix macro idea, although a new operator could just be introduced for this use in lieu of having a whole system for inplace macros. For example,
some_obj ||> move($,4,5) ||> scale($, 10) |> disp
or maybe just keep |> but have a rule that
x |> f implicitly transforms into x |> f($):
some_obj |> scale($,10) |> disp

Folks, it all really looks ugly: |>, ||>, etc.
So far I have found Julia's syntax to be so clear that the things discussed above don't look so pretty compared to anything else.

In Scala it's probably the worst thing - they have so many operators like ::, :, <<, >>, +:: and so on - it just makes any code ugly and unreadable for anyone without a few months of experience in using the language.

Sorry to hear you don't like the proposals, Anton. It would be helpful if you made an alternative proposal.

Oh sorry, I am not trying to be unkind. And yes - criticism without proposals
is useless.

Unfortunately I am not a scientist constructing languages so I just do not
know what to propose... well , except making methods optionally owned by
objects as it is in some languages.

I like the phrase "scientist constructing languages" - it sounds much more grandiose than numerical programmers sick of Matlab.

I feel that almost every language has a way to chain functions - either by repeated application of . in OO languages, or special syntax just for that purpose in more functional languages (Haskell, Scala, Mathematica, etc.). Those latter languages also have special syntax for anonymous function arguments, but I don't think Julia is really going to go there.

I'll reiterate support for Spencer's proposal - x |> f(a) gets translated into f(x, a), very analogously to how do blocks work (and it reinforces a common theme that the first argument of a function is privileged in Julia for syntactic-sugar purposes). x |> f is then seen as shorthand for x |> f(). It's simple, doesn't introduce any new operators, handles the vast majority of cases that we want function chaining for, is backwards-compatible, and fits with existing Julia design principles.

I also think that is the best proposal here, main problem being that it seems to preclude defining |> for things like I/O redirection or other custom purposes.

Just to note, . is not a special function chaining syntax, but it happens to work that way if the function on the left returns the object it just modified, which is something that the library developer has to do intentionally.

Analogously, in Julia a library developer can already support chaining with |> by defining their functions of N arguments to return a function of 1 argument when given N-1 arguments, as mentioned here

That would seem to cause problems if you _want_ your function to support a variable number of arguments, however, so having an operator that could perform the argument stuffing would be nice.

@JeffBezanson, it seems that this operator could be implemented if there was a way to do infix macros. Do you know if there's an ideological issue with that, or is just not implemented?

Recently, ~ was special-cased so that it quotes its arguments and calls
the macro @~ by default. |> could be made to do the same thing.

Of course, in a few months, someone will ask for <| to do the same...


right, I definitely wouldn't want this to be a special case. Handling it in your API design is actually not that bad, and even the variable arguments limitation isn't too much of an issue if you have type annotations to disambiguate.

function move(obj::MyType, x, y, args...)
    # do stuff
    obj
end

move(args...) = obj::MyType -> move(obj, args...)

I think this behavior could be handled by a @composable macro that would handle the 2nd declaration.
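A rough sketch of what such a `@composable` macro could look like; the name and implementation are hypothetical, not an existing API:

```julia
# Hypothetical @composable macro: given a function definition whose first
# argument is (optionally) typed, also generate the curried method that
# omits the first argument and returns a 1-argument closure.
macro composable(fdef)
    sig = fdef.args[1]                 # e.g. :(move(obj::MyType, x, y))
    fname = sig.args[1]                # function name
    firstarg = sig.args[2]             # first argument, possibly typed
    obj = firstarg isa Symbol ? firstarg : firstarg.args[1]
    rest = sig.args[3:end]             # remaining argument names
    curried = :($fname($(rest...)) = $obj -> $fname($obj, $(rest...)))
    esc(quote
        $fdef
        $curried
    end)
end

@composable move(obj::String, x, y) = "$obj moved by ($x, $y)"

# move(4, 5) hits the generated 2-argument method and returns a closure,
# so the definition chains with the ordinary |> operator:
"ball" |> move(4, 5)    # "ball moved by (4, 5)"
```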

The infix macro idea is attractive to me in the situation where it would be unified with declaring infix functions, which is discussed in #4498.

Why are Julia's creators so much against allowing objects to contain their own methods? Where could I read more about that decision? Which thoughts and theory are behind it?

@meglio a more useful place for general questions is the mailing list or the StackOverflow julia-lang tag. See Stefan's talk and the archives of the users and dev lists for previous discussions on this topic.

Just chiming in, to me the most intuitive thing is to have some placeholder be replaced by the
value of the previous expression in the sequence of things you're trying to compose, similar to clojure's as-> macro. So this:

@as _ begin
    3+3
    f(_,y)
    g(_) * h(_,z)
end

would be expanded to:

g(f(3+3,y)) * h(f(3+3,y),z)

You can think of the expression on the previous line "dropping down" to fill the underscore hole on the next line.

I started sketching a tiny something like this last quarter in a bout of finals week procrastination.

We could also support a oneliner version using |>:

@as _ 3+3 |> f(_,y) |> g(_) * h(_,z)
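The block form above can be sketched as a small macro; this naive version substitutes textually, so (as noted for the expansion) it duplicates the previous expression at every placeholder rather than binding it once:

```julia
# Recursively substitute every occurrence of placeholder `p` in `ex`
# with the expression `val`.
subst(ex, p, val) = ex == p ? val :
    ex isa Expr ? Expr(ex.head, (subst(a, p, val) for a in ex.args)...) : ex

# Minimal sketch of the @as idea (block form only; hypothetical, not Base).
macro as(p, block)
    exprs = filter(e -> !(e isa LineNumberNode), block.args)
    acc = exprs[1]
    for ex in exprs[2:end]
        acc = subst(ex, p, acc)   # fill the placeholder with the previous step
    end
    esc(acc)
end

f(x, y) = x + y
g(x) = 2x
h(x, z) = x - z

y = 1; z = 2
r = @as _ begin
    3 + 3
    f(_, y)
    g(_) * h(_, z)
end
# expands to g(f(3+3, y)) * h(f(3+3, y), z)
```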

@porterjamesj, I like that idea!

I agree; that is pretty nice, and has an appealing generality.

I like @porterjamesj's idea not only because is a breath of fresh air, but because it seems much more flexible than previous ideas. We're not married to only using the first argument, we have free reign of the choice of intermediate variable, and this also seems like something that we can implement right now without having to add new syntax or special-cases to the language.

Note that in Julia, because we don't do much of the obj.method(args...) pattern, and instead do the method(obj, args...) pattern, we tend not to have methods that return the objects they operate on for the express purpose of method chaining. (Which is what jQuery does, and is fantastic in javascript). So we don't save quite as much typing here, but for the purpose of having "pipes" setup between functions, I think this is really nice.

Given that clojure's -> and ->> are just special cases of the above, and fairly common, we could probably implement those pretty easily too. Although the question of what to call them is a bit tricky. Maybe @threadfirst and @threadlast?

I like the idea of this being a macro too.

Isn't it better if the expansion, following the example, is something like

tmp = 3+3; tmp = f(tmp, y); g(tmp) * h(tmp, z)

to avoid multiple calls to the same operation? (Maybe that was already implicit in @porterjamesj's idea)

Another suggestion: would it be possible for the macro to expand the shortcuts f to f(_) and f(y) to f(_,y)? Maybe it is too much, but I think it would give us the option of using the placeholder only when needed... (the shortcuts must, however, be allowed only on standalone function calls, not on expressions like the g(_) * h(_,z) above)

@cdsousa the point about avoiding multiple calls is a good one. The clojure implementation uses sequential let bindings to achieve this; I'm not sure if we can get away with this though because I don't know enough about the performance of our let.

So is the @as macro using line breaks and => as split points to decide what's the substitution expression and what's getting substituted?

let performance is good; now it can be as fast as a variable assignment when possible, and also pretty fast otherwise.

@ssfrr my toy implementation just filters out all the linebreak-related nodes that the parser inserts (N.B., I don't really understand all of these; it would probably be good to have documentation on them in the manual) and then reduces the substitution over the list of expressions that remains. Using let would be better though, I think.

@cdsousa:

Another suggestion: would it be possible that the macro expands the shortcuts f to f(_) and f(y) to f(_,y)

f to f(_) makes sense to me. For the second, I'm of the opinion that explicitly specifying the location is better, since reasonable people could argue that either f(_,y) or f(y,_) is more natural.

Given that clojure's -> and ->> are just special cases of the above, and fairly common, we could probably implement those pretty easily too. Although the question of what to call them is a bit tricky. Maybe @threadfirst and @threadlast?

I think specifying the location explicitly with f(_,y...) or f(y..., _) keeps the code quite understandable. While the extra syntax (and operators) make sense in Clojure, we don't really have additional operators available, and I think the additional macros would generally make the code less clear.

So is the @as macro using line breaks and => as split points to decide what's the substitution expression and what's getting substituted?

I would think it more natural to use |> as a split point, since it is already used for pipelining

Just so you know, there's an implementation of the threading macro in Lazy.jl, which lets you write, for example:

@>> range() map(x->x^2) filter(iseven)

On the plus side, it doesn't require any language changes, but it gets a bit ugly if you want to use more than one line.

I could also implement @as> in Lazy.jl if there's interest. Lazy.jl now has an @as macro, too.

You can also do something like this (though using a Haskell-like syntax) with Monads.jl (note: it needs to be updated to use current Julia syntax). But I suspect that a specialized version for just argument threading should be able to avoid the performance pitfalls the general approach has.

Lazy.jl looks like a very nice package, and actively maintained. Is there a compelling reason this needs to be in Base?

How will function chaining work with functions returning multiple values?
What would be the result of chaining eg.:

function foo(a,b)
    a+b, a*b   # x,y respectively
end

and bar(x,z,y) = x * z - y be?

Wouldn't it require a syntax like bar(_1,z,_2) ?

Throwing in another example:

data = [2.255, 3.755, 6.888, 7.999, 9.001]

The clean way to write: log(sum(round(data))) is data|>round|>sum|>log
But if we wanted to do a base-2 log, and wanted to round to 3 decimals,
then we can only use the first form:
log(2, sum(round(data, 3)))

But ideally we would like to be able to do:
data|>round(_,3)|>sum|>log(2,_)
(or similar)
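For reference, what that desired chain computes can be written today with parenthesized anonymous functions (shown here in current Julia syntax, where rounding a vector uses broadcasting and a digits keyword):

```julia
data = [2.255, 3.755, 6.888, 7.999, 9.001]

# Each stage is a parenthesized anonymous function so the arrows bind
# only their own stage of the |> chain.
result = data |> (x -> round.(x; digits=3)) |> sum |> (x -> log(2, x))
```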

I have made a prototype for how I suggest it should work.
https://github.com/oxinabox/Pipe.jl

It does not solve @gregid's point, but I am working on that now.
It also does not handle the need to expand the arguments

It is similar to @one-more-minute's Lazy.jl threading macros but keeps the |> symbol for readability (personal preference).

I'll slowly make it into a package, perhaps, at some point

One more option is:

data |>   x -> round(x,2)  |> sum |>  x -> log(2,x)

Although longer than log(2,sum(round(data,2))) this notation sometimes helps readability.

@shashi that is not bad, didn't think of that,
I think generally too verbose to be easily readable

https://github.com/oxinabox/Pipe.jl now solves @gregid's problem.
Though if you ask for both _[1] and _[2], it does this by making multiple calls to the substituted expression,
which I am not certain is the most desirable behaviour.

As an outsider, I think the pipeline operator would benefit from adapting F#'s treatment of it.
Granted, F# has currying, but some magic could perhaps be done on the back end to have it not require that. Like, in the implementation of the operator, and not the core language.

This would make [1:10] |> map(e -> e^2) result in [1, 4, 9, 16, 25, 36, 49, 64, 81, 100].

Looking back, @ssfrr alluded to this, but the obj argument in their example would be automatically given to map as the second argument in my example, thus saving programmers from having to define their functions to support it.
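A sketch of that curried style; `cmap` is a made-up helper standing in for a curried `map`, not an existing Base function:

```julia
# Hypothetical curried-map helper: given only the function, return a
# closure over the collection, so it slots into a |> chain F#-style.
cmap(f) = xs -> map(f, xs)

result = collect(1:10) |> cmap(e -> e^2)
# [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
```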

What do you propose that it mean?

On Jun 5, 2015, at 5:22 PM, H-225 [email protected] wrote:

As an outsider, I think one of the better ways to do this would be to adapt F#'s treatment of it.
Granted, F# has currying, but some magic could perhaps be done on the back end to have it not require that. Like, in the implementation of the operator, and not the core language.

This would make [1:10] |> map(e -> e^2) result in [1, 4, 9, 16, 25, 36, 49, 64, 81, 100].

Personally, I think that it's nice and clear without being too verbose.

Obviously, one could write result = map(sqr, [1:10]), but then why have the pipeline operator at all?
Perhaps there is something I'm missing?


@StefanKarpinski
Basically, have the operator work like either:

  • x |> y(f) = y(x, f)
  • x |> y(f) = y(f, x)

Perhaps have an interface pattern whereby any function to be used with the operator takes the data to operate on as either the first or the last argument, depending on which of the above is selected to be the pattern.
So, for the map function as an example, map would either be map(func, data) or map(data, func).

Is that any clearer?

Lazy.jl looks like a very nice package, and actively maintained. Is there a compelling reason this needs to be in Base?

I think this is the important question here.

The reason this may be desirable in Base is twofold:

1.) We may want to encourage pipelining as being the Julian Way -- arguments can be made that it is more readable
2.) things like Lazy.jl, FunctionalData.jl, and my own Pipe.jl require a macro wrapping the expression they act on -- which makes it less readable.

I feel the answer may lie in having infix macros,
and defining |> as such.

I'm not certain that |> (or its cousin the do block) belongs in core at all.
But the tools don't exist to define them outside of the parser.

The ability to have that sort of pipelining syntax seems very nice. Could just that be added to Base, i.e. the x |> y(f) = y(f, x) part, which Lazy.jl, FunctionalData.jl, and Pipe.jl could use? :+1:

Having looked at code that uses the various implementations of this out in packages, I personally find it unreadable and very much un-Julian. The left-to-right pipeline pun doesn't help readability, it just makes your code stand out as backwards from the rest of the perfectly normal code that uses parentheses for function evaluation. I'd rather discourage a syntax that leads to 2 different styles where code written in either style looks inside-out and backwards relative to code written in the other. Why not just settle on the perfectly good syntax we already have and encourage making things look more uniform?

@tkelman
Personally, I see it from a somewhat utilitarian point of view.
Granted, maybe if you're doing something simple then it isn't necessary, but if you're writing a function that does something fairly complicated or long-winded (data manipulation, for example, off the top of my head), then I think that's where pipeline syntax shines.

I understand what you mean though; it _would_ be more uniform if you had one function call syntax for everything. Personally though, I think it's better to make it easier to write [complicated] code that can be easily understood. Granted, you have to learn the syntax and what it means, but, IMHO, |> is no harder to grasp than how to call a function.

@tkelman I'd look at it from a different point of view. Obviously, there are people who prefer that style of programming. I can see that maybe you'd want to have a consistent style for the source code of Base, but this is only about adding parser support for their preferred style of programming _their_ Julia applications. Do Julians really want to try to dictate or otherwise stifle something other people find beneficial?
I've found pipelining stuff together very useful in Unix, so even though I've never used a programming language that enabled it in the language, I'd at least give it the benefit of the doubt.

We do have |> as a function piping operator, but there are implementation limitations to how it's currently done that make it pretty slow at the moment.

Piping is great in a unix shell where everything takes text in and text out. With more complicated types and multiple inputs and outputs, it's not as clear-cut. So we have two syntaxes, but one makes a lot less sense in the MIMO case. Parser support for alternate styles of programming or DSL's is not usually necessary since we have powerful macros.

OK, thanks, I was going by @oxinabox's comment:

But the tools don't exist to define them outside of the parser.

Is it understood what would be done to remove the implementation limitations you referred to?

Some of the earlier suggestions could potentially be implemented by making |> parse its arguments as a macro instead of as a function. The former command-object piping meaning of |> has been deprecated, so this might actually be freed up to do something different with, come 0.5-dev.

However this choice reminds me quite a bit of the special parsing of ~ which I feel is a mistake for reasons I've stated elsewhere.

Parsing ~ is just insane; it's a function in Base. Using _, _1, _2 seems _more_ reasonable (esp. if you raise an error when these variables are defined elsewhere in scope). Still, until we have more efficient anonymous functions, this seems like it's not going to work...

implemented by making |> parse its arguments as a macro instead of as a function

Unless you do that!

Parsing ~ is just insane, it's a function in base

It's a unary operator for the bitwise version. Infix binary ~ parses as a macro, ref https://github.com/JuliaLang/julia/issues/4882, which I think is a strange use of an ascii operator (https://github.com/JuliaLang/julia/pull/11102#issuecomment-98477891).

@tkelman

So we have two syntaxes, but one makes a lot less sense in the MIMO case.

Three syntaxes, kind of:
piping, normal function calls, and do-blocks.
Arguably even four, since macros use a different convention as well.


For me, having read order (i.e. left to right) match application order makes SISO function chains a lot clearer.

I do a lot of code like the following (using Iterators.jl and Pipe.jl):

  • loaddata(filename) |> filter(s-> 2<=length(s)<=15, _) |> take!(150,_) |> map(eval_embedding, _)
  • results |> get_error_rate(desired_results, _) |> round(_,2)

For SISO it's better (my personal preference); for MIMO it is not.

Julia seems to have already settled towards there being multiple correct ways to do things.
Which I am not 100% sure is a good thing.

As I said I would kind of like Pipe and Do blocks moved out of the main language.

Do-blocks have quite a few very helpful use cases, but it has annoyed me a little that they have to use the first input as the function, doesn't always fit in quite right with the multiple dispatch philosophy (and neither would pandas/D style UFCS with postfix data.map(f).sum(), I know it's popular but I don't think it can be combined effectively with multiple dispatch).

Piping can probably be deprecated quite soon, and left to packages to use in DSL's like your Pipe.jl.

Julia seems to have already settled towards there being multiple correct ways to do things.
Which I am not 100% sure is a good thing.

It's related to the question of whether or not we can rigorously enforce a community-wide style guide. So far we haven't done much here, but for long-term package interoperability, consistency, and readability I think this will become increasingly important as the community grows. If you're the only person who will ever read your code, go nuts and do whatever you want. If not though, there's value in trading off slightly worse (in your own opinion) readability for the sake of uniformity.

@tkelman @oxinabox
I have yet to find a clear reason why it should not be included in the language, or indeed in the "core" packages [e.g. Base].
Personally, I think making |> a macro might be the answer.
Something _like_ this perhaps? (I'm not a master Julia programmer!)

macro (|>) (x, y::Union(Symbol, Expr))
    if isa(y, Symbol)
        y = Expr(:call, y) # assumes y is callable
    end
    push!(y.args, x)
    return eval(y)
end

Under Julia v0.3.9, I was unable to define it twice -- once with a symbol, and once with an expression; my [limited] understanding of Union is that there is a performance hit from using it, so I'm guessing that would be something to rectify in my toy example code.

Of course, there is a problem with the use syntax for this.
For example, to run the equivalent of log(2, 10), you have to write @|> 10 log(2), which isn't desirable here.
My understanding is that you'd have to be able to somehow mark functions/macros as "infixable", as it were, such that you could then write it thus: 10 |> log(2). (Correct me if I'm wrong!)
Contrived example, I know. I can't think of a good one right now! =)

It's also worth pointing out one area I have not covered in my example...
So e.g:

julia> for e in ([1:10], [11:20] |> zip) println(e) end
(1,11)
(2,12)
(3,13)
(4,14)
(5,15)
(6,16)
(7,17)
(8,18)
(9,19)
(10,20)

Again - contrived example, but hopefully you get the point!
I did some fiddling, but as of writing this I was unable to fathom how to implement that, myself.

On Jun 9, 2015, at 9:37 PM, H-225 [email protected] wrote:

I have yet to find a clear reason why it should not be included in the language

This is the wrong mental stance for programming language design. The question must be "why?" rather than "why not?" Every feature needs a compelling reason for its inclusion, and even with a good reason, you should think long and hard before adding anything. Can you live without it? Is there a different way to accomplish the same thing? Is there a different variation of the feature that would be better and more general or more orthogonal to the existing features? I'm not saying this particular idea couldn't happen, but there needs to be a far better justification than "why not?" with a few examples that are no better than the normal syntax.

The question must be "why?" rather than "why not?"

+1_000_000

Indeed.
See this fairly well known blog post:
Every feature starts with -100 points.
It needs to make a big improvement to be worth adding to the language.

FWIW, Pyret (http://www.pyret.org/) went through this exact discussion a few months ago. The language supports a "cannonball" notation which originally functioned much the way that people are proposing with |>. In Pyret,

[list: 1, 2, 3, 5] ^ map(add-one) ^ filter(is-prime) ^ sum() ^ ...

So, the cannonball notation desugared into adding arguments to the functions.

It didn't take long before they decided that this syntax was too confusing. Why is sum() being called without any arguments? etc. Ultimately, they opted for an elegant currying alternative:

[list: 1, 2, 3, 5] ^ map(_, add-one) ^ filter(_, is-prime) ^ sum() ^ ...

This has the advantage of being more explicit and simplifies the ^ operator to a simple function.

Yes, that seems much more reasonable to me. It is also more flexible than currying.

@StefanKarpinski I'm a little confused. Did you mean to say more flexible than chaining (not currying)? After all Pyret's solution was to simply use currying, which is more general than chaining.

Maybe, if we modify the |> syntax a little bit (I really don't know how hard it is to implement, maybe it conflicts with | and >), we could set something flexible and readable.

Defining something like

foo(x,y) = (y,x)
bar(x,y) = x*y

We would have:

randint(10) |_> log(_,2) |> sum 
(1,2) |_,x>  foo(_,x)   |x,_>   bar(_,2) |_> round(_, 2) |> sum |_> log(_, 2)

In other words, we would have an operator like |a,b,c,d> where a, b, c and d would get the returned values of the last expression (in order) and use it in placeholders inside the next one.

If there are no variables inside |> it would work as it works now. We could also set a new standard: f(x) |> g(_, 1) would get all values returned by f(x) and associate with the _ placeholder.

@samuela, what I meant was that with currying you can only omit trailing arguments, whereas with the _ approach, you can omit any arguments and get an anonymous function. I.e. given f(x,y) with currying you can do f(x) to get a function that does y -> f(x,y), but with underscores you can do f(x,_) for the same thing but also do f(_,y) to get x -> f(x,y).
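The distinction can be written out with today's explicit lambdas (f here is a stand-in function for illustration):

```julia
# Illustration of the difference, using explicit lambdas available today.
f(x, y) = 10x + y

# Currying can only fix the leading argument and omit trailing ones:
curried = x -> (y -> f(x, y))
@assert curried(1)(2) == 12          # like f(1, _)

# The underscore form could fix *either* argument:
fix_second = x -> f(x, 2)            # what f(_, 2) would mean
fix_first  = y -> f(1, y)            # what f(1, _) would mean
@assert fix_second(1) == 12
@assert fix_first(2) == 12
```

(Later Julia versions grew a limited built-in form of this: `Base.Fix1(f, x)` behaves like `f(x, _)` and `Base.Fix2(f, y)` like `f(_, y)`.)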

While I like the underscore syntax, I'm still not satisfied with any proposed answer to the question of how much of the surrounding expression it "captures".

what do you do if a function returns multiple results? Would it have to pass a tuple to the _ position? Or could there be a syntax to split it up on the fly? May be a stupid question, if so, pardon!

@StefanKarpinski Ah, I see what you mean. Agreed.

@ScottPJones the obvious answer is to allow ASCII art arrows:
http://scrambledeggsontoast.github.io/2014/09/28/needle-announce/

@simonbyrne That looks even worse than programming in Fortran IV on punched cards, like I did in my misspent youth! Just wondered if some syntax like _1, _2, etc. might allow pulling apart a multiple return, or is that just a stupid idea on my part?

@simonbyrne That's brilliant. Implementing that as a string macro would be an amazing GSoC project.

Why is sum() being called without any arguments?

I think that the implicit argument is also one of the more confusing things about do notation, so it would be nice if we could utilise the same convention for that as well (though I realise that it is much more difficult, as it is already baked into the language).

@simonbyrne You don't think it could be done in an unambiguous way? If so, that's something I think is worth breaking (the current do notation), if it can be made more logical, more general, and consistent with chaining.

@simonbyrne Yeah, I totally agree. I understand the motivation for the current do notation but I feel strongly that it doesn't justify the syntactical gymnastics.

@samuela regarding map(f, _) vs just map(f). I agree that some magic desugaring would be confusing, but I do think map(f) is something that should exist. It wouldn't require any sugar, just adding a simple method to map.
eg

map(f::Base.Callable) = function(x::Any...) map(f,x...) end

i.e. map takes a function and then returns a function that works on things that are iterable (more or less).

More generally I think we should lean towards functions that have additional "convenience" methods, rather than some sort of convention that |> always maps data to the first argument (or similar).

In the same vein there could be a

type Underscore end
_ = Underscore()

and a general convention that functions should/could have methods that take underscores in certain arguments, and then return functions that take fewer arguments. I'm less convinced that this would be a good idea, as one would need to add 2^n methods for each function that takes n arguments. But it's one approach. I wonder if it would be possible to not have to explicitly add so many methods but rather hook into the method look up, so that if any arguments are of type Underscore then the appropriate function is returned.

Anyway, I definitely think having a version of map and filter that just take a callable and return a callable makes sense, the thing with the Underscore may or may not be workable.

@patrickthebold
I would imagine that x |> map(f, _) => x |> map(f, Underscore()) => x |> map(f, x), as you propose, would be the simplest way to implement map(f, _), right? - just have _ be a special entity which you'd program for?

Though, I'm uncertain if that would be better than having it automatically inferred by Julia-- presumably using the |> syntax-- rather than having to program it yourself.

Also, regarding your proposal for map - I kinda like it. Indeed, for the current |> that would be quite handy. Though, I imagine it would be simpler and better to just implement automatic inferencing of x |> map(f, _) => x |> map(f, x) instead?

@StefanKarpinski Makes sense. Hadn't thought of it quite like that.

Nothing I said would be tied to |> in any way. What I meant regarding the _ would be for example to add methods to < as such:

<(_::Underscore, x) = function(z) z < x end
<(x, _::Underscore) = function(z) x < z end

But again I think this would be a pain unless there was a way to automatically add the appropriate methods.

Again, the thing with the underscores is separate that adding the convenience method to map as outlined above. I do think both should exist, in some form or another.
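The `<` methods above run today with minor adjustments (in recent Julia `_` can't be read as an identifier, so this sketch binds `_u`; all names are illustrative):

```julia
# Minimal runnable sketch of the Underscore idea from above.
struct Underscore end
const _u = Underscore()

# Each method taking an Underscore returns a one-argument closure:
Base.:<(::Underscore, x) = z -> z < x
Base.:<(x, ::Underscore) = z -> x < z

@assert filter(_u < 5, collect(1:10)) == [1, 2, 3, 4]
@assert filter(3 < _u, collect(1:10)) == collect(4:10)
```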

@patrickthebold Such an approach with a user-defined type for underscore, etc would place a significant and unnecessary burden on the programmer when implementing functions. Having to list out all 2^n of

f(_, x, y) = ...
f(x, _, y) = ...
f(_, _, y) = ...
...

would be very annoying, not to mention inelegant.

Also, your proposition with map would I suppose provide a workaround syntax for map(f) with basic functions like map and filter but in general it suffers from the same complexity issue as the manual underscore approach. For example, for func_that_has_a_lot_of_args(a, b, c, d, e) you'd have to go through the grueling process of typing out each possible "currying"

func_that_has_a_lot_of_args(a, b, c, d, e) = ...
func_that_has_a_lot_of_args(b, c, d, e) = ...
func_that_has_a_lot_of_args(a, b, e) = ...
func_that_has_a_lot_of_args(b, d, e) = ...
func_that_has_a_lot_of_args(a, d) = ...
...

And even if you did, you'd still be faced with an absurd amount of ambiguity when calling the function: Does func_that_has_a_lot_of_args(x, y, z) refer to the definition where x=a,y=b,z=c or x=b,y=d,z=e, etc? Julia would discern between them with runtime type information but for the lay-programmer reading the source code it would be totally unclear.

I think the best way to get underscore currying done right is to simply incorporate it into the language. It would be a very straightforward change to the compiler after all. Whenever an underscore appears in a function application, just pull it out to create a lambda. I started looking into implementing this a few weeks ago but unfortunately I don't think I'll have enough free time in the next few weeks to see it through. For someone familiar with the Julia compiler though it would probably take no more than an afternoon to get things working.

@samuela
Can you clarify what you mean by, "pull it out to create a lambda"? - I'm curious. I too have wondered how that may be implemented.

@patrickthebold
Ah - I see. Presumably you could then use such a thing like this: filter(_ < 5, [1:10]) => [1:4] ?
Personally, I would find filter(e -> e < 5, [1:10]) easier to read; more consistent - less hidden meaning, though I grant you, it is more concise.

Unless you have an example where it really shines?

@samuela

Also, your proposition with map would I suppose provide a workaround syntax for map(f) with basic functions like map and filter but in general it suffers from the same complexity issue as the manual underscore approach.

I wasn't suggesting that this be done in general, only for map and filter, and possibly a few other places where it seems obvious. To me, that's how map should work: take in a function and return a function. (pretty sure that's what Haskell does.)

would be very annoying, not to mention inelegant.

I think we are in agreement on that. I'd hope there would be a way to add something to the language to handle method invocations where some arguments are of type Underscore. Upon further thought, I think it boils down to having a special character automatically expand into a lambda, or have a special type that automatically expands into a lambda. I don't feel strongly either way. I can see pluses and minuses to both approaches.

@H-225 yes the underscore thing is just a syntactic convenience. Not sure how common it is, but Scala certainly has it. Personally I like it, but I think it's just one of those style things.

@H-225 Well, in this case I think a compelling and relevant example would be function chaining. Instead of having to write

[1, 2, 3, 5]
  |> x -> map(addone, x)
  |> x -> filter(isprime, x)
  |> sum
  |> x -> 3 * x
  |> ...

one could simply write

[1, 2, 3, 5]
  |> map(addone, _)
  |> filter(isprime, _)
  |> sum
  |> 3 * _
  |> ...

I find myself unknowingly using this underscore syntax (or some slight variant) constantly in languages that support it and only realize how helpful it is when transitioning to work in languages that do not support it.

As far as I know, there are currently at least 3.5 libraries/approaches that attempt to address this problem in Julia: Julia's builtin |> function, Pipe.jl, Lazy.jl, and 0.5 for Julia's builtin do notation which is similar in spirit. Not to bash any of these libraries or approaches, but many of them could be greatly simplified if underscore currying was supported by Julia.
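For reference, the desired pipeline can be written with Base's |> today, at the cost of parenthesizing each lambda (otherwise -> swallows the rest of the chain). isprime is defined inline here since Base has no isprime (it lives in Primes.jl):

```julia
# The pipeline from above with explicit, parenthesized lambdas.
addone(x) = x + 1
isprime(n) = n > 1 && all(n % d != 0 for d in 2:isqrt(n))

result = [1, 2, 3, 5] |>
    (xs -> map(addone, xs)) |>     # [2, 3, 4, 6]
    (xs -> filter(isprime, xs)) |> # [2, 3]
    sum |>                         # 5
    (x -> 3x)                      # 15
@assert result == 15
```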

@samuela if you'd like to play with an implementation of this idea, you could try out FunctionalData.jl, where your example would look like this:

@p map [1,2,3,4] addone | filter isprime | sum | times 3 _

The last part shows how to pipe the input into the second parameter (default is argument one, in which case the _ can be omitted). Feedback very much appreciated!


Edit: the above is simply rewritten to:

times(3, sum(filter(map([1,2,3,4],addone), isprime)))

which uses FunctionalData.map and filter instead of Base.map and filter. Main difference is the argument order, second difference is the indexing convention (see docs). In any case, Base.map can simply be used by reversing the argument order. @p is quite a simple rewrite rule (left to right becomes inner-to-outer, plus support for simple currying): @p map data add 10 | showall becomes

showall(map(data, x->add(x,10)))

Hack may introduce something like this: https://github.com/facebook/hhvm/issues/6455. They're using $$ which is off the table for Julia ($ is already too overloaded).

FWIW, I really like Hack's solution to this.

I like it too, my main reservation being that I'd still kind of like a terser lambda notation that might use _ for variables / slots and it would be good to make sure that these don't conflict.

Couldn't one use __? What's the lambda syntax you're thinking of? _ -> sqrt(_)?

Sure, we could. That syntax already works, it's more about a syntax that doesn't require the arrow, so that you can write something along the lines of map(_ + 2, v), the real issue being how much of the surrounding expression the _ belongs to.

Doesn't Mathematica have a similar system for anonymous arguments? How do they handle the scope of the binding of those arguments?

https://reference.wolfram.com/language/tutorial/PureFunctions.html, showing the # symbol, is what I was thinking of.
Mathematica uses & to delimit it.

Rather than doing something as general as a shorter lambda syntax (which could take an arbitrary expression and return an anonymous function) we could get around the delimiter problem by confining the acceptable expressions to function calls, and the acceptable variables / slots to entire parameters. This would give us a very clean multi-parameter currying syntax à la Open Dyln. Because the _ replaces entire parameters, the syntax could be minimal, intuitive, and unambiguous. map(_ + 2, _) would translate to x -> map(y -> y + 2, x). Most non-function call expressions that you would want to lambdafy would probably be longer and more amiable to -> or do anyway. I do think the trade-off of usability vs generality would be worth it.

@durcan, that sounds promising – can you elaborate on the rule a bit? Why does the first _ stay inside the argument of map while the second one consumes the whole map expression? I'm not clear on what "confining the acceptable expressions to function calls" means, nor what "confining acceptable variables / slots to entire parameters" means...

Ok, I think I get the rule, having read some of that Dylan documentation, but I have to wonder about having map(_ + 2, v) work but map(2*_ + 2, v) not work.

There's also the very nitpicky business that this means that _ + 2 + _ will mean (x,y) -> x + 2 + y whereas _ ⊕ 2 ⊕ _ will mean y -> (x -> x + 2) + y because + and * are the only operators that currently parse as multi-argument function calls instead of as pairwise associative operations. Now, it could be argued that this inconsistency should be fixed, although that would seem to entail the _parser_ having an opinion on which operators are associative and which aren't, which seems bad. But I would argue that any scheme that requires knowing whether the parser parses a + b + c as a single function call or a nested call may be somewhat questionable. Perhaps infix notation should be handled specially? But no, that feels fishy too.
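The parsing asymmetry in question is easy to check (an assumed demo using Meta.parse):

```julia
# + parses as a single n-ary call; - parses as nested binary calls.
@assert Meta.parse("a + b + c") == :(+(a, b, c))
@assert Meta.parse("a - b - c") == :((a - b) - c)
```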

Yep, you have hit on the trade-off. On the one hand, long strings of operators is the syntax that has the most trouble given some of our current parsing choices (although it is hard to fault the semantics of a language feature for depending on the current semantics of the language). On the other, long strings of function calls is where it excels. For example, we could re-write your problem as:

2*_ |> 2+_ |> map(_, v)

Anyway, I don't think that the infix problem should get in the way of having a clean delimiter free option. It would really help with most normal function calls, which is sort of the issue at hand. If you want you could have an optional delimiter to help solve this particular ambiguity (here I am stealing & for that role):

_ ⊕ 2 ⊕ _    # y -> (x -> x + 2) + y
_ ⊕ 2 ⊕ _ &  # (y , x) -> x + 2 + y

This is the best proposal so far, but I'm not entirely sold. It's pretty clear what's going on when the function calls are explicit, but less clear when the function calls are implied by infix syntax.

I like to think of this approach as more of a flexible and generalized currying, rather than a short and sweet lambda (and even then we can get pretty much all the way there with an optional delimiter). I would love something more perfect, but without adding more symbolic noise (the antithesis of this issue) I am not sure how to get there.

Yeah, I like it except for the infix thing. That part may be fixable.

Well, currying in infix position could be a syntax error:

map(+(*(2, _), 2), v)      # curry is OK syntax, but obviously not what you wanted
map(2*_ + 2, v)            # ERROR: syntax: infix curry requires delimitation
map(2*_ + 2 &, v)          # this means what we want
map(*(2,_) |> +(_,2), v)   # as would this

It could also be a warning I guess.

Calling this currying strikes me as confusing and wrong.

Sure, this is more like partial function application (that optionally becomes an anonymously argumented lambda) I guess. Naming aside, any thoughts?

I'm thinking along the lines of something like this:

  • If _ appears alone as any of the arguments of a function call expression, that function call is replaced with an anonymous function expression taking as many arguments as the function has _ arguments, whose body is the original expression with _ arguments replaced with lambda arguments in order.
  • If _ appears elsewhere, the surrounding expression up to but not including the arrow precedence level or any surrounding parentheses (but not square brackets or curly braces), is replaced with an anonymous function taking as many arguments as there are _ in this expression, whose body is the original expression with _ instances replaced with lambda arguments in order.

Examples:

  • f(_, b) → x -> f(x, b)
  • f(a, _) → x -> f(a, x)
  • f(_, _) → (x, y) -> f(x, y)
  • 2_^2 → x -> 2x^2
  • 2_^_ → (x, y) -> 2x^y
  • map(_ + 2, v) → map(x -> x + 2, v)
  • map(2_ + 2, v) → map(x -> 2x + 2, v)
  • map(abs, _) → x -> map(abs, x)
  • map(2_ + 2, _) → x -> map(y -> 2y + 2, x)
  • map(2_ - _, v, w) → map((x, y) -> 2x - y, v, w)
  • map(2_ - _, v, _) → x -> map((y, z) -> 2y - z, v, x)
  • map(2_ - _, _, _) → (x, y) -> map((z, w) -> 2z - w, x, y)
  • _ → x -> x
  • map(_, v) → x -> map(x, v)
  • map((_), v) → map(x -> x, v)
  • f = _ → f = x -> x
  • f = 2_ → f = x -> 2x
  • x -> x^_ → x -> y -> x^y
  • _ && _ → (x, y) -> x && y
  • !_ && _ → (x, y) -> !x && y

The only place this starts to get dicey is conditionals – those examples just get kind of weird.

This is still a little bit fiddly and unprincipled and there are corner cases, but I it's getting somewhere.
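Rule 1 above (an _ standing alone as an argument) is simple enough to prototype as a macro; this sketch handles only that single level, with a hypothetical name, and is not the full proposal:

```julia
# One-level prototype of rule 1: `_` arguments of a single call become
# lambda parameters, in order. No recursion into sub-expressions.
macro uscore(ex)
    ex isa Expr && ex.head == :call || error("@uscore expects a call")
    params = Symbol[]
    newargs = map(ex.args) do a
        a === :_ ? (s = gensym("arg"); push!(params, s); s) : a
    end
    esc(:(($(params...),) -> $(Expr(:call, newargs...))))
end

f(x, y) = 10x + y
@assert (@uscore f(_, 2))(1) == 12       # f(_, 2) → x -> f(x, 2)
@assert (@uscore f(_, _))(1, 2) == 12    # f(_, _) → (x, y) -> f(x, y)
```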

This strikes me as a really bad idea; almost all syntax in Julia is pretty familiar if you have used other programming languages. People looking at syntax sugar like this will have no idea what is going on, for the "benefit" of saving a couple of characters.

Examples that leave me a bit less happy:

  • 2v[_] → x -> 2v[x] (good)
  • 2f(_) → 2*(x -> f(x)) (not so good)
  • _ ? "true" : "false" → (x -> x) ? "true" : "false"
  • _ ? _ : 0 → (x -> x) ? (y -> y) : 0

I think something deeper is going on here – there's some notion of syntactic positions in which a function object _makes sense_ – and you want to expand out to the closest one of those. The classic position that "wants" a function object is an argument to another function, but the right side of an assignment is another place that can be said to "want" a function object (or maybe it's more accurate to say that would prefer to take a function to being the body of a function).

Perhaps, but the same argument could be (and was) made about the do-block syntax, which I think, on the whole, has been very successful and useful. This is closely related to better syntax for vectorization. It's also not that unprecedented – Scala uses _ in a similar way, and Mathematica uses # similarly.

I think you could also make the case that the _unfamiliar_ choice of multiple dispatch instead of a single-dispatch dot operator essentially compels the decision to have succinct syntax for pronominal arguments to recover the SVO order which people _are_ familiar with.


This also exists in C++, with multiple library solutions, in Boost in particular, which use _1, _2, _3 as arguments (e.g. _1(x, y...) = x, _2(x, y, z...) = y, etc.), the limitation being that to be able to call e.g. fun(_1) for x -> fun(x), fun must be explicitly made compatible with the library (usually via a macro call, to make fun accept a "lambda type" as a parameter).

I would really like this terse lambda notation available in Julia.
Concerning the problem of 2f(_) desugaring to 2*(x -> f(x)): would it make sense to modify the rules along the lines of "if the first rule applies, to e.g. f(_), then re-evaluate the rules recursively with f(_) playing the role of _"? This would also allow e.g. f(g(_)) → x -> f(g(x)), with the "parentheses rule" allowing one to easily stop at the desired level, e.g. f((g(_))) → f(x->g(x)).

I like the name "terse lambda notation" a lot for this. (Much better than currying).

I would really prefer the explicitness of _1, _2, _3 if you're passing multi-argument lambdas. In general I often find reusing variable names in the same scope can be confusing... and having _ be x and y in _the same expression_ just seems crazy confusing.

I've found this same terse scala _-syntax has caused a bit of confusion (see all the uses of _ in scala).

Besides, often you want to do:

x -> f(x) + g(x)

or similar, and I think I'd be surprised if the following didn't work:

f(_) + g(_)

Also you may wish to switch the order of the arguments:

(x, y) -> f(y, x)
f(_2, _1)  # can't do with other suggested _ syntax

I think it would be fine for the syntax to allow for explicit anonymous argument numbering (_1, _2, _3...etc.), but the main problem still stands: when exactly do you promote a partially applied function to a terse lambda? And what exactly is the lambda body? I would probably error on the side of being explicit (with a delimiter) rather than implicitly using some kind of complex promotion rules. What should

foo(_1, _1 + _2  + f(_1, v1) + g(_2, v3), _3 * _2, v2) + g(_4, v4) +
 f(_2, v2) + g(_3, v5) + bar(_1, v6)

mean exactly? Using a delimiter (I will use λ) things are somewhat more clear:

λ(foo(_1, λ(_1 + _2)  + λ(f(_1, v1) + g(_2, v3)), _3 * _2, v2) + g(_4, v4)) + 
λ(f(_2, v2) + g(_3, v5) + bar(_1, v6))

This obviously is a MethodError: + has no method matching +(::Function, ::Function), but at least I can tell that from the way it is written.

I do think @StefanKarpinski might be on to something when he said that there are a few seemingly obvious syntactic positions that expressions can take that strongly imply they are function bodies. Arrow precedence takes care of a number of these, but there are still a few confusing cases. Promising, but definitely requires some careful thought.

This is definitely a tradeoff between terseness vs generality and legibility. Of course there's no point in introducing something that is less terse than the existing -> notation. But I also think that a delimiter seems worthwhile.

Maybe not terse enough, but how about a prefix version of -> that captures _ arguments? Eg

(-> 2f(_) + 1)

I guess the prefix form should have pretty low precedence. This might actually allow to leave out the parentheses in some case, eg

map(->_ + 1, x)

Right now I am messing around with implementing https://github.com/JuliaLang/julia/issues/5571#issuecomment-157424665

As a macro, that transforms all such occurrences in the line.
The tricky part is implementing precedence.

I probably won't finish it in the next 12 hours cos it is home time here
(maybe in the next 24, but I might have to go away)
Anyway, once that is done we can play with it.

A odd one that comes out of https://github.com/JuliaLang/julia/issues/5571#issuecomment-157424665

  • f(_,_) → x,y -> f(x,y) (this is reasonable)
  • f(_,2_) → ??

    • f(_,2_) → x,y -> f(x,2y) (reasonable)

    • f(_,2_) → x-> f(x,y->2y) (what I think the rule is suggesting, and what my prototype produces)

But I am not sure I have it right.

So here is my prototype.
http://nbviewer.ipython.org/gist/oxinabox/50a1e17cfb232a7d1908

In-fact it definitely fails some of the tests.

It is not possible to consider bracketing in the current AST layer -- they are often (always?) already resolved out.

Still it is enough to play with I think

Some rules from magrittr in R that might be useful:

If a chain starts with . , it is an anonymous function:

. %>% `+`(1)

is the same as function(x) x + 1

There are two modes of chaining:
1) Chaining by inserting as the first argument, as well as to any dots that appear.
2) Chaining only to dots that appear.

The default mode is mode 1. However, if dot appears by itself as an argument to the function being chained, then magrittr switches to mode 2 for that step in the chain.

So

2 %>% `-`(1) 

is 2 - 1,

and

1 %>% `-`(2, . )

is also 2 - 1

Mode 2 can also be specified by wrapping in brackets:

2 %>% { `-`(2, . - 1) }

would be the same as 2 - (2 - 1).
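The numeric examples above translate to plain Julia, using Base's |> and explicit parenthesized lambdas:

```julia
# magrittr's two modes, written with Base's |> and explicit lambdas:
@assert (2 |> (x -> x - 1)) == 1        # mode 1: 2 %>% `-`(1)
@assert (1 |> (x -> 2 - x)) == 1        # mode 2: 1 %>% `-`(2, .)
@assert (2 |> (x -> 2 - (x - 1))) == 1  # 2 %>% { `-`(2, . - 1) }
```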

Also just a note that being able to smartly switch between mode 1 and mode 2 almost completely solves the issue that Julia is not very consistent about having the argument that would likely get chained to in the first position. I also forgot to note that brackets can allow for evaluating a chunk of code. Here is an example from the magrittr manual:

iris %>%
{
n <- sample(1:10, size = 1)
H <- head(., n)
T <- tail(., n)
rbind(H, T)
} %>%
summary

This is only a half-formed idea at the moment, but I wonder if there's a way that we could resolve the "terse lambda" and "dummy variable" issues at the same time by modifying the Tuple constructor such that a missing value returns a lambda that returns a Tuple instead of a Tuple? So, (_, 'b', _, 4) would return (x, y) -> (x, 'b', y, 4).

Then, if we subtly change the semantics of function calling such that foo(a, b) means "apply foo to the Tuple (a, b) or if the argument is a function, then apply foo to the Tuple returned by the function". This would make foo(_, b, c)(1) equivalent to apply(foo, ((x) -> (x, b, c))(1)).

I think this still doesn't solve the infix notation issue, but personally I'd be happy with terse lambdas that only work with parenthesized function calls. After all, 1 + _ can always be rewritten +(1, _) if absolutely necessary.

@jballanc However, tuple construction and function application are two quite distinct concepts. At least, unless my understanding of julia's semantics is seriously flawed.

@samuela What I meant by that is that foo(a, b) is equivalent to foo((a, b)...). That is, the arguments to a function can be conceptually thought of as a Tuple, even if the Tuple is never constructed in practice.

I've tried to read through this discussion, but it's too long for me to keep track of everything that's been said - so sorry if I'm repeating more than necessary here.

I'd just like to put in a vote for making |> a complement to the do "magic". As far as I can see, the easiest way to do that would be to let it mean that

3 |> foo == foo(3) # or foo() instead of just foo, but it would be nice if the parentheses were optional
3 |> foo(1) == foo(1, 3)
3 |> foo(1,2) == foo(1,2,3)

In other words, a |> f(x) does to the _last_ argument what f(x) do; a; end does to the _first_. This would immediately make it compatible with map, filter, all, any et. al., without adding the complexity of scoping _ parameters, and given the already existing do syntax I don't think it puts an unreasonable conceptual burden on the readers of the code.

My main motivation for using a pipe operator like this is collection pipelines (see #15612), which I think is an incredibly powerful construct, and one which is gaining ground in many languages (indicating that it's both a feature that people want, and one they will understand).
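For contrast, Base's |> today is plain unary application, so under current semantics 3 |> foo(1) evaluates foo(1) first and then tries to call its result (a sketch, with a hypothetical foo):

```julia
# What Base's |> means today: x |> f is just f(x).
foo(args...) = sum(args)
@assert (3 |> foo) == 3    # foo(3)

# Under the proposal, 3 |> foo(1) would mean foo(1, 3); today it throws,
# because foo(1) == 1 is not callable:
@assert try 3 |> foo(1); false catch e; e isa MethodError end
```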

That's the @>> macro from https://github.com/MikeInnes/Lazy.jl.

@malmaud Nice! I like that this is already possible :D

However, the difference in readability between these two variants is really big:

# from Lazy.jl
@> x g f(y, z)

# if this became a first-class feature of |>
x |> g |> f(y, z)

I think the main readability problem is that there are no visual clues to tell where the boundaries between expressions are - the spaces in x g and g f(y, z) significantly affect the behavior of the code, but the space inside f(y, z) does not.

Since @>> already exists, how feasible is it to add this behavior to |> in 0.5?

(I don't want to spend too much energy on the notation of the operator, so don't dismiss this proposal solely on the matter of notation. However, we can note that "piping" seems like a natural notion for this (c.f. the term "collection pipeline"), and that e.g. F# already uses |> for this, although there of course it has slightly different semantics since F# functions are different than Julia functions.)

Ya, for sure I agree on the readability front. It wouldn't be technically challenging to make |> behave as you're describing in 0.5, it's just a design question.

Similarly, it'd be possible to make Lazy.jl's @>> macro parse functions chained by |>.

Hm. I'll start working on a PR for Lazy.jl then, but that doesn't mean I'd like this not to be in 0.5 :) I don't think I know enough about the Julia parser and how to change the behavior of |> to help with that, though, unless I get some pretty extensive mentoring.

I don't think I mentioned in this thread, but I have another chaining package, ChainMap.jl. It always substitutes into _, and conditionally inserts into the first argument. It also tries to integrate mapping. See https://github.com/bramtayl/ChainMap.jl

So our current list of various efforts etc
I think it is worth people checking these out, (ideally before opinioning, but w/e)
they are all slightly different.
(I am attempting to order chronologically).

Packages

Nonpackage Prototypes

Related:


Perhaps this should be edited in to one of the top posts.

updated: 2020-04-20

This is more of a type system experiment than an actual attempt to implement partial application, but here's a weird one: https://gist.github.com/fcard/b48513108a32c13a49a387a3c530f7de

usage:

include("partial_underscore_generated.jl")
using GeneratedPartial

const sub = partialize(-)
sub(_,2)(1) == 1-2
sub(_,_)(1,2) == 1-2
sub(_,__)(1)(2) == 1-2
sub(__,_)(2)(1) == 1-2 #hehehe

# or
@partialize 2 Base.:+ # evilly inserts methods in + and allows partializations for 2 arguments
(_+2)(1) == 1+2

# fun:
sub(1+_,_)(2,3) == sub(1+2,3)
sub(1+_,__)(2)(3) == sub(1+2,3)
(_(1)+_)(-,1) == -1+1

# lotsafun:
appf(x::Int,y::Int) = x*y
appf(f,x) = f(x)
@partialize 2 appf

appf(1+_,3)(2) == appf(1+2,3)
appf(?(1+_),3) == appf(x->(1+x), 3)
appf(?sub(_,2),3) == appf(x->x-2,3) # I made a method *(::typeof(?),::PartialCall), what of it!!?

# wooooooooooooooooooooooooooooooooo
const f = sub
f(_,f(_,f(_,f(_,f(_,f(_,f(_,f(_,f(_,_)))))))))(1,2,3,4,5,6,7,8,9,10) == f(1,f(2,f(3,f(4,f(5,f(6,f(7,f(8,f(9,10)))))))))
f(_,f(__,f(___,f(____,f(_____,f(______,f(_______,f(________,f(_________,__________)))))))))(1)(2)(3)(4)(5)(6)(7)(8)(9)(10) == f(1,f(2,f(3,f(4,f(5,f(6,f(7,f(8,f(9,10)))))))))

# this answers Stefan's concern (which inspired me to make this hack in the first place)
#
#    const pmap = partialize(map)
#    map(f(_,a),   v) == map(x->f(x,a), v)
#    pmap(?f(_,a), v) == map(x->f(x,a), v)
#    pmap(f(_,a),  v) == x->map(f(x,a), v)
#
# it adds a few other issues, of course...


Certainly not an implementation proposal, but I find it fun to play with, and maybe someone can get half an good idea out of it.

P.S. Forgot to mention that @partialize also works with integer array literals and ranges:

@partialize 2:3 Base.:- # partialized for 2 and 3 arguments!

(_-_-_)(1,2,3) == -4
(_-_+_)(1,2,3) == +2

OK, I have been thinking about function composition, and although IMO it's clear in the SISO case, I guess I was thinking of how I use Julia's MISO (quasi-MIMO?) in small chained code blocks.

I like the REPL. A lot. It's cute, it's cool, it lets you experiment just like MATLAB or python or the shell. I like to take code from a package or file and copy-paste it into the REPL, even multiple lines of code. The REPL evaluates that on every line and shows me what's going on.

It also returns/defines ans after every expression. Every MATLAB user knows it (though at this stage, this is a bad argument!). Probably most Julia users have seen/used it before. I use ans in the odd situations that I'm playing with something piece-wise, realizing I want to add another step to what I wrote above. I dislike that using it is kind-of destructive, so I do tend to avoid it when possible, but _every_ proposal here is dealing with return lifetimes of only one step of composition.

To me, _ being magical is just _odd_, but I understand that many people may disagree. So, if I _want_ to copy-paste code from packages into the REPL and watch it run, and if I want a syntax that doesn't seem magical, then I might propose:

@repl_compose begin
   sin(x)
   ans + 1
   sqrt(ans)
end

If the function returns multiple outputs, I can insert ans[1], ans[2], etc on the next line. It fits neatly the single-level of composition model already, Julia's MISO model, and it already _is very standard Julia syntax_, just not in files.

The macro is easy to implement - just convert Expr(:block, exprs...) into Expr(:block, map(expr -> :(ans = $expr), exprs)...) (also a let ans at the start, and perhaps there could be a version that makes an anonymous function that takes an input or something?). It wouldn't _have_ to live in base (though, the REPL is built into Julia, and it kind-of goes with that).
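For illustration, here is a minimal sketch of that macro under the semantics just described (the name `@repl_compose` is taken from the proposal above; this is not an existing implementation):

```julia
# Sketch only: thread `ans` through each expression of the block,
# mirroring how the REPL rebinds `ans` after every evaluated line.
macro repl_compose(block)
    exprs = filter(e -> !(e isa LineNumberNode), block.args)
    body = [:(ans = $e) for e in exprs]
    esc(quote
        let ans
            $(body...)
            ans
        end
    end)
end

@repl_compose begin
    sin(0.0)
    ans + 1
    sqrt(ans)
end
# == 1.0
```

The `let ans` keeps the threading variable local, so copy-pasting the same lines into the REPL (where `ans` is bound automatically) behaves the same way.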

Anyway, just my perspective! This was a long thread, that I haven't looked at in a long time!

It also returns/defines ans after every expression. Every MATLAB user knows it (though at this stage, this is a bad argument!).

Actually, the other argument is that if _ is used in function chaining, then the REPL should also return _ rather than ans (for me, this would be enough to remove the "magic").

There's a fair amount of precedent for using _ as the "it value" in languages. Of course, that conflicts with the proposed idea of using _ as a name that discards assignments and for terser lambdas.

I'm pretty sure this exists somewhere in Lazy.jl as @as ans begin ... end

The idea of using . for chaining was discarded early in the conversation, but it has a long history of usefulness and would minimize the learning curve for adopters from other languages. The reason it's important to me is because

type Track
  hit::Array{Hit}
end
type Event
  track::Array{Track}
end

event.track[12].hit[43]

gives me the 43rd hit of the 12th track of an event when track and hit are simple arrays, so

event.getTrack(12).getHit(43)

should give me the same thing if they have to be served dynamically. I don't want to have to say

getHit(getTrack(event, 12), 43)

It gets worse the deeper you go. Since these are simple functions, it makes the argument broader than that of function chaining (a la Spark).

I'm writing this now because I just learned about Rust's traits, which could be a good solution in Julia for the same reasons. Like Julia, Rust has data-only structs (Julia type), but then they also have impl for binding functions to the name of a struct. As far as I can tell, it's pure syntactic sugar, but it allows the dot notation I described above:

impl Event {
  fn getTrack(&self, num: i32) -> Track {
    self.track[num]
  }
}

impl Track {
  fn getHit(&self, num: i32) -> Hit {
    self.hit[num]
  }
}

which in Julia could be

impl Event
  function getTrack(self::Event, num::Int)
    self.track[num]
  end
end

impl Track
  function getHit(self::Track, num::Int)
    self.hit[num]
  end
end

The proposed syntax above doesn't do any interpretation of self: it's just a function argument, so there should be no conflicts with multiple dispatch. If you want to do a minimal interpretation of self, you could make the type of the first argument implicit, so that the user doesn't have to type ::Event and ::Track in each function, but a nice thing about not doing any interpretation is that "static methods" are just functions in the impl that don't have self. (Rust uses them for new factories.)

Unlike Rust, Julia has a hierarchy on types. It could also have a similar hierarchy on impls to avoid code duplication. A standard OOP could be built by making the data type and method impl hierarchies exactly the same, but this strict mirroring is not necessary and is in some cases undesirable.

There's one sticky point with this: suppose I named my functions track and hit in the impl, rather than getTrack and getHit, so that they conflicted with the arrays track and hit in the type. Then would event.track return the array or the function? If you immediately use it as a function, that could help to disambiguate, but types can hold Function objects, too. Maybe just apply a blanket rule: after the dot, first check the corresponding impl, then check the corresponding type?

On second thought, to avoid having two "packages" for what is conceptually the same object (type and impl), how about this:

function Event.getTrack(self, num::Int)
  self.track[num]
end

to bind the function getTrack to instances of type Event such that

myEvent.getTrack(12)

yields the same bytecode as the function applied to (myEvent, 12)?

What's new is the typename-dot-functionname syntax after the function keyword and how it's interpreted. This would still allow for multiple dispatch, a Python-like self if the first argument is the same as the type it's bound to (or left implicit, as above), and it allows for a "static method" if the first argument is not present or typed differently from the type it's bound to.

@jpivarski Is there a reason you think the dot syntax (which, by reading this thread, has a lot of disadvantages) is better than some other construct that allows chaining? I still think creating something like do but for the last argument, supported by some form of piping syntax (e.g. |>) would be the best way forward:

event |> getTrack(12) |> getHit(43)

The main reason I can see that something like Rust's approach could be better is that it effectively uses the left-hand side as a namespace for functions, so you might be able to do things like parser.parse without conflicting with the existing Julia Base.parse function. I would be in favor of providing both the Rust proposal and Hack style piping.

@tlycken That is ambiguous syntax though, depending on precedence.
Remembering the precedence of |> vs call may be confusing, since it does not really give any hints.
(Nor do several of the other options suggested.)

Consider

foo(a,b) = a+b
foo(a) = b -> a-b

2 |> foo(10) == 12   #Pipe Precedence > Call Precedence 
2 |> foo(10) == 8     #Pipe Precedence < Call Precedence   

@oxinabox I'm actually not suggesting it to be "just" a regular operator, but rather a syntax element of the language; 2 |> foo(10) desugars to foo(10, 2) much the same way foo(10) do x; bar(x); end desugars to foo(x -> bar(x), 10). That implies pipe precedence over call precedence (which, I think, is what makes most sense anyway).
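As a rough illustration of that desugaring, here is a prototype macro (the name `@pipe_last` is invented here) that rewrites `x |> f(args...)` into `f(args..., x)`, i.e. with pipe taking precedence over the call as proposed:

```julia
# Sketch: give |> "pipe precedence" by rewriting the expression tree,
# appending the piped value as the last argument of each call.
macro pipe_last(ex)
    rewrite(e) = e
    function rewrite(e::Expr)
        if e.head === :call && e.args[1] === :(|>)
            lhs = rewrite(e.args[2])
            rhs = e.args[3]
            if rhs isa Expr && rhs.head === :call
                return Expr(:call, rhs.args..., lhs)  # f(args..., piped)
            else
                return Expr(:call, rhs, lhs)          # bare function name
            end
        end
        return e
    end
    esc(rewrite(ex))
end

foo(a, b) = a + b
@pipe_last 2 |> foo(10)   # desugars to foo(10, 2) == 12
```

The same rewrite could insert into the first argument position instead; the macro only exists to show that the precedence question is settled at rewrite time, not by the runtime `|>` function.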

Just on the subject of syntax, . is less visually obtrusive and certainly more standard than |>. I can write a fluent chain of functions separated by . with no spaces (one character each) and anyone could read it; with |>, I'd have to add spaces (four characters each) and it would be visual speedbump to most programmers. The analogy to shell scripting's | is nice, but not immediately recognizable because of the >.

Am I reading this thread correctly that the argument against dot is that it's ambiguous whether it should get a member datum from the type or a member function from the impl (my first suggestion) or the function namespace (my second suggestion)? In the second suggestion, functions defined in the function namespace created by a type must be defined _after_ the type is defined, so it can refuse to overshadow a member datum right there.

Adding both namespace dots (my second suggestion) and |> would be fine by me; they're rather different in purpose and effect, despite the fact that they can both be used for fluent chaining. However, |> as described above isn't completely symmetric with do, since do requires the argument it fills to be a function. If you're saying event |> getTrack(12) |> getHit(43), then |> applies to non-functions (Events and Tracks).

If you're saying event |> getTrack(12) |> getHit(43), then |> applies to non-functions (Events and Tracks).

Actually, no - it applies to the function incantations _on its right_ by inserting its left operand as the last argument to the function call. event |> getTrack(12) is getTrack(12, event) because of what was on the right, not because of what was on the left.

This would have to mean a) precedence over function calls (since it's a rewrite of the call), and b) left-to-right application order (to make it getHit(43, getTrack(12, event)) rather than getHit(43, getTrack(12), event)).

But getTrack's signature is

function getTrack(num::Int, event::Event)

so if event |> getTrack(12) inserts event into getTrack's last argument, it's putting an Event into the second argument, not a Function. I just tried the equivalent with do and the first argument, and Julia 0.4 complained that the argument needs to be a function. (Possibly because do event end is interpreted as a function returning event, rather than the event itself.)

Function chaining seems, to me, to be a separate issue from much of what's being discussed around the dot (.) syntax. For example, @jpivarski , you can already accomplish much of what you mention from Rust in Julia without any new features:

type TownCrier
  name::AbstractString
  shout::Function

  function TownCrier(name::AbstractString)
    self = new(name)
    self.shout = () -> "HELLO, $(self.name)!"
    self
  end
end

tc = TownCrier("Josh")
tc.shout()                                #=> "HELLO, Josh!"
tc.name = "Bob"
tc.shout()                                #=> "HELLO, Bob!"

Without trying to derail the conversation too much, I'd suggest that what we really need to resolve is how to do efficient function currying in Julia. Questions about how to specify positions for arguments in a function chain would melt away if we had a good way to curry functions. Additionally, constructions like the above would be cleaner if the function body could be specified and simply curried with "self" on construction.

@andyferris I've been using Python and I really like _ referring to the result of the previous expression. It doesn't work inside functions though. It would be great if we could get it to work anywhere: inside begin blocks, functions, etc.

I think this could totally replace chaining. It doesn't leave any ambiguity about argument order. For example,

begin
    1
    vcat(_, 2)
    vcat(3, _)
end

# [3, 1, 2]

As @MikeInnes mentioned, this is already available in @_ in Lazy.jl (and although it didn't work that way originally, ChainMap.jl also uses this kind of chaining now).
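A minimal sketch of such block threading (the macro name is invented; `_` is textually substituted with a generated variable, since a bare `_` cannot be read as a value in recent Julia):

```julia
# Sketch: thread each expression's value into `_` in the next expression.
macro thread_underscore(block)
    val = gensym(:prev)
    subst(e) = e === :_ ? val : e
    subst(e::Expr) = Expr(e.head, map(subst, e.args)...)
    exprs = filter(e -> !(e isa LineNumberNode), block.args)
    body = [:($val = $(subst(e))) for e in exprs]
    esc(Expr(:block, body..., val))
end

@thread_underscore begin
    1
    vcat(_, 2)
    vcat(3, _)
end
# == [3, 1, 2]
```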

Hopefully, this could be able to work together with dot fusion, at least inside blocks

begin
    [1, 2, 3]
    .+(_, 2)
    .*(_, 2)
    .-(10, _)
end

or, using @chain_map syntax,

begin
    ~[1, 2, 3]
    +(_, 2)
    *(_, 2)
    -(10, _)
end

Currently there is a way for function chaining with objects if the function is defined inside the constructor. For example, the function Obj.times :

type Obj
    x
    times::Function
    function Obj(x)
       this = new(x)
       this.times =  (n) -> (this.x *= n; this)
       this
    end
end

>>>Obj(2).times(3)
Obj(6,#3)

What about the implementation of member functions (especial functions) defined outside the type definition. For instance, the function Obj.times would be written as:

member function times(this::Obj, n)
     this.x *= n
     return this
end

>>>Obj(2).times(3)
Obj(6,#3)

where member is a special keyword for member functions.
Member functions have access to the object data. Later, they will be called using dot after the object variable.
The idea is to reproduce the behavior of functions defined inside constructors by using "member" functions defined outside the type definition.

Something like this is done in Rust with method syntax. It's conceptually distinct from function chaining, although it could be used to make chaining look like it does in some OO languages. Probably best to address the issues separately.

I have read this and some related issues, here is my proposal:

Basic chaining:
in1 |> function1
Same as: in1 |> function1(|>)

in2 |> function2(10)
Same as: in2 |> function2(|>,10)

Even more chaining:
in1 |> function1 |> function2(10,|>)

Chain branching and merging:
Branch twice with branches out1, out2:
function1(a) |out1>
function2(a,b) |out2>

Use branch out1 and out2:
function3(|out1>,|out2>)

What about lazyness?
Do we need something like the function!(mutating_var) convention?
For lazy functions we could use function?() ...

And by using indentation properly it is easy to visually track data dependencies on the associated call graph.

I just played around with a pattern for function chaining with the existing |> operator. For example, these definitions:
```julia
# Filter
immutable MyFilter{F}
    flt::F
end

function (mf::MyFilter)(source)
    filter(mf.flt, source)
end

function Base.filter(flt)
    MyFilter(flt)
end

# Take
immutable MyTake
    n::Int64
end

function (mt::MyTake)(source)
    take(source, mt.n)
end

function Base.take(n)
    MyTake(n)
end

# Map
immutable MyMap{F}
    f::F
end

function (mm::MyMap)(source)
    map(mm.f, source)
end

function Base.map(f)
    MyMap(f)
end
```

enable this to work:

```julia
1:10 |> filter(i->i%2==0) |> take(2) |> map(i->i^2) |> collect
```

Essentially the idea is that functions like `filter` return a functor if they are called without a source argument, and then these functors all take one argument, namely whatever is "coming" from the left side of the `|>`. The `|>` then just chains all these functors together.

filter etc. could also just return an anonymous function that takes one argument, not sure which of these options would be more performant.

In my example I'm overwriting map(f::Any) in Base, I don't really understand what the existing definition of map does...

I just came up with this pattern, and my somewhat cursory look around didn't show any discussion of something like that. What do folks think? Might this be useful? Can folks think of drawbacks of this? If this does work, maybe the existing design is actually flexible enough to enable a pretty comprehensive chaining story?

This doesn't seem workable for arbitrary functions, only those for which MyF has been defined?

Yes, this works only for functions that opt-in. Clearly not a very general solution, and in some sense the same story as with vectorization, but still, given that not everything we would hope for will make it for 1.0, this pattern might enable a whole bunch of scenarios where folks had to resort to macros right now.

Essentially the idea is that functions like filter return a functor if they are called without a source argument, and then these functors all take one argument, namely whatever is "coming" from the left side of the |>.

This is, almost exactly, the essence of Clojure's transducers. The notable difference is that Clojure built transducers on top of the concept of reducers. In short, every function that operates on a collection can be decomposed into a "mapping" function and a "reducing" function (even if the "reducing" function is simply concat). The advantage of representing collection functions in this way is that you can re-arrange execution so that all "mapping" can be pipelined (especially nice for large collections). Transducers, then, are just an extraction of these "mappings" returned when called without a collection to operate on.

No need for this to be so complicated. Functions can opt-in to currying with closures:

Base.map(f)    = (xs...) -> map(f, xs...)
Base.filter(f) = x -> filter(f, x)
Base.take(n)   = x -> take(x, n)

Of course, this isn't something that a package should do since it changes the meaning of these methods for all packages. And doing it piecemeal like this isn't terribly intuitive — which arguments should get priority?

I'd prefer a call-site syntactic solution like has been discussed above, lowering f(a, _, b) to x -> f(a, x, b). It's tricky, though, as noted in the long discussion above.
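A rough macro version of that lowering (one `_` per expression, macro name invented here; real call-site syntax would be done in the parser, not a macro):

```julia
# Sketch: lower an expression containing `_` into an anonymous function,
# e.g. @terse(f(a, _, b)) becomes x -> f(a, x, b).
macro terse(ex)
    x = gensym(:x)
    subst(e) = e === :_ ? x : e
    subst(e::Expr) = Expr(e.head, map(subst, e.args)...)
    esc(:($x -> $(subst(ex))))
end

map(@terse(_ + 1), [1, 2, 3])            # == [2, 3, 4]
map(@terse(clamp(_, 0, 10)), [-5, 15])   # == [0, 10]
```

The tricky design question noted above is how far up the expression tree the implied `x ->` should reach; this sketch sidesteps it by making the macro's argument the boundary.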

No need for this to be so complicated. Functions can opt-in to currying with closures

Yes, I suggested that above already, I was just not sure whether there is a performance difference between these two.

which arguments should get priority?

Yeah, and then we actually have things like filter and take, where in one case we have the collection as the first and in the other as the last argument... I kind of feel that at least for iterator like operations there might be typically an obvious answer to that question.

Once _ is an available special symbol

Yes, I totally agree that there is a more general solution out there, and @malmaud's might be it.

There's no perf diff as closures essentially just generate the code you wrote by hand anyway. But since you're just currying, you could write a function to do that for you (curry(f, as...) = (bs...) -> f(as..., bs...)). That takes care of map and filter; there have also been proposals in the past to implement a curry that implements a sentinel value like curry(take, _, 2).

I came here, because I am currently learning three languages: D, Elixir and now Julia. In D there is the uniform function call syntax, like in Rust, in Elixir you have the pipe operator. Both basically implement the kind of function chaining suggested here and I really liked this feature in both languages, since it is easy to grasp, seems easy to implement, and can make code using streams and other kinds of data pipelines so much more readable.

I have only seen a brief tutorial about julia's syntax so far, but I immediately googled this feature, because I hoped Julia would also have something like this. So I guess this is a +1 for this feature request from my side.

Hi folks,

Please allow me to +1 this feature request. This is very badly needed. Consider the following Scala syntax.

Array(1,2,3,4,5)
  .map(x => x+1)
  .filter(x => x > 5)
  .reduce(_ + _)

People that have used Spark or other MapReduce-based big data tools will be very familiar with this syntax and will have written large and complicated jobs in this way. Even R, comparatively ancient, allows the following.

c(1,2,3,4,5) %>%
  {. + 1} %>%
  {.[which(. > 5)]} %>%
  sum

Note the clever use of code blocks as a substitute to proper functional programming - not the prettiest, but powerful. In Julia, I can do the following.

[1,2,3,4,5] |> 
  _ -> map(__ -> __ + 1, _) |>
  _ -> filter(__ -> __ < 5, _) |>
  _ -> reduce(+, _)

But this is horrid and ugly. If we cannot have object-oriented code a la Scala, pipe operators become incredibly important. The simplest solution is for the pipe to feed in the _first argument_, unless a wildcard such as _ is used explicitly, but this would only make sense if map were changed to take the data structure in as first argument and function as second.

There should also be some equivalent of Scala's Array(1,2,3).map(_ + 1) to avoid excessive _ -> _ + 1 and similar syntax. I like the idea above where [1,2,3] |> map(~ + 1, _) gets translated to map(~ -> ~ + 1, [1,2,3]). Thanks for looking.

For the latter, we have broadcasting with the compact syntax [1, 2, 3] .+ 1. It's quite addictive. Something like it for reduction (and maybe filtering) would be insanely cool, but seems like a big ask.

It is a reasonable point to note that both piping and do fight for the first function argument.

I will remind newcomers to the thread that we have,
not one, not two, but FIVE packages providing extensions to Julia's base SISO piping functionality, towards the syntaxes suggested.
see list at: https://github.com/JuliaLang/julia/issues/5571#issuecomment-205754539

It is a reasonable point to note that both piping and do fight for the first function argument.

If we were to get extra piping functionality in base that was not position-marked with _ etc.,
then I would think it would add arguments to the final position, not the first.
That would make it more like "pretend currying/partial application".

My post above is meant to be a simple example designed to illustrate the syntax issues in question.

In reality, often hundreds of operations are used in one chain, many of them non-standard. Imagine working with natural language big data. You write a sequence of transformations that take a string of characters, filter out Chinese characters, split by whitespace, filter out words such as "the", transform each word into a "standardized" form via black-box software tool used by your web server, append information about how common each word is via another black-box tool, sum weights over each unique word, 100 other operations, etc etc.

These are situations I am considering. Doing such operations without using method calls or pipes is a non-starter due to sheer size.

I don't know what is the best design, I simply encourage everybody to consider the use cases and more elaborate syntax than what is currently in place.

This should work in Juno and Julia 0.6

```julia
using LazyCall
@lazy_call_module_methods Base Generator
@lazy_call_module_methods Iterator filter

using ChainRecursive
start_chaining()
```

```julia
[1, 2, 3, 4, 5]
@unweave ~it + 1
Base.Generator(it)
@unweave ~it < 5
Iterators.filter(it)
reduce(+, it)
```

I have a question regarding some syntax I have seen in the comments on this issue:
https://stackoverflow.com/questions/44520097/method-chaining-in-julia

@somedadaism, issues are for issues and not to "advertise" stack-overflow questions. Also, Julia-people are very active on SO and (even more so) on https://discourse.julialang.org/. I'd be very surprised if you didn't get a response to most questions there very quickly. And, welcome to Julia!

Jeez, unbelievable how complicated this can be oO. +1 for some decent syntax. For me the primary use of piping is also working with data (frames). Think dplyr. Personally I do not really care about passing by first/ last as a default but I guess most package developers will have their functions accept data as first argument - and what about optional arguments? +1 for something like

1 |> sin |> sum(2, _)

As has been mentioned earlier readability and simplicity is super important. I wouldn't want to miss the entire dplyr/tidyverse style of doing things for data analysis...

I would like to add that I find very useful the Elixir's multiline syntax for the pipe operator too.

1
|> sin
|> sum(2)
|> println

Is the equivalent of println(sum(sin(1),2))

Just to note a proposal in the javascript world. They use the ? operator instead of _ or ~, which already have meanings (_ to ignore something and ~ as bitwise not or formula). Given we currently use ? the same way javascript does, we might use it for the currying placeholder too.

this is how their proposal looks (it's in javascript, but also valid in julia :)

const addOne = add(1, ?); // apply from the left
addOne(2); // 3

const addTen = add(?, 10); // apply from the right
addTen(2); // 12

// with pipeline
let newScore = player.score
  |> add(7, ?)
  |> clamp(0, 100, ?); // shallow stack, the pipe to `clamp` is the same frame as the pipe to `add`.

const maxGreaterThanZero = Math.max(0, ...);
maxGreaterThanZero(1, 2); // 2
maxGreaterThanZero(-1, -2); // 0

A summary because I started to write one for other reasons.
See also my prior list of related packages comment.

Any messing with _ is non-breaking and can be done in 1.x, because https://github.com/JuliaLang/julia/pull/20328

This all boils down to two main options (Other than status quo).
Both can (for all intents and purposes) be implemented with a macro to rewrite the syntax.

Messing with _ to make anon functions

@StefanKarpinski's Terse Lambdas, or similar syntax where the presence of an _ (in a RHS expression) indicates that that whole expression is an anonymous function.

  • this can almost be handled by a macro see.

    • Only thing that can't be done is (_) not being the same as _. Which is just the identity function so doesn't really matter

    • This would apply everywhere, so would not only be useful with |> , but also eg with writing things compactly like map(log(7,_), xs) or log(7, _).(xs) to take the log with base 7 of each element of xs.

    • I personally favor this, if we were doing anything.

Messing with |> to make it perform substitutions

Options include:

  • Make it make its RHS act like they curry

    • actually I think this is breaking, (though maybe there is a non-breaking version that checks the method table. I think that is instead just confusing though)

  • Make it make _ act special (see the options above, and/or various ways to fake it via rewrite)

    • one way to do this would be allow the creation of infix macros then one could write @|>@ and define it how you want in packages (this has already been closed once https://github.com/JuliaLang/julia/issues/11608)

    • or give it those special properties intrinsically

  • We have tons of macro implementations to do this, as I said see my list of related packages
  • Some people also propose changing it to make it (unlike all other operators) able to cause an expression on the line before it occurs to not end. So you can write
a
|> f
|>g

Rather than the current:

a |>
f |>
g

(Implementing this with a macro is not possible without bracketing the block. And bracketing the block already just makes it work anyway)

  • I personally don't like these proposals as they make |> (an already disliked operator) super magic.

Edit: as @StefanKarpinski points out below, this is always actually breaking change.
Because someone could be depending on typeof(|>) <: Function.
And these changes would make it into an element of the language syntax.

Bonus option: it ain't ever happening option: add currying everywhere #554

It is way too late in the language to add currying.
It would be crazy breaking, adding huge piles of ambiguities everywhere.
And just be very confusing.

I think with these two options it basically covers everything worth considering.
I don't think anything else insightful has been said (i.e. not counting "+1 want this"; or repetitions of microvariants of the above).

I am quite tempted to deprecate |> in 0.7 so that we can later introduce it with more useful and possibly non-function-like semantics which I suspect are necessary for making piping work well.

I am quite tempted to deprecate |> in 0.7 so that we can later introduce it with more useful and possibly non-function-like semantics which I suspect are necessary for making piping work well.

The only breaking case on that list is when |> makes its right-hand side pretend to be currying.
Either to insert its arguments into the first, or into the last argument position(s), or some other fixed position (2nd might make sense).
Without using _ as a marker for which argument to insert into.

No other breaking proposals have been made that anyone in this thread took vaguely seriously.
I would be surprised if there are other sensible yet breaking definitions for that operation
that no one has suggested in these last almost 4 years.

Anyway deprecating it wouldn't be terrible.
Packages that use it can still have it via one of the macro packages.

Another idea might be to keep |> with the current behavior and introduce the new functionality under a different name that doesn't require the use of the shift key, such as \\ (which doesn't even parse as an operator right now). We talked about this on Slack once, but I think the history is probably lost to the sands of time.

Piping is often used interactively, and the ease of typing the operator affects how "light" it feels to use. A single character | could be nice too.

A single character | could be nice too.

Interactively yes, but then it's enough to have it in .juliarc.jl (which I have had for a long time ;-p )

that doesn't require the use of the shift key

Notice that this is a highly locale dependent property. E.g. my Swedish keyboard has shipped out a number of characters to shift and (rather horrible) AltGr combinations to make space for another three letters.

Is there any tradition of using |> for this purpose? [Mathematica](http://reference.wolfram.com/language/guide/Syntax.html) has // for postfix function application, which should be easy to type in most keyboards and might be available, if it isn't already used for comments (as in C++) or integer division (as in Python).

Something with | in it has the nice connection with shell scripting, though of course a single | would be expected to be bitwise OR. Is || taken for logical OR? What about |||? Typing a hard-to-reach character three times isn't much harder than typing it once.

Is there any tradition of using |> for this purpose?

I believe the tradition of |> derives from the ML family of languages. When it comes to operators, few programming language communities have explored this space like the ML/Haskell community has. A small selection of examples:

To add to the above list, R uses %>% - and though that language is dated, I think its pipe functionality is very well designed. One of the things that makes it effective is the curly brace syntax, which allows one to write things like

x %>% { if(. < 5) { a(.) } else { b(.) } }

which would be quite a bit more verbose in Julia due to its use of end statements. Though my example above is abstract, plenty of people use similar syntax when writing data preprocessing code. With any of the current proposals being discussed, can something similar to the above be achieved - perhaps through use of parentheses?
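For comparison, the conditional-pipe idiom is already expressible with an anonymous function in today's Julia, just more verbosely (a sketch; `a` and `b` are placeholder functions standing in for the R example's):

```julia
# Hypothetical stand-ins for the `a` and `b` of the R example above.
a(x) = x + 100
b(x) = x - 100

# R's  x %>% { if (. < 5) a(.) else b(.) }  spelled with current Julia |>:
branch = x -> x < 5 ? a(x) : b(x)

3 |> branch   # takes the a() branch
7 |> branch   # takes the b() branch
```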

I'd also encourage use of a pipe operator that gives some visual indication that arguments are being piped forward, such as though the > symbol. This provides a helpful cue for beginners and those unfamiliar with functional programming.

Even though the proposed usages of |> are not incompatible with the current typical usage syntactically, they _are_ incompatible with |> being an operator – since most of them involve giving |> far more power than a mere infix function. Even if we're sure we want to retain x |> f |> g to mean g(f(x)), leaving it as a normal operator will probably preclude any further enhancements. While changing |> into a non-operator that does postfix function application might not break its _typical_ usage for chained function application, it would still not be allowable since it would break _atypical_ usage of |> – anything that relies on it being an operator. Breaking atypical usage is still breaking and therefore not allowed in 1.x releases. If we want to do any of the above proposals with |>, then as far as I can tell we need to make |> syntax rather than a function in 1.0.

@StefanKarpinski Is making |> syntax rather than a function even on the table at the moment? Is it possible to put it on the table in time for having it in place for 1.0?

@StefanKarpinski Is making |> syntax rather than a function even on the table at the moment? Is it possible to put it on the table in time for having it in place for 1.0?

Deprecating it in 0.7 and removing it outright from 1.0 is on the table.
Then bring it back some time during 1.x as a syntax element.
Which would at that point be a nonbreaking change.

Someone would need to do it, but I don't think it's a terribly difficult change, so yes, it's on the table.

What would |> be deprecated to? An implementation in Lazy.jl?

x |> f can be deprecated to f(x).

How about deprecating |> but at the same time introducing, say, ||> with the same behavior as the current |>?

If we only go with the deprecation without some replacement, packages that rely on the current behavior would essentially be left without a good option. If they get an expression that is a little less nice in the meantime, they can continue with their current design, but we still leave the option on the table to find a really good solution for |> in the future.

This affects the Query and friends ecosystem in a big way: I’ve created a system that is quite similar to the pipe syntax in the R tidyverse. The whole thing is pretty comprehensive: it covers file IO for currently seven tabular file formats (with two more very close), all of the query operations (like dplyr), and plotting (not far along, but I’m optimistic that we can have something that feels ggplot-like soon). It all builds on the current implementation of |>...

I should say I’m all in favor of keeping the options for something better for |> on the table. It works OK for what I’ve created so far, but I could easily see a better approach. But just deprecating seems a very radical step that would pull the rug out from under a lot of packages.

The other choice is to just make x |> f an alternate syntax for f(x). That would break code that overloads |> but allow code that uses |> for function chaining to keep working while leaving things open for additional enhancements as long as they're compatible with that.

The alternative would be to introduce a new syntactic chaining syntax in the future, but it needs to be something that is currently a syntax error, which is pretty slim pickings at this point.

The alternative would be to introduce a new syntactic chaining syntax in the future, but it needs to be something that is currently a syntax error, which is pretty slim pickings at this point.

Wouldn't my proposal from above allow that? I.e. make |> a syntax error in julia 1.0, and make ||> equivalent to today's |>. For julia 1.0 this would be a minor annoyance for code that currently use |> because one would have to switch over to ||>. But I feel that wouldn't be so bad, plus it could be fully automated. Then, once someone has a good idea for |> it can be reintroduced into the language. At that point there would be both ||> and |> around, and I assume ||> would slowly fade into the background if everyone starts to adopt |>. And then, in a couple of years, julia 2.0 could just remove ||>. In my mind that would a) not cause any real trouble to anyone in the julia 1.0 timeframe, and b) leave all options on the table for a really good solution for |> eventually.

|>(x, f) = f(x)
|>(x, tuple::Tuple) = tuple[1](x, tuple[2:endof(tuple)]...) # tuple
|>(x, f, args...) = f(x, args...) # args

x = 1 |> (+, 1, 1) |> (-, 1) |> (*, 2) |> (/, 2) |> (+, 1) |> (*, 2) # tuple
y = 1 |> (+, 1, 1)... |> (-, 1)... |> (*, 2)... |> (/, 2)... |> (+, 1)... |> (*, 2)... # args

It is not the easiest thing to write many times, but it reads left to right and doesn't use macros.

function fibb_tuple(n)
    if n < 3
        return n
    end
    fibb_tuple(n-3) |> (+, fibb_tuple(n-2), fibb_tuple(n-1))
end

function fibb_args(n)
    if n < 3
        return n
    end
    fibb_args(n-3) |> (+, fibb_args(n-2), fibb_args(n-1))...
end

function fibb(n)
    if n < 3
        return n
    end
    fibb(n-3) + fibb(n-2) + fibb(n-1)
end

n = 25

println("fibb_tuple")
@time fibb_tuple(1)
println("fibb_args")
@time fibb_args(1)
println("fibb")
@time fibb(1)

println("tuple")
@time fibb_tuple(n)
println("args")
@time fibb_args(n)
println("fibb")
@time fibb(n)
fibb_tuple
  0.005693 seconds (2.40 k allocations: 135.065 KiB)
fibb_args
  0.003483 seconds (1.06 k allocations: 60.540 KiB)
fibb
  0.002716 seconds (641 allocations: 36.021 KiB)
tuple
  1.331350 seconds (5.41 M allocations: 151.247 MiB, 20.93% gc time)
args
  0.006768 seconds (5 allocations: 176 bytes)
fibb
  0.006165 seconds (5 allocations: 176 bytes)

|>(x, tuple::Tuple) = tuple[1](x, tuple[2:endof(tuple)]...) is awful.
|>(x, f, args...) = f(x, args...) needs more characters but is fast.

I think that allowing subject |> verb(_, objects)-style syntax for verb(subject, objects) amounts to supporting SVO word order (Julia's default is VSO or VOS). However, Julia supports multiple dispatch, so the subject can be multiple subjects. If we introduce SVO syntax, I think we should also allow (subject1, subject2) |> verb(_, _, object1, object2) to mean verb(subject1, subject2, object1, object2).
It is MIMO if grasped as a pipeline, as @oxinabox noted.
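The multiple-subjects case can already be emulated without new syntax by a small splatting helper (a sketch; `splatpipe` and `verb` are hypothetical names, not proposed syntax):

```julia
# Hypothetical splatting pipe: feeds the elements of a tuple in as the
# leading arguments of `f`, with the remaining "objects" appended after.
splatpipe(subjects::Tuple, f, objects...) = f(subjects..., objects...)

# A toy 4-argument "verb" that just records its argument order.
verb(s1, s2, o1, o2) = (s1, s2, o1, o2)

splatpipe((1, 2), verb, 3, 4)   # == verb(1, 2, 3, 4)
```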

How about using (x)f as f(x)?
(x)f(y) can be read both as f(x)(y) and f(y)(x), so choose to evaluate the right side first:

(x)f # f(x)
(x)f(y) # f(y)(x)
(x)f(y)(z) # f(y)(z)(x)
(x)(y)f(z) # f(z)(y)(x)
(a)(b)f(c)(d) # f(c)(d)(b)(a)
1(2(3, 4), 5(6, 7), 8(9, 10)) # 1(2(3, 4), 5(6, 7), 8(9, 10))
1 <| (2 <| (3, 4), 5 <| (6, 7), 8 <| (9, 10)) # 1(2(3, 4), 5(6, 7), 8(9, 10)), but 2 <| (3, 4) == 2((3, 4)) so currently emit error
3 |> 2(_, 4) |> 1(_, 5(6, 7), 8(9, 10)) # 1(2(3, 4), 5(6, 7), 8(9, 10))
((3)2(_, 4))1(_, 5(6, 7), 8(9, 10)) # 1(2(3, 4), 5(6, 7), 8(9, 10))
(3, 4) |> 2(_, _) |> 1(_, 5(6, 7), 8(9, 10)) # 1(2(3, 4), 5(6, 7), 8(9, 10))
((3, 4)2)1(_, 5(6, 7), 8(9, 10)) # 1(2(3, 4), 5(6, 7), 8(9, 10))

This can manipulate vararg clearly.
But it will break binary operators without spaces:

(a + b)+(c + d) # +(c + d)(a + b) == (c + d)(a + b): Error

Alternative option: add another case for the splatting syntax. Have f...(x) desugar to (args...)->f(x,args...)

This would enable syntactically lightweight (manual) currying:

#Basic example:
f(a,b,c,d) = #some definition
f...(a)(b,c,d) == f(a,b,c,d)
f...(a,b)(c,d) == f(a,b,c,d)
f...(a,b,c)(d) == f(a,b,c,d)
f...(a)...(b)(c,d) == f(a,b,c,d) # etc etc

# Use in pipelining:
x |> map...(f) |> g  |> filter...(h) |> sum

When do you stop currying? Julia functions don't have fixed arity.

Julia has a splatting operator already. What I'm proposing would have exactly the same behaviour as the current splat operator.

I,e: f...(x) == (args...)->f(x,args...) is sugar for making a lambda with splatting.

That definition always gives you a function object. Presumably you sometimes want an answer.

You get that by explicitly calling the object at the end. For example, note the lack of ... before the last set of parentheses in my last example f...(a)...(b)(c,d) == f(a,b,c,d) .

You can also call the returned function object with |>, which makes it nice for piping.
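The proposed f...(x) desugaring can be spelled today as an explicit helper function (a sketch; `partial` is a hypothetical name, not a Base function):

```julia
# partial(f, xs...) returns a function that prepends xs to its arguments,
# i.e. the (args...) -> f(xs..., args...) lambda from the proposal above.
partial(f, xs...) = (args...) -> f(xs..., args...)

partial(+, 1, 2)(3, 4)   # == +(1, 2, 3, 4) == 10

# Use in pipelining, calling the returned objects via |>:
1:4 |> partial(map, x -> x + 1) |> partial(filter, iseven) |> sum   # == 6
```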

@saolof

Basic example:

f(a,b,c,d) = #some definition
f...(a)(b,c,d) == f(a,b,c,d)
f...(a,b)(c,d) == f(a,b,c,d)
f...(a,b,c)(d) == f(a,b,c,d)
f...(a)...(b)(c,d) == f(a,b,c,d) # etc etc

Good intuition for using splat with function chaining, but too complex for my taste.
It makes multiple applications at one point of the chain.
In function chaining, you make one application step by step along the chain.

And @StefanKarpinski is right: you don't know when to stop applying functions to themselves and finally apply them to a more scalar item.

--(clipped)--

Sorry, that was rather pointless and unreadable.
See my second message below for a clearer explanation (I hope).

Given how functional Julia is already, I quite like @saolof's idea of a function-curry operator. I don't really understand where the semantic objections are coming from, as it seems like this has a very obvious interpretation.

You can even prototype it from the comfort of your own repl:

ctranspose(f) = (a...) -> (b...) -> f(a..., b...)

map'(+)(1:10)

map'(+)'(1:10, 11:20)(21:30)

(+)'(1,2,3)(4,5)

1:10 |> map'(x->x^2) |> filter'(iseven)

Has kind of a nice feel to it, I think.

Edit: Feels like this could also be the path to generalising this more. If we can write map∘(+, 1:10) then we can write map∘(_, 1:10) to place the curried argument first, and the curry operator determines the scope of the lambda, solving the biggest problem for such general currying.

Eh, that's clever, @MikeInnes.

I love how Julia's extreme extensibility shows off here too. The unifying solution to a very wide range of requirements for function chaining turns out to be abusing ctranspose...

(clarification: I'm getting 1:10 |> map'(x->x^2) |> filter'(iseven) with this proposal, so I'm 💯% for it!)

To be clear, I don't think we should actually abuse the adjoint operator for this, but it's a good proof of concept that we can have a concise function currying notation.
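The same proof of concept works with an ordinary named helper instead of the adjoint operator (a sketch; `curry` is a hypothetical name, not a Base function):

```julia
# Two-stage application: curry(f)(a...) returns a function awaiting the
# remaining arguments b..., then calls f(a..., b...).
curry(f) = (a...) -> (b...) -> f(a..., b...)

1:10 |> curry(map)(x -> x^2) |> curry(filter)(iseven)   # == [4, 16, 36, 64, 100]
```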

Maybe we should introduce a new unicode operator? http://www.fileformat.info/info/unicode/char/1f35b/index.htm

(Sorry...)

I feel like _'s are still a much more flexible way to make lambdas

@bramtayl I think the idea in MikeInnes's edit to his post is that the two can coexist — standalone underscores as in @stevengj's pull request would work, standalone currying as in Mike's idea above would work, and combining the two would also work, allowing you to use the currying operator to delimit the scope of _s inside it.

ah got it

That makes it not too different from LazyCall.jl

On a more serious note:

To be clear, I don't think we should actually abuse the adjoint operator for this

Probably a sound choice. However, I would like to voice my hopes that if such a solution is implemented, it is given an operator which is easy to type. The ability to do something like 1:10 |> map'(x->x^2) is significantly less useful if whatever character replaces ' requires me to look it up in a unicode table (or use an editor which supports LaTeX-expansions).

Rather than abusing the adjoint operator, we could reuse the splat one:

  • in a (linear) piping context
  • inside, in a function call
  • with the splat written before the argument rather than after

so that

  • the splat can stand for a missing iterator argument.

A kind of higher-order splat (with an anacrusis, for any musicians out there).
Hopefully it would not shake up the language too much.

EXAMPLE

1:10
    |> map(...x->x^2)
    |> filter(...iseven)

EXAMPLE 2

genpie = (r, a=2pi, n=12) ->
  (0:n-1) |>
      map(...i -> a*i/n) |>
      map(...t -> [r*cos(t), r*sin(t)]) 

could stand for

elmap = f -> (s -> map(f,s))

genpie = (r, a=2pi, n=12) ->
  (0:n-1) |>
      elmap(i -> a*i/n) |>
      elmap(t -> [r*cos(t), r*sin(t)]) 
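Unlike the proposed map(...f) syntax, the elmap desugaring is already valid Julia; a quick sanity check (definitions restated so the sketch is self-contained):

```julia
# elmap(f) returns a one-argument function mapping f over its input.
elmap = f -> (s -> map(f, s))

# n points spread over an arc of angle a on a circle of radius r.
genpie(r, a=2pi, n=12) =
    (0:n-1) |>
        elmap(i -> a*i/n) |>
        elmap(t -> [r*cos(t), r*sin(t)])

pts = genpie(1.0)
length(pts)   # 12 points around the circle
pts[1]        # the first point is (r, 0)
```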

Not sure if this belongs here, since the discussion has evolved to more advanced/flexible chaining and syntax... but back to the opening post, function chaining with dot syntax seems possible right now, with a little extra setup. The syntax is just a consequence of having dot syntax for structs along with first-class functions/closures.

using Printf  # for @printf on Julia ≥ 0.7

mutable struct T
    move
    scale
    display
    x
    y
end

# Build the "methods" as closures over each particular instance, so that
# separate instances don't share state through a global variable.
function newT(x, y)
    t = T(nothing, nothing, nothing, x, y)
    t.move = (x, y) -> (t.x = x; t.y = y; t)
    t.scale = c -> (t.x *= c; t.y *= c; t)
    t.display = () -> @printf("(%f,%f)\n", t.x, t.y)
    return t
end


julia> t = newT(0, 0);

julia> t.move(1,2).scale(3).display()
(3.000000,6.000000)

The syntax seems very similar to conventional OOP, with a quirk of "class methods" being mutable. Not sure what the performance implications are.

@ivanctong What you've described is something more akin to a fluent interface than function chaining.

That said, solving the issue of function chaining more generally would have the added benefit of also being usable for fluent interfaces. While it is certainly possible to make something like a fluent interface using struct members in Julia currently, it strikes me as very much going against the spirit and design aesthetic of Julia.

The way elixir does it where the pipe operator always passes in the left-hand side as the first argument and allows extra arguments afterward, has been pretty useful, I would love to see something like "elixir" |> String.ends_with?("ixir") as a first class citizen in Julia.
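With today's |> semantics that example needs a small closure-returning wrapper (a sketch; `ends_with` is a hypothetical helper around Base's `endswith`, not Elixir-style first-argument piping):

```julia
# Curried wrapper so the string can be piped in as the first argument.
ends_with(suffix) = s -> endswith(s, suffix)

"elixir" |> ends_with("ixir")   # true
```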

Other languages define this as Uniform Function Call Syntax.
This feature offers several advantages (see Wikipedia); it would be nice if Julia supported it.

So is there a fluent interface to Julia at this point?

Please post questions to the Julia discourse discussion forum.

In a fit of hacking (and questionable judgement!?) I've created another possible solution to the tightness of binding of function placeholders:

https://github.com/c42f/MagicUnderscores.jl

As noted over at https://github.com/JuliaLang/julia/pull/24990, this is based on the observation that one often wants certain slots of a given function to bind an _ placeholder expression tightly, and others loosely. MagicUnderscores makes this extensible for any user defined function (very much in the spirit of the broadcast machinery). Thus we can have such things as

julia> @_ [1,2,3,4] |> filter(_>2, _)
2-element Array{Int64,1}:
 3
 4

julia> @_ [1,2,3,4] |> filter(_>2, _) |> length
2

"just work". (With the @_ obviously going away if it's possible to make this a general solution.)

Some variation of @MikeInnes's suggestion would seem adequate for my needs (usually long chains of filter, map, reduce, enumerate, zip, etc. using do syntax).

c(f) = (a...) -> (b...) -> f(a..., b...)

1:10 |> c(map)() do x
    x^2
end |> c(filter)() do x
    x > 50
end

This works, although I can't get ' to work anymore. It is slightly shorter than:

1:10 |> x -> map(x) do x
    x^2
end |> x -> filter(x) do x
    x > 50
end

Also I guess one could just do

cmap = c(map)
cfilter = c(filter)
cetc = c(etc)
...

1:10 |> cmap() do x
    x^2
end |> cfilter() do x
    x > 50
end |> cetc() do ...

As of 1.0 you'll need to overload adjoint instead of ctranspose. You can also do:

julia> Base.getindex(f::Function, x...) = (y...) -> f(x..., y...)

julia> 1:10 |> map[x -> x^2] |> filter[x -> x>50]
3-element Array{Int64,1}:
  64
  81
 100

If we could overload apply_type then we could get map{x -> x^2} :)

@MikeInnes I just stole that

A late and slightly frivolous contribution -- how about piping data to any location in the argument list using a combination of left and right curry operators:

VERSION==v"0.6.2"
import Base: ctranspose, transpose  
ctranspose(f::Function) = (a...) -> ((b...) -> f(a..., b...))  
 transpose(f::Function) = (a...) -> ((b...) -> f(b..., a...))

"little" |> (*)'''("Mary ")("had ")("a ") |> (*).'(" lamb")

Clojure has some nice threading macros. Do we have those in the Julia ecosystem somewhere?

Clojure has some nice threading macros. Do we have those in the Julia ecosystem somewhere?

https://github.com/MikeInnes/Lazy.jl

Clojure has some nice threading macros. Do we have those in the Julia ecosystem somewhere?

we have at least 10 of them.
I posted a list further up in the thread.
https://github.com/JuliaLang/julia/issues/5571#issuecomment-205754539

Can you edit the list to have LightQuery instead of the other two packages of mine?

Since the |> operator comes from Elixir, why not take inspiration from one of the ways it has to create anonymous functions?
In Elixir you can use &expr to define a new anonymous function and &n to capture positional arguments (&1 is the first argument, &2 the second, etc.).
In Elixir there is some extra stuff to write (for example, you need a dot before the parentheses to call an anonymous function: &(&1 + 1).(10)).

But here's what it could look like in julia

&(&1 * 10)        # same as: v -> v * 10
&(&2 + 2*&5)      # same as: (_, x, _, _, y) -> x + 2*y
&map(sqrt, &1)    # same as: v -> map(sqrt, v)

So we can use the |> operator more nicely

1:9 |> &map(&1) do x
  x^2
end |> &filter(&1) do x
  x in 25:50
end

instead of

1:9 |> v -> map(v) do x
  x^2
end |> v -> filter(v) do x
  x in 25:50
end

note you can replace lines 2 and 3 with .|> &(&1^2) or .|> (v -> v^2)

The main difference with the propositions with _ placeholder is that here it is possible to use positional arguments, and the & in front of the expressions makes the scope of the placeholders obvious (to the reader and the compiler).

Note that I have used & in my examples, but using ?, _, $, or something else instead would not change the proposal.

Scala uses _ for the first argument, _ again for the second argument, and so on, which is concise, but you quickly run out of situations where it applies (you can't repeat or reorder arguments). It also doesn't have a prefix (& in the suggestion above) to disambiguate functions from expressions, and in practice that is another issue preventing its use. As a practitioner, you end up wrapping intended inline functions in extra parentheses and curly brackets, hoping they will be recognized.

So I'd say that the highest priority when introducing a syntax like this is that it be unambiguous.

But as for prefixes for arguments, $ has a tradition in the shell scripting world. It's always good to use familiar characters. If the |> is from Elixir, then that could be an argument to take & from Elixir, too, with the idea that users are already thinking in that mode. (Assuming there are a lot of former Elixir users out there...)

One thing that a syntax like this can probably never capture is creating a function that takes N arguments but uses fewer than N. The $1, $2, $3 in the body implies the existence of 3 arguments, but if you want to put this in a position where it will be called with 4 arguments (the last to be ignored), then there isn't a natural way to express it. (Other than predefining identity functions for every N and wrapping the expression with one of those.) This isn't relevant for the motivating case of putting it after a |>, which has only one argument, though.
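One way to express the "takes N arguments but uses fewer than N" case in current Julia is a vararg lambda, which simply ignores trailing arguments (a sketch of the desired semantics, not the proposed &-syntax):

```julia
# Roughly the proposed &(&2 + 2*&5), but tolerant of extra arguments:
# a vararg lambda indexes into args and ignores anything beyond position 5.
f = (args...) -> args[2] + 2 * args[5]

f(0, 1, 0, 0, 2)        # 1 + 2*2 == 5
f(0, 1, 0, 0, 2, 99)    # sixth argument silently ignored, still 5
```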

I extended the @MikeInnes trick of overloading getindex, using Colon as if functions were arrays:

struct LazyCall{F} <: Function
    func::F
    args::Tuple
    kw::Dict
end

Base.getindex(f::Function,args...;kw...) = LazyCall{typeof(f)}(f,args,kw)

function (lf::LazyCall)(vals...; kwvals...)

    # keywords are free
    kw = merge(lf.kw, kwvals)

    # indices of free variables
    x_ = findall(x->isa(x,Colon),lf.args)
    # indices of fixed variables
    x! = setdiff(1:length(lf.args),x_)

    # the calling order is aligned with the empty spots
    xs = vcat(zip(x_,vals)...,zip(x!,lf.args[x!])...)
    args = map(x->x[2],sort(xs;by=x->x[1]))

    # unused vals go to the end
    callit = lf.func(args...,vals[length(x_)+1:end]...; kw...)

    return callit
end

[1,2,3,4,1,1,5]|> replace![ : , 1=>10, 3=>300, count=2]|> filter[>(50)]  # == [300]

log[2](2) == log[:,2](2) == log[2][2]() == log[2,2]()  # == true

It is much slower than lambdas or threading macros, but I think it's super cool :p

To remind people commenting here, do have a look at relevant discussion at https://github.com/JuliaLang/julia/pull/24990.

Also, I'd encourage you to try out https://github.com/c42f/Underscores.jl which gives a function-chaining-friendly implementation of _ syntax for placeholders. @jpivarski based on your examples, you may find it fairly familiar and comfortable.
