This is a pretty common and useful idiom that I believe has two problems: `f(::typeof(+))` is meant to select for the call `f(+)`, but it only works if `+` is a singleton. E.g. `f(::typeof([1]))` is just a strange way to write `f(::Vector{Int})`, and will clearly not select only `f([1])`.
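(As a minimal illustration of the `[1]` case, with `f` as a placeholder name:)
f(::typeof([1])) = "matched"

f([1])        # "matched"
f([2, 3, 4])  # also "matched": typeof([1]) is Vector{Int}, so any Int vector is accepted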
Some syntax would fix this. The two ideas that come to mind are:
function f(===(+)) end
or
function f(:=(+)) end
An argument name could be specified too, e.g. `g(f === +) = ...`. We could also enable unary parsing of those operators, allowing `f(===dot) = ...`.
That would mean the method should only be called if the argument is `===` to the specified value. Of course, we currently only support that for singleton objects, allowing us to give an error if it's not possible to dispatch on the object you want, instead of silently defining a method for a wider type. In the future, we'd have the option of allowing dispatch on more kinds of values, but that of course is a separate issue.
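(For reference, the singleton case that already works today, sketched with a placeholder name `g`:)
g(::typeof(+)) = "exactly +"

g(+)  # "exactly +" -- + is the only value of type typeof(+), so this behaves like === dispatch
g(*)  # MethodError -- * is a different singleton with a different type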
Modified the title because I originally thought this was about having a shorthand for writing
tmp = <expr>
x::typeof(tmp)
E.g. one could have `x := <expr>` as a shorthand for that and it would fix the type of `x` locally.
On the original subject, one of the things that people have often wanted is the ability to write:
fib(0) = 1
fib(1) = 1
fib(n) = fib(n-1) + fib(n-2)
For that we could special-case literals as arguments. Of course that doesn't help this situation, since `+` is not a literal and `f(+)` is already a perfectly valid function signature. I see the idea, but `g(f === +)` and `g(=== +)` just look very awkward to me.
1. `f(::typeof(+)) = ...` is to make a method that accepts things of the same type as `+`. It falls directly out of how I understand `typeof` and dispatch to work.
2. Since you aren't proposing breaking `f(::typeof([1]))`, I don't think any new syntax could prevent people from writing `f(::Vector{Int})` in that way.
3. The two examples with `=` in them look like value dispatch to me.
Regarding 2, @JeffBezanson are you proposing adding the ability to do value dispatch (i.e. a limited form of predicate dispatch on being `===` to a given value)? If so, that seems like a big change.
> Regarding 2, @JeffBezanson are you proposing adding the ability to do value dispatch (i.e. a limited form of predicate dispatch on being `===` to a given value)? If so, that seems like a big change.
I wrote:
> In the future, we'd have the option of allowing dispatch on more kinds of values, but that of course is a separate issue.
So I'm not advocating adding it now, just pointing out that it would be a compatible future extension.
> The two examples with `=` in them look like value dispatch to me.
Yes, it is value dispatch.
> 2\. Since you aren't proposing breaking `f(::typeof([1]))`,
Correct; `f(::typeof(x))` still has a useful meaning that you might want and I wouldn't break it. This would just add the option to clarify that you're expecting to intercept a particular value.
random idea: discarding the rhs to mean `typeof(rhs)`:
whoami(+::_) = "I am plus"
> discarding the rhs to mean typeof(rhs)

So the rule would be `f(g::_) = ...` means `f(_::typeof(g)) = ...`?
How about `f($+) = ...`? The idea is that "argument `$x` must be the singleton value of `x` at definition time." The syntax `f(=x)` is available...
@StefanKarpinski: what does `f(+)` mean as a function signature? Same as `f(x)`, except the parameter name is `+`, not `x`?
I dislike the idea of adding new syntax just for working around the (moderate) ugliness of `f(::typeof(+))`. Julia already has two separate mechanisms for limited value dispatch (the other being `f(::Val{+})`, which I was surprised to find also works with non-bits types).
If new syntax is introduced, I would argue that it should enable proper value dispatch. I like @tkf's idea of using `$` for this, though maybe it could be extended upon to be more generally useful:
f($5) = "five" # specialize on literal value
const x = 6
f($(x)) = "six" # specialize on value of variable
f($(+)) = "plus" # equivalent to f(::typeof(+))
f($x) = x # always dispatch on value of x, equivalent to f(::Val{x}) = x, but nicer
Though admittedly I have no idea whether this syntax is still available :smile:
Using prefix `$` for anything other than unquote makes `quote` very hard to use.
The `$` is already pretty heavily overloaded with three different splicing behaviors (string, command, expression), so overloading it further is probably not ideal. Note that defining `f(::Val{x})` doesn't allow you to write `f(x)` and call that method, you have to wrap it in `Val`:
julia> f(::Val{3}) = "got it"
f (generic function with 1 method)
julia> f(3)
ERROR: MethodError: no method matching f(::Int64)
Closest candidates are:
f(::Val{3}) at REPL[2]:1
Stacktrace:
[1] top-level scope at REPL[3]:1
julia> f(Val(3))
"got it"
Actual value dispatch, if it were implemented, would allow you to write something like this:
julia> f(===3) = "got it"
f (generic function with 1 method)
julia> f(3)
"got it"
Let's consider the concept of more general value dispatch (and as we'll see, this leads us somewhat inevitably to predicate dispatch) more seriously, to see where this road might lead us, which reflects back on a potential design. The nice thing about using `===` is that it means what it says:
f(x === 3) = "got it"
means "if `x === 3` then call this method". Would we ever want to generalize that? I'm not sure, but if we did, we might want to write something like this:
fib(n ∈ (0, 1)) = 1
fib(n) = fib(n-1) + fib(n-2)
But what if you also want to constrain the type? Then maybe you can write this:
fib(n::Int ∈ (0, 1)) = 1
fib(n::Int) = fib(n-1) + fib(n-2)
Does that `∈` check do `===` or `==` like `∈` normally does? If we introduce value dispatch, it's not going to take long before people are unhappy about the fact that it only supports `===` and will want `==` "value dispatch" too, at which point you are actually just doing general predicate dispatch. Once you start down the predicate dispatch road, you need to mix it with type annotations as well, which starts to get ugly quite quickly. For example, sticking with the `fib` example, this is beautiful:
fib(0) = 1
fib(1) = 1
fib(n) = fib(n-1) + fib(n-2)
However, if we were to consider `fib(<literal>) = ...` syntax for the originally proposed `fib(n === <literal>) = ...` then this would only work for `Int` arguments, not any other kind of integer value, which is very unjulian. Note that this `===` versus `==` consideration doesn't come up for value dispatch on functions because there's no meaningful `==` for functions, only `===`, to which `==` falls back. In the `fib` case it's tempting to write this as:
fib(0::Integer) = 1
fib(1::Integer) = 1
fib(n::Integer) = fib(n-1) + fib(n-2)
However, at that point, it's either shorthand for predicate dispatch on `==` or the type annotations are useless.
If one were going to consider predicates, putting them in the argument position along with potential type information is inevitably going to get a bit crowded. But we also have the `where` clause, which could read quite naturally for any number of predicates:
fib(n::Integer) where {n == 0} = 1
fib(n::Integer) where {n == 1} = 1
fib(n::Integer) = fib(n-1) + fib(n-2)
The `fib(0::Integer) = 1` form could even potentially be a shorthand for this. You could potentially combine the first two into a single method with a more complex predicate:
fib(n::Integer) where {n ∈ (0, 1)} = 1
fib(n::Integer) = fib(n-1) + fib(n-2)
There's one very significant problem with the whole predicate thing right off the bat: you need to check the predicates in some order, and unless the predicates are mutually exclusive, that order matters. This would be a deviation from the current language design where (except for one issue that we'd like to fix in 2.0) method definition order does not matter. Of course, we can just say that methods are checked in order of (specificity, reverse definition) and the first one that matches is called. In other words, sort methods by specificity first, then for methods with the same specificity by reverse definition order, so most recent first, and call the first one that matches. Why reverse order of definition? Because if you define a newer method that completely occludes an older method with the same specificity, the compiler might not be able to prove that, but you still want the newer one to replace the older one. It's still hanging around in the method table, but it will never be called.
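(A tiny illustration of why order matters for overlapping predicates, using plain functions and hypothetical names rather than any proposed dispatch syntax:)
iseven_pred(n) = n % 2 == 0
islarge_pred(n) = n > 10

# The same two overlapping "methods", checked in opposite orders:
pick_a(n) = iseven_pred(n) ? "even" : islarge_pred(n) ? "large" : "other"
pick_b(n) = islarge_pred(n) ? "large" : iseven_pred(n) ? "even" : "other"

pick_a(12), pick_b(12)  # ("even", "large") -- 12 satisfies both predicates, so check order decides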
Anyway, having gamed out a bit more of that, I think we should not add any form of value dispatch.
Pure `===` value dispatch is going to be annoying for any type where `===` is not the only sensible way to compare values, so introducing `===` value dispatch is only going to lead to people wanting to do `==` "value" dispatch, which is actually just general predicate dispatch because `==` is completely user-defined and arbitrary.
If you do allow predicate dispatch, there is, imo, a pretty good syntax for it using the `where` clause, but there are other bigger problems:
- The compiler cannot, in general, reason about arbitrary predicates, so it cannot prove which methods apply (unlike plain `===` value dispatch).
- The predicates would have to be pure for dispatch to make sense. How do we enforce that, `@pure`? Yikes.
- As a consequence of the compiler not being able to reason about predicates, you may, through interactive use, end up with uncallable methods littering your method tables, gumming up the works.

So in short, I feel like adding any form of value dispatch that isn't explicitly expressed in terms of types leads us down a dangerous path towards general predicate dispatch, which has major problems.
I don't buy the slippery slope argument. For example, in `Val{x}` only certain types of `x` are allowed, and people are used to that, and we haven't been expanding it. I also really don't think we need to worry about general predicate dispatch here. I totally agree with your assessment of it, I just think it's very easy to avoid adding the feature. The line is quite clear.
Defining things like `fib(0) = 1` is cute, but I gather that the general experience from languages that have it is that it has few practical applications.
I like the syntax idea `f(x) where {x === +}`. The key point is that we already support value dispatch for values that are singletons, we just don't have syntax that expresses that intent.
> that `==` is the sensible way to compare values

There is a way out of this difficulty: by lowering directly to something we already have (much as ComputedFieldTypes.jl already implements #18466, or how keyword args are syntax for doing `getfield` at the top of a function). The proposed `where` syntax looks reasonable to me for this. It's breaking to want both meanings (`==` is generally incompatible with `===`) of the same syntax.
The resulting function would have the aforementioned issue with order dependence to resolve ambiguities. But ignoring that for now, at some point we could turn this:
fib(n) where {n ∈ (0, 1)} = 1
fib(n) where {n = 2} = 2
fib(n) = fib(n-1) + fib(n-2)
end
into this:
function fib(n)
(n ∈ (0, 1)) && return 1
(n == 2) && return 2
#= else =# return fib(n-1) + fib(n-2)
end
(we could have a macro for simulating this, or perhaps Match.jl essentially already has a macro for this)
> I don't buy the slippery slope argument.

I normally don't buy slippery slope arguments, but this is a very specific, very slippery rock: i.e. that `===` for most kinds of values doesn't do what people want, whereas `==` does. I feel that adding `===` value dispatch will just make people want `==` value dispatch more and make it harder to explain why they can't have it.
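(The distinction in two lines, using an array value:)
[1] == [1]   # true  -- this is what people usually mean by "the value [1]"
[1] === [1]  # false -- two distinct mutable arrays are never egal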
> For example, in `Val{x}` only certain types of `x` are allowed, and people are used to that, and we haven't been expanding it. I also really don't think we need to worry about general predicate dispatch here.

This is very clearly in the type domain though, not in the value domain. It supports values that can be type parameters—no more, no less—which is simple to explain and easy to understand. Once you start expressing this in the value domain, I think that it will be much harder to understand why only value judgements that can be translated into the type domain are allowed.
> I like the syntax idea `f(x) where {x === +}`.

Well, that's good :). I think it has the benefit that the predicate occurs in a place where people are used to only type-oriented things going, which makes it a bit easier to explain why only `<:` and `===` and maybe `::` are allowed.
> But ignoring that for now, at some point we could turn this:

The idea of requiring putting all the methods in a single block is interesting.
I'm trying to understand the limitations of the existing value type system. The manual says that the type parameter of `Val` can be any bits type. However, it seems that functions and operators can also make value types:
f(::Val{sin}) = 1
f(::Val{+}) = 2
f(::typeof(sin)) = 3
f(::typeof(+)) = 4
@assert f(Val{sin}()) == 1
@assert f(sin) == 3
...
All of these work as expected, which surprises me, since functions are presumably not bits types. Are they being interpreted as symbols when used as a type parameter?
Also, if I try to pass a lambda as type parameter, the method is created just fine, but cannot be called:
f(::Val{x->2x}) = 5
f(::typeof(x->2x)) = 6
f(Val{x->2x}()) # error
f(x->2x) # error
Could someone explain these behaviors? And also whether there is any difference between dispatching on `::Val{+}` vs `::typeof(+)` (other than less verbose notation)?
Typical functions all have their own types. `sin` is the only instance of `typeof(sin)`; it's a singleton. The value contains no data (`sizeof(sin) == 0`) and is immutable, so it's a bits type.
Each anonymous function has a type generated for it:
julia> typeof(x->x), typeof(x->x)
(getfield(Main, Symbol("##3#5")), getfield(Main, Symbol("##4#6")))
so they're different.
`f(::Val{+})` will only match the call `f(Val(+))`. `f(::typeof(+))` matches `f(+)`. There is nothing special about `Val`; it's just a struct with one type parameter and no fields, used by convention when you want to pass some information at the type level.
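(For reference, the definition in Base is essentially just:)
struct Val{x} end
Val(x) = Val{x}()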
Thanks for the explanation!
Also note that `f(::Val{x})` will match any call `f(Val(y))` where `y === x`. `f(::typeof(x))` is totally different, since the value of `x` is not relevant, only its type. That's a big part of why I find that idiom confusing.
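(A quick sketch of that difference with an integer value; `h` is a placeholder name:)
h(::Val{3}) = "the value 3"
h(::typeof(3)) = "any Int"   # typeof(3) is Int, so the value 3 plays no role in this method

h(Val(3))  # "the value 3"
h(3)       # "any Int"
h(4)       # "any Int"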
What do you mean by this? I'm getting
f(::typeof(+)) = 1
g = +
@assert g === +
@assert f(g) == 1
Same as with `Val{+}`.
Edit: Never mind. That's just another instance of functions being singletons.
`f(x) where {x === +}` looks nice, and I understand that `fib(n::Integer) where {n == 0} = 1` wouldn't be natural to support. But then it'd make me expect something like
f(x::StaticVector{n}, y::StaticVector{m}) where {n === 2m} = ...
to work. But I guess that's essentially #18466.
> Typical functions all have their own types. sin is the only instance of typeof(sin); it's a singleton. The value contains no data (sizeof(sin) == 0) and is immutable, so it's a bits type.
Interesting clarification @ https://github.com/JuliaLang/julia/issues/32541#issuecomment-510179255
In fact `Val` is not a bits type by itself:
@test isbitstype(Val{T} where T) == false
But it becomes one when its parameter is specified (even with a non-bits parameter like a symbol):
@test sizeof(Val{:a}) == 0 # has no data
@test Val{:a}() == Val{:a}() # is singleton
@test Base.unwrap_unionall(Val).mutable == false # is immutable
@test isbitstype(Val{:a})
@test isbits(:a) == false # param is not bits
So the compiler can statically optimize dispatch. That had puzzled me a while ago.
`Val` is an abstract type so it can't be a bits type.
I was guessing it was due to something like that, but I didn't have enough time to keep googling the docs. Thanks!
I don't want to hijack this topic, but testing `Val` with the predicates of reflection.jl on v1.3.0-rc4:
using Printf
foreach([
Base.isimmutable,
Base.isstructtype,
Base.isprimitivetype,
Base.isbitstype,
Base.isbits,
Base.isdispatchtuple,
Base.iskindtype,
Base.isconcretedispatch,
Base.isdispatchelem,
Base.isconcretetype,
Base.isabstracttype,
Base.issingletontype,
]) do f
@printf("%-30s => %c\n", "$f(Val)", (f(Val) ? 'x' : '-'))
end
produces
isimmutable(Val) => x
isstructtype(Val) => x
isprimitivetype(Val) => -
isbitstype(Val) => -
isbits(Val) => -
isdispatchtuple(Val) => -
iskindtype(Val) => -
isconcretedispatch(Val) => -
isdispatchelem(Val) => -
isconcretetype(Val) => -
isabstracttype(Val) => -
issingletontype(Val) => -
So I don't get why `Val` should be considered an abstract type. I perceive it as a UnionAll (parametric struct) without fields (singleton, no data) that becomes a concrete bits type, since it's immutable too.
--
Here are the results for `Val{:a}`:
foreach([
Base.isimmutable,
Base.isstructtype,
Base.isprimitivetype,
Base.isbitstype,
Base.isbits,
Base.isdispatchtuple,
Base.iskindtype,
Base.isconcretedispatch,
Base.isdispatchelem,
Base.isconcretetype,
Base.isabstracttype,
Base.issingletontype,
]) do f
@printf("%-30s => %c\n", "$f(Val{:a})", (f(Val{:a}) ? 'x' : '-'))
end
isimmutable(Val{:a}) => -
isstructtype(Val{:a}) => x
isprimitivetype(Val{:a}) => -
isbitstype(Val{:a}) => x
isbits(Val{:a}) => -
isdispatchtuple(Val{:a}) => -
iskindtype(Val{:a}) => -
isconcretedispatch(Val{:a}) => x
isdispatchelem(Val{:a}) => x
isconcretetype(Val{:a}) => x
isabstracttype(Val{:a}) => -
issingletontype(Val{:a}) => x
> I don't want to hijack this topic,
Please continue at the discourse thread you've already started.
> So I don't get why Val should be considered an abstract type.
The word "abstract type" is ambiguous, much like bitstype did before one of the meanings was renamed. That's why in the other thread I said it's not a concrete type instead.
It's not really ambiguous: a type `T` is concrete if there can exist a value `x` such that `typeof(x) == T`; an abstract type is one that is not concrete. `Val` without a parameter filled in is abstract.
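(Checking that definition directly:)
isconcretetype(Val)      # false -- no value x has typeof(x) == Val
isconcretetype(Val{:a})  # true  -- typeof(Val(:a)) == Val{:a}
isabstracttype(Val)      # also false: it only reflects `abstract type` declarations; Val is a UnionAll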
Extending the single block idea, how about these:
(For a more complex example see further down)
#here the advantage is that we have a guaranteed common signature
piecewise fib(n::T)::T where {T<:Integer}
fib(0) = 1
fib(1) = 1
fib(n) = fib(n-1) + fib(n-2)
end
By scanning for literals (or even unbound values) this could be "easily" transformed into:
function fib(n::T)::T where {T<:Integer}
n===0 && return 1
n===1 && return 1
n===n && return fib(n-1) + fib(n-2) # n===n cannot fail
end
The bad thing, though, is that with more arguments the order of the statements may matter, unless we do full ambiguity analysis (which might not be possible in the general case, I guess).
(`earlyout` would probably also be a keyword that reflects the behavior well.)
Optimization potential: eventually we could opt in to literal optimizations (like for literal pow) by adding
#inline all
var"literal_fib"(::typeof(fib), ::Val{0}) = 1
var"literal_fib"(::typeof(fib), ::Val{1}) = 1
#catch-all, similar to n===n
var"literal_fib"(f::typeof(fib), n) = f(valextractor(n))
where we need those
@inline valextractor(x::Val{v}) where v = v
@inline valextractor(x) = x
and doing the corresponding replacements `fib(5) -> fib(Val(5))` while `fib(n) -> fib(n)`.
Of course that would add some special treatment for numbers/literals, so whether this is wanted is another question. Though if there were such a thing, then `literal_pow` wouldn't be a special case anymore (since everyone can opt in) and the way it works would be more common. (-> Less https://github.com/JuliaLang/julia/issues/28685)
If we wanted the option for such literal optimization, we should add some flags to specify which branches go into literal-only optimization, or both. I'll use `const` here as the marker for 'literal-only'.
By that proposal, the `^(::T, ::T) where T<:Integer` function could be rewritten as:
piecewise ^(x::T, p::T)::T where {T<:Integer}
const ^(x, -1) = inv(x) #marked for literal-only
const ^(x, 0) = one(x)
const ^(x, 1) = copy(x)
const ^(x, 2) = x*x
^(x, p) = power_by_squaring(x,p)
end
which would turn into:
function ^(x::T, p::T)::T where {T<:Integer}
#(x,p)===(x, -1) && inv(x) #as we marked this branch it won't make it into the runtime variant
#(x,p)===(x, 0) && return one(x)
#(x,p)===(x, 1) && return copy(x)
#(x,p)===(x, 2) && return x*x
(x,p)===(x, p) && return power_by_squaring(x,p) #this === cannot fail
end
Of course, if we want more runtime optimizations we can remove the `const` in front of some branches.
and once opting-in, we get:
#inline all
var"literal_^"(::typeof(^), x, ::Val{-1}) where {x,p} = inv(x)
var"literal_^"(::typeof(^), x, ::Val{0}) where {x,p} = one(x)
var"literal_^"(::typeof(^), x, ::Val{1}) where {x,p} = copy(x)
var"literal_^"(::typeof(^), x, ::Val{2}) where {x,p} = x*x
#catch-all, till here are branches where === can fail
var"literal_^"(f::typeof(^), x, p) = f(valextractor.(x,p)...)
OT:
> It's not really ambiguous: a type `T` is concrete if there can exist a value `x` such that `typeof(x) == T`; an abstract type is one that is not concrete. `Val` without a parameter filled in is abstract.
As you state clearly how abstract should be understood, where is my mistake when I think that these don't match: you say `Val` is abstract, but `isabstracttype(Val) != true`.
More generally, as I understand you, every type that is not concrete is abstract, i.e. they are complementary? And what's the relation between UnionAll and "abstractness"?
The only understanding I can come up with so far is:
- isabstracttype: only exactly what is declared abstract when defining the type
- isconcretetype: every type that is instantiable (even if that would mean that some values will be "boxed", -> 5::Integer)
The original problem would be solved like this:
piecewise f(binaryop, a, b)
f(+, a, b) = a+b
f(-, a, b) = a-b
f(mapreduce, a, b) = mapreduce(a,+,b) #sure, this is arbitrary
f(binaryop, a, b) = binaryop(a,b)
end
Though, I now see, we need some way to ensure that the case where nothing matched is handled. *see Edit
For a single branch the above solution feels like overkill. Maybe allow direct usage of operators after the `piecewise` keyword, like:
piecewise f(+, a, b)
return add(a,b)
end
This wouldn't be problematic as we can distinguish depending on the structure of the body (if there are "top-level" definitions). If we have such a single-line case, then we would have two possible solutions:
- Check the pattern values with `Base.issingletontype` and replace them with arguments of the narrow singleton type. *see Edit
- Dispatch on the value's type plus a runtime check, though then `piecewise` definitions with different constant Int values will collide.
function f(_1_::typeof(+), a, b)
    _1_ === + && return a+b # if we use the first solution we may even omit the check, as it is a singleton
    throw(...)
end
EDIT:
Maybe one could think about restricting the signature types to the Union of the branch arguments. Then a default branch / the name of the argument in the signature would mean "use the type of the signature". That way the single-branch piecewise could be transformed into a general piecewise automatically and straightforwardly.