Julia: v0.5 "cannot add methods to an abstract type" when overriding call

Created on 3 Feb 2016 · 92 comments · Source: JuliaLang/julia

I'm updating ApproxFun and this caught me by surprise:

julia> abstract Foo
julia> (f::Foo)(x) = x
ERROR: cannot add methods to an abstract type
 in eval(::Module, ::Any) at ./boot.jl:267

Is this intentional? Is there a way to get around this other than defining it for each subtype:

julia> immutable CFoo <: Foo end
julia> (f::CFoo)(x)=x
Label: regression

Most helpful comment

Personally I think it's a mistake to wait until 1.x to fix this bug: while it's not necessarily a crucial bug to fix, it's a confusing restriction that, when hit, makes Julia feel "weird".

All 92 comments

Yes, unfortunately this is the only capability I could not figure out how to cleanly preserve in #13412. How bad is this for you?

Not bad actually (only 3 concrete subtypes), but thought I’d double check before working around it.

We are hitting this. What is excluded here?

Previously, to define a function for all types that are subtypes of a given abstract type, this would be precisely the syntax used. That seems like a rather fundamental part of Julia.

Or is it only when this syntax is used in combination with an object call overload that is excluded?

Losing that would be rather a blow to us. As you may know, we cannot store all the information we need to define mathematical rings in the type, so we create objects that stand in as types, e.g.

R, x = PolynomialRing(ZZ, "x")

where R is now an object that contains information about the ring of all polynomials in ZZ[x].

However, the mathematical abstraction of a polynomial ring is implemented via a bunch of Julia types, all of which are subtypes of some abstract type.

So it's natural to define methods for all of them at once by overloading the call syntax, e.g.

R() returns 0 in the given ring, or R(1) coerces the integer 1 into the ring.
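The pattern, stripped down, looks something like this (a sketch with simplified names, not our actual code; zero_element and coerce are placeholder helpers, written in the 0.4 call syntax):

import Base: call

abstract Ring                        # all parent objects (ZZ, polynomial rings, ...)
abstract RingElem                    # all ring elements

# one definition covers every parent type at once
call(R::Ring) = zero_element(R)             # R() gives the zero of the ring
call(R::Ring, a::Integer) = coerce(R, a)    # R(1) coerces the integer 1 into the ring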

Before panicking I'd like to understand the change a bit better, and what workarounds there are. But this seems to be a bit of a worrisome and unexpected change.

I thought call just changed syntax: https://github.com/JuliaLang/julia/commit/0778a89ddfdbb3098cc42b15e86f4cc4c610f449#diff-5cb043c25dee87c0787a9e1bbbf935baL17

I think I understand what is happening now. Rather than someone replying in detail, perhaps I can ask if my understanding is correct. I might also take the opportunity to discuss a very related issue which we are hitting, in case it is useful to know about (see further below).

I think call is changing syntax and rather than only being defined for types, it is still defined for objects. The comment in the PR about deprecating call and putting method tables inside types instead refers to _implementing_ that which is formerly known as "call" by putting method tables inside types. This doesn't mean call will be restricted to types, but that objects of a given type will have their call syntax implemented _by_ putting method tables inside the corresponding types. The advantage will be faster anonymous methods. Is this correct?

The only thing disappearing is the former ability to define a method for a given class of types, i.e. all types which are subtypes of a given abstract type. Is this because "call" is now a method and not a generic function?

One place where this will definitely affect us badly is when we want to write a single method that overloads call for all elements of all polynomial rings. E.g. the user may create at runtime R = ZZ[x], S = R[y], T = S[z]. The user may then create polynomials f, g, h each in one of these rings R, S or T. Each of the polynomials f, g, h necessarily has a different type, though all the types belong to the abstract type PolyElem{T} (where T is the type of the coefficients).

We use call to enable things like f(123) to give the value of the polynomial at 123, or f(M) to substitute a matrix M into the polynomial f, etc. It has to be a generic function because the user could in theory substitute absolutely anything into f, g or h.

It's therefore a very serious breakage that we can't overload call for all of these at once using call(R::PolyElem{T}, a).

I would also be quite concerned if a similar thing happened for overloading the array syntax []. We currently write generic functions to overload [] for our objects R, S, T above so that for example U = R["x"] is possible. This creates a polynomial ring over R with variable "x". We use it similarly for matrices.
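The [] overload itself is just an ordinary generic function defined on an abstract type, roughly along these lines (simplified, reusing the sketched Ring type from above; not the exact Nemo code):

import Base: getindex

getindex(R::Ring, s::AbstractString) = PolynomialRing(R, s)   # enables U = R["x"]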

On a related note, I'd like to take the opportunity to discuss an issue which we hit which we have been surprised has not hit anyone else yet. We accept there is likely nothing that can be done about this, but I want to mention it here in case it helps with future design decisions.

Consider the example of matrices over Z/nZ for multiprecision integer n. The initial thought is to have objects of type Z/nZ. As you know, putting n as a parameter of the type is not really possible, nor desirable (otherwise it would trigger recompilation again and again in multimodular algorithms that used many different values of n).

Therefore, the only option is to have the n in an object, not in the type. Let's call the type in question modint. So objects of type modint will somehow have the residue r associated with them, but also in some way the modulus n.

But now consider Julia's generic algorithms for matrices over a given type. Many of these algorithms require one to be able to create a zero object to insert into the matrix. But the zero function in Julia only takes the _type_ modint as a parameter.

The problem is, the type modint contains no information about n, so it is in fact not able to create the zero object of this type, since it cannot set n.

I'm actually really surprised no one has hit this issue anywhere else. For this reason it would be really useful if functions like zero and one that are used so pervasively in Julia matrix algorithms could take more information from somewhere.

Note that the n really can't be stored in the type itself, since then things would need recompiling for every n.

As things are, we are not able to use any of the Julia generic matrix functionality in Nemo.

The only solution that I can see is to explicitly support using objects in the place of types in Julia, so that things like zero(R) will be used by Julia when constructing arrays over "parents" [1] like R which aren't implemented as types but as objects, e.g. arrays over R = IntegersMod(7).
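In code, the shape of the problem is roughly this (a simplified sketch; IntegersMod and modint stand in for the real types):

import Base: zero

immutable IntegersMod          # "parent" object: carries the modulus n
    n::BigInt
end

immutable modint               # element: residue plus a reference to its parent
    r::BigInt
    parent::IntegersMod
end

zero(R::IntegersMod) = modint(big(0), R)   # easy: the parent knows n

# zero(modint) has no sensible definition: the type alone carries no modulus,
# yet zero(typeof(x)) is exactly what the generic matrix code asks for.

R = IntegersMod(big(7))
zero(R)                        # works: 0 mod 7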

I mention this only because it directly relates to the call syntax and calling of objects as though they were types.

In fact, in computer algebra, the distinction becomes blurred. Consider ideals A and B of a ring R. We can certainly perform arithmetic operations on A and B such as A*B. So in one sense they behave like objects or values. But we can also think of ideals as sets of elements in the same way that ZZ or a polynomial ring is. So they can also act as "parents". For example, the element 0 of an ideal could be obtained by zero(A) or A().

Of course we can define zero(A) and A() no problems. But this isn't the way the Julia generic matrix algorithms generate a zero object. They will use zero(typeof(A)) which is not sufficient to construct the zero object of "type" (actually parent) A.

As I say, I'm just mentioning it here since it seems like an opportune moment to do so. I'm not expecting anything, just hoping that it might be of use in future planning.

[1] "parents" is the name computer algebraists give to the objects that represent mathematical types such as polynomial rings or Z/nZ that can't be fully modeled using actual types in the programming language of choice.

Is there a workaround for this? I was so happy to see this happening in 0.4 and now its gone again. In Gtk.jl this would have allowed to remove a lot of macros that are currently used to create widgets. https://github.com/JuliaLang/Gtk.jl/issues/134

Gtk doesn't need this. In particular, the limitation is that you can't call an instance of a widget, but you can still define constructors for abstract subclasses.

Many things not yet done in Julia need to add methods to an abstract type.

If this capability were to become absent, fundamental expressiveness would be curtailed.
It is already hard enough to gather a multifaceted realization together as an abstraction, and to project from an intrinsically coherent concept, through an abstraction, into specific entities.

Moreover, it is a substantial boon to clear human communication about Julia code.

@JeffBezanson @wbhart @vtjnash
For example, it has been an essential strength of Julia that one could think and code:

The norm of a number that belongs to a division algebra is the square root of the product of that number with the conjugate of that number.

import Base: conj, *, sqrt

abstract DivisionAlgebraNumber <: Number
norm{T<:DivisionAlgebraNumber}(x::T) = sqrt( x * conj(x) )

and then define specific types, knowing norm(x) does 'just work'.

immutable QuaternionNum{T<:Real} <: DivisionAlgebraNumber
   r::T;   i::T;   j::T; k::T
end
conj{T<:Real}(x::QuaternionNum{T}) = ...
*{T<:Real}(x::QuaternionNum{T}, y::QuaternionNum{T}) = ...
sqrt{T<:Real}(x::QuaternionNum{T}) = ...
...

immutable OctonionNum{T<:Real} <: DivisionAlgebraNumber ...
immutable ComplexNum{T<:Real} <: DivisionAlgebraNumber ...
immutable RealNum{T<:Real} <: DivisionAlgebraNumber ...

@JeffreySarnoff There's no evidence ordinary generic functions like your norm function will stop working, is there?

Yes, definitions like that still work fine. The change only affects what used to be the first argument of call.

So does this mean that I would not run into this using abstract types and generics on them to code some category theory as an abstraction and then use that to define some basic categories as working types?

I suspect you would definitely hit it if you tried to define functors using (the replacement of) the call syntax. This could happen if your functors were treated as objects that had properties, which as you know happens. This is actually a very good example.

One other area where this functionality is really needed is homological algebra. I'd be surprised if it wasn't also extremely valuable to homotopy theory and to group actions in representation theory.

I've changed the title of the issue as it may have been confusing people

Just to clarify, my concerns are not theoretical, but practical. Our code broke because of the syntax change, but some of the call overloads we had are no longer possible at all. We have lost actual features because of this change.

Can you post the definitions that no longer work?

The first one we noticed was:

call{T <: RingElem}(f::PolyElem{T}, a) = subst(f, a)

This is for evaluating polynomials at arbitrary things, e.g. elements of other rings or at matrices, etc.

Note PolyElem is an abstract type acting as a type class for all polynomial types in Nemo.

There are various specialisations of this where "a" is a specific type. I'll omit these since they are of the same kind.

(There will be similar things for PowerSeriesElem instead of PolyElem.)

Here are some specific examples from Hecke.jl:

call(O::GenNfOrd, a::nf_elem, check::Bool = true)

This function is for coercing a number field element into the ambient number field of an order.

There's about a dozen similar examples for coercing various things into such. This is done because there are multiple different number field order types.

There is also:

call(M::Map, a::Any)

This is for applying any kind of map in Hecke to anything. Map is an abstract type to which many types belong. Our map objects contain data about the maps.

The guy writing Singular.jl says he has a problem with this change too, but I don't have explicit examples from his code right now. He basically said he used this generic call thing to reduce code duplication. He's modelling a system written in C that uses dynamic types, and instead of duplicating everything for each individual type, he has a generic implementation that he specialises per type.

My biggest concern actually is the project I was about to work on which was an object model for group theory and commutative algebra. I produced a very small prototype some months back which relies heavily on call. I just had a look now and miraculously the prototype doesn't yet make use of abstract types for the first argument. But there's no doubt it will. And this is very worrisome as I just convinced some colleagues to use Julia for this precisely on this basis. I'm visiting them next week as it happens to thrash out the details.

Consider homomorphisms between many different types of groups. The homomorphisms are modeled as objects that can be called. All the homomorphism types will belong to an abstract type, and a fundamental part of the mechanism that allows propagation of new knowledge along the homomorphisms requires generic overloading of call for all homomorphisms (similar to the Map example above).
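Schematically, the set-up would be something like this (names invented for illustration; apply_hom is a placeholder for the machinery that applies the map and propagates knowledge along it):

abstract Hom                           # all homomorphism types

type GrpHom <: Hom                     # one of many concrete homomorphism types
    data
end

# the single generic definition we rely on, now disallowed:
call(h::Hom, x) = apply_hom(h, x)

# the only alternative: repeat this for every concrete homomorphism type
call(h::GrpHom, x) = apply_hom(h, x)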

Hmm, so, what was it?

Sorry, I accidentally pressed enter whilst writing the post. If you look at the post on GitHub itself you will see I've filled in what I wanted to write now.

The post is empty.

Well I see it; it's the set of definitions I just posted above.


I guess as there is no equivalent of call(x::Integer, args...) there is also no equivalent of call{T <: Integer}(x::T, args...)?

I see that this can be overcome by defining call(x::T, args...) for every concrete subtype of T. But what if there are infinitely many of them, not all known at parse time? More concretely, I am thinking of a "recursive" type as follows:

type A{T <: Integer} <: Integer
  x::T
end

Then you can construct A{Int}, A{A{Int}}, and so on to arbitrary nesting depth, but with the new behavior of call I don't know how to overload objects of this type. Before the change I could just do

call{T <: Integer}(a::A{T}, args...) = ...

One solution now would be

call(a, args...) = ...

But this cannot be used as soon as there is another type with the same behavior.

Note that this is not an artificial problem: think of polynomials f whose coefficients are polynomials whose coefficients are polynomials whose coefficients are polynomials whose coefficients are of type Int. And now I would like to evaluate this polynomial using the syntax f(a).

I hope I have properly communicated my concerns.

To be clear: the only restriction here is with defining call for all objects that belong to an _abstract_ supertype with just one definition. You are still able to define call for non-concrete parametric types.

These are ok (I'll use the old 0.4 syntax for clarity):

abstract AbstractFoo
type Bar <: AbstractFoo end
type Baz{T} <: AbstractFoo end

call(::Bar,x) = 1
call{T}(::Baz{T},x) = 2
call{T<:Integer}(::Baz{T},x) = 3
call(::Type{AbstractFoo},x) = 4
call{T<:Union{Bar,Baz}}(::Type{T},x) = 5

These are the cases that are not supported:

call(::AbstractFoo, x)
call(::Union{Bar, Baz}, x)
call{T<:AbstractFoo}(::T, x)
call(a, x)

So your A{T} example is just fine, @thofma.

Thanks for the clarification @mbauman. Very helpful.

I guess one of my problems was that I could not find out how to do this with the new syntax. The following gives me a syntax error:

{T<:Integer}(::Baz{T},x) = 3

As a user it is quite unfortunate to have all the cool features of parametric types except for this one, particularly since it was possible in 0.4. I don't mind the change of syntax with new versions, but removing features from the language (without providing equivalent functionality) is hard to cope with.

It does feel a little backwards right now, but you can do it:

(::Baz{T}){T<:Integer}(x) = 4

The way that I see it, the fundamental problem here is that, given a large number of methods definitions, a data structure is needed to find the most specific match to dispatch a given function call.

Method tables stored per type of the first (implicit) argument are one implementation of such a data structure, but it's not the only possibility. (E.g. in PatternDispatch.jl I built a graph based on the partial order, but I'm not thinking of such major rewrites here. )

How bad would it be to visit the method tables of both a type and its supertypes when looking for the most specific match for dispatch?
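In pseudocode, the lookup would be something along these lines (lookup_in_mt and invoke_method are made-up placeholders for the existing per-type method table search and invocation):

function dispatch(f, args...)
    T = typeof(f)
    while true
        m = lookup_in_mt(T, map(typeof, args))   # search T's own method table
        m !== nothing && return invoke_method(m, f, args...)
        T === Any && break
        T = supertype(T)                         # fall back to the supertype's table
    end
    throw(MethodError(f, args))
end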

Yes, we could do that. It seemed kind of ugly to me but I might just have to deal with it.

A workaround is to define a macro for extending an abstract type that defines the necessary call:

abstract AbstractFoo
macro subFoo(F)
    return quote
        immutable $F <: AbstractFoo end
        (::$F)(x) = x
    end
end

@subFoo(Foo)

f=Foo()
f(5) # returns 5

Not sure how to include fields in the subtype, however.

Thank you to all who are devoting some of their focus on this.

This issue is relevant to some design effort underway. Is there a decision to continue supporting this use of abstract types?

@vtjnash sad to see this "won't fix" tag.
This bug affects my package, and I regret that call is not going to be a first-class function.
Yet I appreciate the massive overhaul of #13412. If the issue turns out to be a "won't fix" indeed, then we need to embrace it and document it.

I admit, I don't love losing this functionality. We could certainly try the approach here https://github.com/JuliaLang/julia/issues/14919#issuecomment-180863659

Agreed. I put the "won't fix" flag mostly as a sign that this may not be fixed soon

IMHO this is a blocker for 0.5. Else some packages will be forced to keep using the old call overloading syntax.

That's not what "won't fix" means.

toivoh's approach: to visit the method tables of both a type and its supertypes when looking for the most specific match for dispatch

Julia wakes up early to do the dance of dispatch resolution, letting some of us sleep in a bit before the dispatch-resolved methods are run, right? That holds for most users' code and also for the internal stuff (sufficiently advancing on Arthur C. Clarke's indistinguishability).

We may be onto more good by resolving dispatch through types' types:
a refinement of this mechanism likely contributes to Julia's expressive ease by bringing trait-like entrainings, stackable protocols, inheritable multimelds™, or other waves of hello.

I think I just encountered this issue in CurveFit.jl:

abstract LeastSquares

#define a bunch of concrete types <: LeastSquares

Base.call{T<:LeastSquares}(f::T, x) = apply_fit(f, x) #gates of hell

now produces

WARNING: deprecated syntax "call(x::T, ...)".
Use "(x::T)(...)" instead.
ERROR: function type in method definition is not a type
 in eval(::Module, ::Any) at ./boot.jl:225
 in macro expansion at ./REPL.jl:92 [inlined]
 in (::Base.REPL.##1#2{Base.REPL.REPLBackend})() at ./event.jl:46

TypeMap is pretty close to being able to handle this:

julia> typeof(+).name.mt |> length
20305 # there is only one method table
julia> methods(+) |> length
165

julia> abstract Foo
julia> type Bar <: Foo; end
julia> (::Foo)(x) = x
julia> Bar()("it's alive!")
"it's alive!"

which is fun, even though it still has a few serious performance bugs (e.g. core test takes 174.07s instead of 20s)

It would be cool if eventually this functionality was restored. In code with a use case similar to ApproxFun, I've used the simple workaround of adding a method to every subtype: that involved changing a single line of code into 20+ lines spread over 15 files in two packages. I would love to take those out again :-)

On the other hand, some new helpful warnings in 0.5 pointed to lots of redundant code so we still had a net gain. Deprecation warnings were quickly dealt with and this issue was the only one in moving to 0.5, so :thumbsup:

On second thought, never mind my previous comment: our use of the calling operator was not semantically consistent, and in the alternative we won't have this problem of adding functions to abstract types anymore. Even better.

I want to mention another serious issue that has arisen from removal of this functionality. I recently went through our package Nemo to document all our functionality. We now take the approach that as much as possible in Nemo is implemented generically (I am using this word in the sense of generic programming, not in the sense of generic function in Julia), for generic rings/fields/groups/modules, etc. and then overloaded by more specific methods where available.

Documenting the same functions over and over again for each individual ring was proving to be cumbersome and not useful to the user. So we decided to document the most generic implementation exactly once and say that it works for all types that belong to the abstract type for which the function is implemented (this fits in nicely with the Documenter.jl package we are using, and the Julia doc""" ... """ syntax).

The problem is, now there is no "generic" version of "call" for our abstract types. So there's no "generic" function to actually document. And moreover, we can't tell the user that this functionality is automatically available for them if they implement a type that belongs to one of our abstract types.

To work around this, we've basically had to add some fake documentation directly in our .md files for call (or its replacement), along with a longwinded explanation of why this symmetry in Julia is broken.

This, oddly, seems to work:

abstract A

"Some doc"
function (::A) end
help?> A
search: A ANY Any any all abs ARGS ans atan asin asec any! all! airy acsc acot acos abs2 Array atanh atand atan2 asinh asind asech asecd ascii angle

  Some doc

So, an empty generic function for an abstract type can be created but it's not possible to add methods to it.

Attaching the docstring to A is expected there. I felt that allowing call, or (::?), to have additional documentation added to it wasn't really that useful.

What I mean here is that call syntax is for calling an object, and its documentation should reflect that. Adding dozens of docstrings describing how it works for each callable object isn't going to scale too well, I believe. In my view, specifics about how a type can be used should always be added to the type's docstring, i.e. supports calling, iteration, etc. Docstrings for "generic" functions should stay generic.

I believe that structuring docs in that way will be more discoverable than having them spread out over a multitude of different functions.

(Apologies for going slightly off-topic here.)

+1,2,many for restoring this underlying capability to abstract types:

defining call for all objects that belong to an abstract supertype with just one definition - mbauman

This is essential to designing software in the abstract that will run correctly when coded as conceived.

If that is not sufficiently compelling, this technology would give Julia an immediate way to specify APIs.

Should this get a milestone?

I also just stumbled over this issue. Should I assume that this won't be restored for v0.5?

@dlfivefifty as a temporary work-around, instead of a macro to create the subtype I created a macro to annotate a new type declaration; for me this is sufficient for now, maybe it helps?

macro pot(fsig::Expr)
   @assert fsig.head == :type
   eval(fsig)    # creates the new type
   eval(
   quote
      (x::$(fsig.args[2]))(args...) = dosomething(x, args...)
   end
   )
end

Corrected versions of the above macro:

function pot(fsig::Expr)
   @assert fsig.head == :type
   eval(fsig)    # creates the new type
   @eval begin
       (x::$(fsig.args[2]))(args...) = dosomething(x, args...)
   end
end
macro pot(fsig)
    return :(pot($(esc(fsig))))
end

or

macro pot(fsig)
   @assert fsig.head == :type
   return quote
       $(esc(fsig))   # creates the new type
       (x::$(esc(fsig.args[2])))(args...) = dosomething(x, args...)
   end
end

@vtjnash thank you - I only tested mine in a notebook mini-test. What would have gone wrong?

If this feature gets implemented in a non-breaking way I guess it could be a backport candidate (as long as packages that rely on it are careful to mark the minimum patch version of Julia in their REQUIRE file)? Jeff, how likely would that be?

Let's mark it as 0.5.x for now; if it's unlikely, we can bump it to 0.6

I wouldn't expect this to be non-breaking if implemented

You never know :D

Just noting that this change also causes me problems. Looks like I can work around it, but this should "just work".

Since this would be breaking to fix (because it adds a feature), moving this to the v0.6 target.

if it's considered a regression from 0.4, then it would be fixing that. people would need to be careful about minimum version dependencies, and the fix would have to be non disruptive.

I think the current state is breaking, while resolving this issue would keep compatibility.

It is really too bad that this has moved to 0.6; it has really started to come back and bite me a few times since I first ran into it.

I share that sentiment. Could we prioritize this within the stuff that is to be in v0.6 without detriment?

Do we yet know the syntax? I would rather move forward designing code with this facility and live with the fact that it cannot be run and tested until tomorrow than require that new stuff run on v0.5.

This is unlikely to be fixed in 0.6.

@StefanKarpinski What are the two largest things in the way?

But hopefully will be for 1.0...

I agree with @dpsanders. This restriction is weird and feels artificial, which is fair enough in a 0.6 product, but could be damaging to Julia's reputation in a 1.0 product.

I also just came across this and think it would be great if at least one of these cases could work (errors below):

abstract Foo <: Function
type Bar <: Foo
  var
end
# these 3 cases fail for different reasons
(mt::Foo)(x) = mt.var+x
(mt::T){T <: Foo}(x) = mt.var+x
# naively tried an unused syntax, which I would suggest as a syntax solution?
{T <: Foo}(mt::T)(x::Number) = mt.var+x

# but the more basic cases work, and are equal in this case
(mt::Bar){T <: Number}(x::T) = mt.var+x
(mt::Bar)(x::Number) = mt.var+x

# executing
bar = Bar(1.0)
@show bar(2.0)

This feels very close to polymorphism in object-oriented programming, and should be possible. The problem is that the template syntax is not unique, since T could be <: Function or <: Number in the two cases above. However, there is a bit of necessary redundancy in both basic cases.

cannot add methods to an abstract type
function type in method definition is not a type
syntax: invalid function name {T<:Foo}(mt:T)

# execution call is unique
MethodError: no method matching (::Bar)(::Float64)
# works from trivial case
bar(2.0) = 3.0

Related to @jiahao's comment.
cc @StefanKarpinski

(::Function)(::Any...) = "I'm paving the road to hell."

Not sure I understand what you mean... I think the question is whether type inference should be able to find abstract / templated functors. And as I understand it, this used to be supported in 0.4. Conceptually, from the user's perspective, it's not that different from the existing syntax for lambdas. For example, this works:

(x...) -> @show x

I think the biggest design complication will be deciding how to handle shared-memory threads, but that is somewhat a separate issue. Dispatch and separate processes should all be accessing different memory locations, so it doesn't seem a concern.

If the above worked, then

julia> sin(1,2)
"I'm paving the road to hell."

I'm not saying this is necessarily bad, but it's certainly rather dangerous.

(::Foo)(::Any...) is clearly different and won't result in arbitrary calls. Plus this used to be supported...

I've come across this while attempting to write a (small) computer algebra system to perform certain quantum mechanics calculations. Is this ever likely to be fixed, or should I perhaps consider rewriting in Python or similar?

How is that the choice? I want this feature or I'm going to use a different language? This may happen at some point, but it's a fairly niche feature and not a high priority at the moment.

The design I am looking to implement relies on inheriting call behavior from a supertype. In Python, to use my earlier example, one can override __call__ in an abstract class, and subclasses will inherit that behavior. It seems that, while this bug is outstanding, one can't do the same thing in Julia in an "easy" way (although it can probably be worked around with macros).

I understand of course that there is a need to prioritize features, and moreover that the developers are under no obligation to implement anything in particular just because a user wants it. I apologize if my earlier message came off as a demand.

I have to admit I also think of this as a bug rather than feature and I am surprised there aren't more Julia users complaining. But FWIW it is actually very easy to work around it with a macro.

@cortner It seems to be easy to work around with a macro in the case where only a single such type hierarchy exists in the program -- indeed there's a macro doing exactly that earlier in this thread. If, however, I want several different abstract types, each of which has an associated "call behavior", I can't see a way to implement the desired behavior without some kind of lookup table. Any insights you may have into this problem would be much appreciated, because I am stumped!

EDIT: tested, it seems ok. One needs to work a bit more if there are type parameters. I can post this here if useful.

t_info(ex::Symbol) = (ex, tuple())
t_info(ex::Expr) = ex.head == :(<:) ? t_info(ex.args[1]) : (ex, ex.args[2:end])

macro ev(fsig)
   @assert fsig.head == :type
   tname, tparams = t_info(fsig.args[2])
   sym = esc(:x)
   return quote
       $(esc(fsig))      # creates the new type
       ($sym::$tname)(args...) = mycall($sym, args...)
   end
end

abstract A 
abstract B

@ev type AA <: A 
end 

@ev type BB <: B
end 

mycall(a::A) = "I am an A"
mycall(b::B) = "I am a B"

and then

julia> a = AA()
AA()

julia> a()
"I am an A"

julia> b = BB()
BB()

julia> b()
"I am a B"

P.S.: I use this construction a lot. Painful because I really don't like macros (took me ages + asking for help a lot), but it works for now.

Thank you very much for the assistance, @cortner. I didn't think of using a generic function inside the macro -- I think that's quite clever.

If you could also post the version with type parameters it would be much appreciated.

Thanks!

Personally I think it's a mistake to wait until 1.x to fix this bug: while it's not necessarily a crucial bug to fix, it's a confusing restriction that, when hit, makes Julia feel "weird".

+1
Need this feature to work for beauty and ease of coding (in my case).

I haven't been able to find much discussion on this issue either here or on the forums since 1.0 came out. Are there any "new" workarounds for this with 1.0? (while awaiting a fix in 1.x it seems)

Bump. Just ran into this.

Just got this error in Julia 1.1. Calling abstract types seems like a nice feature to have. Since there have been no comments here for a few months, what is the status of this issue? Will there be a fix eventually or will this remain an error in future versions?

This is technically difficult and not a high priority, so I wouldn't hold your breath. Might happen eventually when there are no more important things on Jeff's plate or someone else takes it on.

Could you share any plan (or hints) for how to implement this feature so that people who wish to have this feature may take some attempts?

(I was surprised to find this error for the first time and thought it should be relatively easy to support this feature. After struggling for a while, I had to admit that there was no easy approach.)

@JeffBezanson may be able to provide some guidance.

This is certainly possible to implement. However, it is necessarily deeply tied to internals of how dispatch works, which is quite complex for performance reasons. So it may not be practical to attempt unless you really want to roll up your sleeves.

Currently (in what can perhaps be seen as a "premature optimization") the dispatch table is split based on the concrete type of the called object. Each TypeName has a mt::MethodTable field with the methods for just that type family (e.g. (f::F{T})() for any T). So a method lookup always starts with typeof(f).name.mt. Instead, we would need a single global MethodTable, and simply add all methods to it and do all lookups in it. I think that can be implemented fairly easily.

The main complication is that that is likely to cause some significant performance regressions in dynamic dispatch (see e.g. #21760 and linked issues), and also in method insertion (which is often the bottleneck in loading packages) and invalidation (the backedges array in MethodTable). Getting back this performance might be difficult, especially since it's not fast enough as it is.
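A quick REPL illustration of the current layout (these are internal fields, so details vary between versions; shown only for orientation):

julia> f(x) = x + 1;

julia> typeof(f).name.mt |> length    # f's own per-TypeName method table holds its one method
1

julia> length(methods(f))
1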

For some purposes, this may suffice:

abstract type Abstract end
isabstract(x) = false
isabstract(x::T) where {T<:Abstract} = true

abstract type Abstraction <: Abstract end
isabstraction(x) = false
isabstraction(x::T) where {T<:Abstraction} = true

struct Concrete <: Abstract
    value::String
end
concrete = Concrete("this is abstract")

struct Concretion <: Abstraction
    value::String
end
concretion = Concretion("this is abstraction")

(isabstract(concrete), isabstract(concretion)) == (true, true)
(isabstraction(concrete), isabstraction(concretion)) == (false, true)

@JeffBezanson is a global method table not akin to the old "ball of call"?

Whoa! So excited to see this fixed! Thank you! :)
