@jakebolewski, @jiahao and I have been thinking through some type system improvements and things are starting to materialize. I believe this will consist of:
- Using `DataType` to implement tuple types, which will allow for more efficient tuples, and make tuple types less of a special case in various places. This might also open the door for user-defined covariant types in the future. (DONE)
- Wrapping `UnionAll` types around all uses of TypeVars. Each `UnionAll` will bind one TypeVar, and we will nest them to allow for triangular dispatch. So far we've experimented with just sticking TypeVars wherever, and it has not really worked. We don't have a great syntax for this yet, but for now will probably use `@UnionAll T<:Int Array{T}`. These are mostly equivalent to existential types.
- `Complex` will be bound to `@UnionAll T<:Real _Complex{T}`, where `_Complex` is an internal type constructor.
- Marking TypeVars that range over only concrete types. For example, `(Int,String)` could be a subtype of `@UnionAll T (T,T)` if `T` were `Union(Int,String)`, or `Any`. However dispatch doesn't work this way, because it's not very useful. We effectively have a rule that if a method parameter `T` only appears in covariant position, it ranges over only concrete types. We should apply this rule explicitly by marking TypeVars as appropriate. So far this is the best way I can think of to model this behavior, but suggestions are sought.
- As for syntax, `@UnionAll` should suffice.

This is being prototyped in `examples/juliatypes.jl`. I hope this will fix most of the following issues:
- method lookup and sorting issues:
- method ambiguity issues:
- misc. type system:
In particular, type intersection needs to be trustworthy enough that we can remove the ambiguity warning and generate a runtime error instead.
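For a sense of what this looks like in practice, here is a sketch of the nested-`UnionAll` idea in the `where` syntax it eventually became (see #18457 at the end of this thread); the alias name is illustrative:

```julia
# One UnionAll binds one type variable; `where` is the syntax it ended up with:
Array{T} where T<:Int                    # the issue's @UnionAll T<:Int Array{T}

# Nesting UnionAlls gives "triangular" constraints, where one variable's
# bound refers to another variable:
const Triangular = Tuple{S,T} where S<:T where T<:Number

Vector{Int} <: (Array{T} where T<:Int)   # true: T = Int
Tuple{Int,Real} <: Triangular            # true: T = Real, S = Int
```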
+100
and another +100 for the 0.4 milestone.
0.4 will be so great
This will be one heck of an improvement. I'm not sure I fully appreciate your intentions with `ForAll`, but I'm already eagerly anticipating it.
Currently the scoping of type variables is both vague and limited. By explicitly wrapping them in a universal quantifier – i.e. `ForAll` – you can address both: the scope becomes explicit, and one can therefore do much more sophisticated things.
This sounds great, but can you elaborate on the part

> It turns out that we effectively have a rule that if a method parameter T only appears in covariant position, it is made invariant instead.

When does it apply? Which positions are considered covariant and which invariant? (Are there any contravariant positions?) Is that different in any way from making type parameters always invariant?
@toivoh the idea was to describe how diagonal dispatch currently works.
Consider:
```julia
f{T}(x::T, y::T) = x + y          # matches a concrete type like T = Float64, but not an abstract type like T = Real
f{T}(A::Matrix{T}, b::T) = A .+ b # matches both T = Float64 and T = Real in 0.4-dev, but only T = Float64 in 0.3
```
The first method has input signature `∀T. (T, T)`, but unlike ordinary tuples, only allows for `(Float64, Float64)`, not `(Real, Real)`, even though the former is a subtype of the latter. Therefore tuples in this context do not behave covariantly, but invariantly.

```julia
f(3.0, 4.0) # arguments of type (Float64, Float64)
f(3, 4.0)   # arguments of type (Int, Float64) <: (Real, Real), but does not match
```
The second has signature `∀T. (Matrix{T}, T)` and is matched invariantly in 0.3, but covariantly in 0.4.

```julia
f(zeros(Real, 3, 3), 4.0) # arguments of type (Matrix{Real}, Float64) <: (Matrix{Real}, Real)
```
The general rules appear to be:

- In 0.3, match the type parameter invariantly.
- In 0.4, match covariantly, unless the parameter appears more than once directly in the tuple (as in `(T, T, T, T...)`), in which case, match the tuple invariantly.

The v0.4 rule is admittedly strange when written out like that.
I don't think it's the whole tuple that's matched invariantly, just the typevar. For example `f{T}(x::Real, y::T, z::T)` still allows any Real as the first argument.
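A quick check of this in the `where` spelling that later replaced `f{T}(...)` (the function name `g` is made up):

```julia
g(x::Real, y::T, z::T) where T = T   # 0.3-era spelling: g{T}(x::Real, y::T, z::T) = T

g(1, 2.0, 3.0)   # Float64: x is independent of the diagonal pair (y, z)
g(1.0, 2, 3)     # Int64
g(1, 2, 3.0)     # MethodError: no single concrete T covers both Int and Float64
```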
The new covariant behavior of `::T` in e.g. `f{T}(A::Matrix{T}, b::T)` was news to me, very much appreciated! (I do follow most issues though, has this been discussed anywhere?) As I understand it, before this, type parameters were always considered invariant.

I agree that it's useful to be able to match e.g. `f{T}(x::T, y::T)` invariantly (or equivalently, force `T` to be concrete there), but I'm not sure that it's not useful to be able to match it covariantly.

I understand the rule from the point of view that when trying to match a signature against an actual argument tuple, each type parameter can always be bound to a type derived from one of the argument types, and then we check if the other constraints are fulfilled. I also see that this rule would minimize breakage while letting `::T` be covariant in cases like `f{T}(A::Matrix{T}, b::T)`.
Still, I'm not quite comfortable with the special casing. It makes it easier to unwittingly change a case from covariant to invariant or vice versa, and if you haven't read the manual very carefully, you might not even be aware of the distinction. Could we consider having `::` be always covariant, even on type parameters, and have another way to get the invariant behavior, like e.g. one of

```julia
f{T}(x:::T, y:::T) = x + y
f{T}(x::=T, y::=T) = x + y
```

This would make it immediately clear to the reader that something different is going on. The `:::` form would be in some kind of analogy with `===`, which is stricter than `==`.
Or maybe the type parameter `T` itself could be marked as invariant in some way.
I have to admit that I'm not entirely comfortable with the special casing either, although I realize we've been doing something similar for a rather long time. Not sure how else to handle it, although I would mention that if you can dispatch on type parameters being concrete, then covariance would be a good choice.
@JeffBezanson, @jakebolewski and I have had a follow-up discussion, and we feel that `ForAll` is not the right use of ∀, in the sense of bounded universal quantifiers in type theory. Perhaps `UnionAll` is a better name.
Thinking about this some more, I realize that it is mostly an issue of consistency rather than functionality. Looking at the prototypical case where a type parameter appears only in covariant position,

```julia
f{T}(x::T, y::T) = T
```

the covariant behavior can be achieved anyway with one of

```julia
f{S,T}(x::S, y::T) = Union(S,T)
f{S,T}(x::S, y::T) = typejoin(S,T)
```

depending on how you want the matching of `T` to behave. That alone might be reason enough to forbid a type parameter to act covariantly everywhere it is used. It would require specifying a special-case behavior, and it feels better to require the user to choose explicitly instead. So if we should do this for consistency, I would probably vote to make `f{T}(x::T, y::T)` an error and require that the user specify that `T` behave invariantly, or use one of the patterns based on `f{S,T}` above.
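To make the difference between the two patterns concrete, a sketch in the later `where` spelling (function names `h1`/`h2` are made up):

```julia
h1(x::S, y::T) where {S,T} = Union{S,T}    # 0.3-era: f{S,T}(x::S, y::T) = Union(S,T)
h2(x::S, y::T) where {S,T} = typejoin(S,T)

h1(1, 2.0)  # Union{Float64, Int64}: the exact set of both argument types
h2(1, 2.0)  # Real: the smallest common supertype in the declared hierarchy
```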
We would of course still permit type parameters appearing only once covariantly, such as in `f{T}(x::T) = T`, where both the covariant and invariant behavior would be to set `T` to the type of `x`.
When it comes to functionality, the only thing it seems that we stand to lose with the rule is the possibility to let e.g. `T` work in an invariant manner in `f{T}(::Array{T}, ::T)`. Not sure how big a loss that is.
One possibility for marking a type parameter as concrete might be to allow something like

```julia
f{T::Concrete}(x::T, y::T) = T
```

I have no idea how much havoc it would wreak in the type system to have all concrete types, and no others, be instances of `Concrete`.
It appears that allowing `::` relations on type parameters would widen the domain of dispatch even further, allowing to dispatch on the types of type parameters and not just on the types of arguments (meta-dispatch?). I can't say if this is a really bad idea, or maybe even useful in other cases than `T::Concrete`. Of course, marking up a type parameter as concrete could be done with some new special syntax without opening up this Pandora's box.
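There is no `T::Concrete` dispatch even in the system that eventually shipped, but the intent can be approximated with a runtime check; a sketch (the function name and error message are invented, and `isconcretetype` is the later name for what this era called `isleaftype`):

```julia
# Emulate the proposed f{T::Concrete}(::Vector{T}) with a runtime check.
fconcrete(v::Vector{T}) where T =
    isconcretetype(T) ? T : error("type parameter T must be concrete, got $T")

fconcrete([1, 2, 3])      # Int64
fconcrete(Real[1, 2.0])   # ERROR: type parameter T must be concrete, got Real
```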
One thing to note is that `f{T<:Concrete}(x::T, y::T)` will not serve to mark `T` as concrete. I guess it has to be an axiom that if `A <: T` and `B <: T` then `Union(A, B) <: T`, so that would allow `T = Union(typeof(x), typeof(y))` above. A union is not a concrete type (right?) in the sense that `typeof(x)` can never return a union type, so apparently concreteness cannot be described by a subtyping relationship. But perhaps by an instance-of relationship.
It's not just a question of whether `T` is covariant or invariant, it's also a question of how `T` is determined at the call site, i.e. how the pattern matching works. In the case of `f{T}(::Matrix{T}, ::T)`, `T` may be forced to be an abstract type by the first argument. In the case of `f{T}(::T, ::T)`, there is nothing that would immediately force `T` to be abstract.
Instead of arguing via covariance or invariance, I would describe Julia's current type matching rule as "match the (implicitly concrete) argument types against the types in the function signature, and deduce type parameters from that". Nothing in type matching is currently looking for supertypes (or unions) of two types to make things match.
Having said this, which is obviously not news to you: @JeffBezanson, are you looking for a way to introduce such a behaviour (automatically looking for supertypes or unions)? If so, what about allowing type arithmetic in function signatures? Or maybe it would be better to extend type pattern descriptors?
What would be the advantage of deducing supertypes or unions? Would it be to prevent specialization of routines? If so, couldn't that be expressed differently, with metadata instead of through the type system, or by analyzing the function body to see whether specialization is beneficial?
The question is whether a tuple of actual argument types is a subtype of the method signature, so this does reduce to a question of covariance vs. invariance (the argument tuple is covariant).
There wouldn't be any advantage to deducing supertypes or unions; we don't want that. However theory requires them, because `(Int,String)` is indeed a subtype of `forall T . (T,T)` if you let `T` be `Union(Int,String)`. So we need an explanation for why we don't do that. If we did this outside the type system, you wouldn't be able to write down a type for the method that lets you reason about how it can be called, which I think would be unfortunate.
Thanks for the rationale.
I agree with @toivoh: it would then be necessary to express the notion "`T` is a concrete type" in the type system, since type matching starts from actual arguments, which are concrete types.
Hmm, yes, this may not be invariance exactly, since the intersection of `forall T . (T,T)` and `(Real,Number)` should be `forall T<:Real . (T,T)`, while the intersection of `forall T . Foo{T,T}` and `Foo{Real,Number}` is empty.

With covariance, the intersection of `forall T . (T,T)` and `(Real,Number)` would be `(Number,Number)`.
I'm not sure that I understand the example. For any types `R` and `S`, surely their intersection `T` must satisfy `T <: R && T <: S`? So I don't see how the intersection of `(Real,Number)` with anything could be `(Number,Number)`? With covariance we should have `(forall T . (T,T)) == (Any,Any)` (right?), so I would expect the intersection with `(Real,Number)` to be `(Real,Number)`.
I'm not sure how type intersections relate to the problem at hand. As I understand it, what we are discussing is

- whether `Ta <: S`, where `S` is the type signature and `Ta` is the type of the argument tuple. Since `Ta` is concrete, this should be equivalent to `S` and `Ta` having a nonempty intersection.
- finding a binding for `T` that gives a match, which is an optimization problem with type inequality constraints (though straightforward to solve, it seems). I'm not sure that the answer can be formulated based on a single intersection operation.

I guess that the tuple `(Real,Number)` in the example was meant to stand for the type of the actual arguments? That's confusing because it's not a concrete type.
Yes, I should have said `(Real,Number)`.
Type intersection relates because it has to be consistent with how method calls behave. Things will break if method calls add in extra behaviors that the rest of the type system doesn't know about.
The question is how to conclude `!( (Int8,Int16) <: (forall T . (T,T)) )`. Currently, we must be implicitly using something like a constraint that `T` be a concrete type, but I'm not sure how to formalize this.
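For reference, this is the behavior the shipped system settled on; a quick check in the later `where` spelling:

```julia
Tuple{Int8, Int16} <: (Tuple{T,T} where T)           # false: T is "diagonal", so it must be concrete
Tuple{Int16, Int16} <: (Tuple{T,T} where T)          # true, with T == Int16
Tuple{Int8, Int16} <: (Tuple{T,T} where T<:Integer)  # still false: the bound doesn't change diagonality
```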
One way to think about this, which is what kind of makes them intuitive, is that you can work backwards very concretely from actual argument types in a straightforward way – if `T` is a type parameter and you have `x::T` then you can immediately take `T = typeof(x)`. Similarly if you have `a::Array{T}` then you can take `T = typeof(a).parameters[1]`. This is actually rather different than quantification, which is part of the issue here.
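A tiny illustration of that backwards reading (a sketch):

```julia
x = 3.0
a = Int[1, 2, 3]

typeof(x)                # Float64: the binding for T in x::T
typeof(a).parameters[1]  # Int64: the binding for T in a::Array{T}
```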
That's true, this is not really quantification. For one thing, subtyping with bounded quantification is undecidable, and I believe we are using a decidable variant. The forall is really a big set union; see jiahao's comment about calling it unionall instead. However by itself this doesn't yet solve the method parameter issue.
About the case with a type parameter `T` that is only in covariant position: from the matching point of view (choosing which method to dispatch to), it should be sufficient to set all such parameters to their upper bounds. So the only question left about handling such a case would be how to pick `T` once the method has been decided. One way would be to say that `T` is set to the most general type that works, e.g. its upper bound. Then we don't need to form a common supertype or union. Or it could be made an error.
I think that we should make the constraint that `T` be concrete explicit, i.e. let the user write it out. I don't think there is any reasonable interpretation that will make taking `T = typeof(x)` from `x::T` fall out as a consistent special case. (The question is of course how much the inconsistency hurts.)
Btw, regarding the possibility of `f{T::Concrete}(::T, ::T)`: this is not the first time people have wanted to be able to specify the type of a type parameter. Consider `MyType{N::Int}` for instance; this would also be useful for type parameters that are not types.
I don't think that the user cares whether a type `T` is concrete or not. The only case where this would (currently) matter is e.g. in the type of containers, such as `f{T::Concrete}(::Vector{T})`. I haven't seen people ask for this.
Maybe one should treat matching and compiling as two separate steps. During matching it does not matter whether a type is concrete; as @toivoh says, the most generic matching type signature could be used.
To compile, the types are then specialized. This is a heuristic, and exactly how a routine is specialized does not influence correctness. In principle, the compiler could even specialize on the (inferred) types of local variables, i.e. introduce branches to keep values unboxed.
> During matching it does not matter whether a type is concrete
Doesn't it, though? We want `f{T}(x::T, y::T)` to be applicable to `(Int,Int)` but not to `(Int,Int8)`, so we don't want `T == Union(Int,Int8)`.
I stand corrected.
Ref: discussion about negations in 23b6a1fff83f4fb93f6b36cf5fa109ad0a723711
Some thoughts on direct field access (#1974 #2614 #7561):
From a practical perspective, languages differ on how they restrict field access:

- `private` and `protected` in C++ classes.
- Python's conventions: a leading underscore (`_foo`) marks a name as internal, and name mangling rewrites `__bar` to `_classname__bar` (ref: Python 2 tutorial, Sec. 9.6).

In Julia we have no true privacy, but afaict we don't have explicit naming conventions or mangling mechanisms either.
From a theoretical perspective, the fact that all fields are semantically public has consequences for the type system:
Julia uses nominative typing, where subtyping relations are explicitly defined. But there are cases where using nominal typing is problematic, causing users to use duck typing instead. Two examples I have in mind:

- `QRCompactWY` and the discussion in #987 about whether it ought to be an `AbstractArray`.

Is it a coincidence that many of the problematic cases encountered involve `AbstractArray` in some way?
I thought a bit more about structural vs. nominative typing:

```julia
type Point2D; x::Int; y::Int; end
type Point3D; x::Int; y::Int; z::Int; end
```

which by rules of structural typing would define the relation `Point2D <: Point3D`. One would also get `Point2D <: Point4D` and so on if one defines the related types

```julia
type Point4D; x::Int; y::Int; z::Int; w::Int; end
```

In this example, structural subtyping seems odd; furthermore, it looks like structural subtyping automatically requires the possibility of multiple inheritance.
On the other hand, structural typing would give `ColorValue <: AlphaColorValue` by defining the `alpha` blending field, avoiding the need to define an `AbstractAlphaColorValue` that is the supertype of both. With structural typing one automatically gets `RGB <: RGBA`, and so on for all color value types.

The difference between these use cases seems to be that for the latter, the only sensible embedding of `ColorValue` into `AlphaColorValue` is with `alpha = 1.0`, whereas `Point2D <: Point3D` only really makes sense if the 2D plane containing `Point2D`s lies on some x-y slice of 3D space, i.e. with some fixed value of z, say `z=0`. This is artificially restrictive, since there is no unique "default" value of z that `Point2D`s should inherit, and furthermore setting `z = constant` is far from the only way to define a 2D subspace of a 3D space.

One could imagine structural subtyping sans multiple inheritance, but then it would limit the utility of subtyping in the Color example if one wanted to also support color composition formulas other than alpha blending, since `ColorValue <: AlphaColorValue` would prevent the further definition of `ColorValue <: ColorValueWithOtherBlendingFormula`.
I think you have that backwards; `Point4D <: Point3D <: Point2D`. A `Point2D` is not substitutable for a `Point3D`; it doesn't have enough fields.
The relationship between variance and mutation holds regardless of _who_ is doing the mutating. The implementation of a class can't break type safety any more than a user of the class can. The main interaction I see is that if a field is private and the implementation doesn't mutate it, then you know nobody does, and covariance is safe.
Structural subtyping has its uses, but it's a pain to be stuck with only structural subtyping. Many different abstractions can have the same data contents. Also, if two types are not meant to expose their internal fields, I definitely don't want changing the implementation of one to suddenly make it considered equal to the other. So I think that abstractions really require some nominative typing to back them up.
I think it's great that we have structurally subtyped tuples though, and there might be a use case for more structural subtyping. I couldn't access the paper you referenced so I don't know what kind of unification they are talking about, if it's something beyond what we have already. Anyway I think that we need a mix.
Maybe one reason that type theorists like structural subtyping is because it is harder to reason about, and thus more interesting :)
You're right, I got that backwards. But the point about `ColorValue <: AlphaColorValue` still remains. Perhaps it is possible to get away with the opposite variance anyway. Theoretically, the negation trick shows that it is possible to reverse variances with quantification. And one could include an implied value for the missing record as part of the subtyping definition. So something like

```julia
type ColorValue <: AlphaColorValue with alpha = 1.0
```

would allow `ColorValue` to be substitutable for `AlphaColorValue`.
@jiahao Another simple solution is to rely on methods and define `alpha(::ColorValue) = 1.0`. This fits nicely in a system where traits define a set of methods that a type needs to implement.

Any idea if Holy Traits are the same as traits in this paper?
Well, if a _complete_ overhaul of the type system is called for, then maybe ditching subtype polymorphism in favor of something more Haskell-esque:
This is basically what Rust does.
This was discussed a bit over in https://github.com/JuliaLang/julia/issues/6975#issuecomment-61334259
(Edit: I also think this is probably not feasible.)
I'm not calling for a complete overhaul of the type system. I'm planning to do the specific changes I described at the top.
One of the core concepts of julia is to use the same universe of types everywhere, including dataflow type inference, which inherently involves approximate types. So subtype polymorphism is here to stay AFAICT. However this is not the same thing as _inheritance_. You can still have an `Array{Union(Int,String)}` without any declared subtype relationships. Traits put `type A <: B` on the chopping block, not subtype polymorphism.
I think the key feature of Holy Traits is that they're implemented using multiple dispatch instead of being a core feature. This also means there's an extra layer of indirection, since you can dispatch on the "value" of a trait instead of just its presence.
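For context, a minimal sketch of the "Holy Traits" pattern being referred to, written in the later `abstract type`/`where` spelling (all names are illustrative, not from Base):

```julia
# The trait is an ordinary type; a function maps each user type to a trait *value*.
abstract type MyIndexStyle end
struct Fast <: MyIndexStyle end
struct Slow <: MyIndexStyle end

indexstyle(::Type{<:Array}) = Fast()
indexstyle(::Type{<:AbstractArray}) = Slow()

# The extra layer of indirection: dispatch on the trait value, not the array type.
mysum(A::AbstractArray) = mysum(indexstyle(typeof(A)), A)

function mysum(::Fast, A)   # linear indexing path
    s = zero(eltype(A))
    for i in 1:length(A)
        s += A[i]
    end
    return s
end

function mysum(::Slow, A)   # general cartesian indexing path
    s = zero(eltype(A))
    for I in eachindex(A)
        s += A[I]
    end
    return s
end

mysum([1 2; 3 4])   # 10, via the Fast path
```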
I also think subtyping has a bright future in general. For example see the dazzling recent work http://www.pps.univ-paris-diderot.fr/~gc/papers/polydeuces-part1.pdf
I just thought I'd bring it up after structural types were brought up. It is beyond me to really comment on relative merits. And I certainly do not think that swapping out an almost-perfect type system in a relatively mature language is a good idea.
And yep, Holy Traits is an implementation of traits. Nowadays, with staged functions other implementations might be viable too.
(Edit: reading over this again I looked up the difference between subtyping and inheritance. See this for example)
I just found this thread through a couple of the linked issues, and got a little worried that this might remove one of the IMHO most powerful features of Julia; I really hope I'm misunderstanding the discussion here.
My concern is how `foo{T}(x::T, y::T)` is intended to behave after the changes proposed here. Is the intention that it _should_, or that it _should not_, match `foo(3, 3.0)`? What happens if I have two methods:

```julia
foo{T}(x::T, y::T) = println("diagonal")
foo(x, y) = println("different")
```

Is the intention that `foo(2, 4.0)` should print `"diagonal"` here, since there is _somewhere_ (below `Any`) in the type hierarchy a type `T` such that both `2::T` and `4.0::T`?
No, there is no intent to remove that at all.
Phew =) Thanks for a quick reply!
To expand on that a bit, the goal is:

- `f{T}(x::T, y::T)` _does not_ match `f(2, 4.0)`
- `f{T}(x::A{T}, y::T)` _does_ match `f(A{Number}(), 1)`

This behavior is basically what we have now, and both cases are intuitive and very useful. The difficulty is in modeling this behavior. It seems clear that in the first case, `T` ranges over only concrete types, and in the second case it ranges over all types.
The first problem is that "all concrete types <: S" denotes the exact same set of values as "S". So the set-theoretic model that works perfectly for the rest of the type system is not as helpful here. Deciding "is S equal to some concrete type?" is not so easy.
The second problem is coming up with notation for this; `UnionAllConcrete` is a bit ugly.

The third problem is that we want to use `UnionAllConcrete` automatically for the first method above, and `UnionAll` for the second one. That's not so hard to do, but when described that way it seems rather ad-hoc. It would be nice to have a more elegant explanation.
Is there another reason for wanting `f{T}(x::A{T}, y::T)` to match `f(A{Number}(), 1)` than notation? In other words, isn't `f{S<:T}(x::A{S}, y::T)` (specifically, `f{T<:Number}(x::Array{T}, y::Number)`) the same thing as the new behavior?

This does make it a little more cumbersome to write your method of `f` matching `(Array{Number}, Int)`, but it's not impossible - and it doesn't introduce the admittedly ad-hoc distinction between when to use the different unions.
Yes, if we had to write it as `f{S,T<:S}(x::A{S}, y::T)` then we could restrict all type variables in covariant position to concrete types, which would be simpler. That seems like a good trade-off.
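A sketch of that explicit two-variable spelling in the `where` syntax that later shipped (using `Matrix` as the container `A`; names illustrative):

```julia
f(x::Matrix{S}, y::T) where {S, T<:S} = (S, T)

f(Matrix{Number}(undef, 0, 0), 1)   # (Number, Int64): S matched invariantly, T stays concrete
```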
+1 for this suggestion.
Any progress on this? I was thinking of taking a stab at #4774 by implementing a general wrapper

```julia
immutable Transpose{A}
    a::A
end
```

but soon realized that I would need to be able to specify

```julia
typealias AbstractTransposeMatrix{T} @UnionAll A<:AbstractMatrix{T} Transpose{A}
```

Will these kinds of constructions be supported?
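For the record, in the `where` syntax that eventually shipped, this kind of alias became expressible; a sketch in the modern `struct`/`const` spelling:

```julia
struct Transpose{A}
    a::A
end

# A parametric alias over all wrapped matrix types with element type T:
const AbstractTransposeMatrix{T} = Transpose{<:AbstractMatrix{T}}

Transpose([1 2; 3 4]) isa AbstractTransposeMatrix{Int}   # true
```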
Cross-linking a conversation on julia-dev which referenced this issue as a potential solution to the current performance limitations in constructing types with more than 7 fields:
https://groups.google.com/d/msg/julia-dev/o4BZqLcTorg/sqj7IhR7jFgJ
Thanks for the cross-link. It is cool that this is also covered by this redesign. I hadn't grasped that before.
It's still not obvious to me that it is, which is why I inserted the cross-link. Best to make sure it's on the table!
Will this overhaul also fix the performance issues around higher-order functions (there is no specific issue, I think) and anonymous functions (#1864)?
How will parametric types be ordered? Is there an order, always? Presumably the following will hold, as a function with the left signature is more specific:

```julia
@UnionAll T<:Integer (T,T,T) <: @UnionAll T<:Integer (T,T,Integer)
```
I really don't think there is time for this in 0.4. It will take a while, and is mostly an algorithmic improvement and new syntax, and not a large breaking change.
Unfortunately, I think something like this is more or less essential in order to implement a Transpose
immutable type, as per #4774.
Could you elaborate on why? I've been tempted to voice a bound-to-be-unpopular opinion that we're running out of time to complete #4774, with enough buffer to make sure it's actually working well, in time for 0.4.0.
The current leading contenders for #4774 are separate `Transpose` and `Conjugate` types, which would mean that we're going to end up with large hierarchies of nested types. As an example, in the LAPACK code we will want to dispatch on statements such as "a matrix with `Complex{T<:Union{Float32,Float64}}` entries, its transpose, or its conjugate transpose".
Having now read through chunks of Jeff's thesis and spent some time mucking around `jltypes.c`, I'll add a few thoughts (for the 0.5-dev cycle, obviously):

- Where possible, spell out the intended behavior as axioms, with `typeintersect(A,B) == typeintersect(B,A)` and `is_morespecific(A,B)` iff `is_morespecific(WrapperType{A}, WrapperType{B})` as two examples. I gather that it may not be possible or straightforward to uniquely specify the behavior of Julia's entire type system in terms of such axioms, but where available such axioms seem like outstanding fodder for building a really comprehensive test suite (a sketch appears after this list). (See 4207198683c0732c6402a3b68527425b456c927d for a move in that direction.) The closer we come to spelling out behavior this way, the clearer it may be to users and the easier it might be for not-Jeff developers who, say, spend some time modifying/breaking the type system :smile: and who sometimes wonder what the correct behavior should be (#11242).
- While `core.jl` has some tests for subtyping, specificity isn't, as far as I can tell, tested at all. That means we learn about method-sorting problems indirectly, by failures elsewhere, which makes them quite difficult to debug.
- Instead of three separate type-comparison algorithms (`jl_subtype_le`, `type_morespecific`, and `type_match_`), we could get away with 1 or 2. The current `jl_subtype_le` is not diagonal, and I'm wondering if that should change.

Finally, I have much more intimate experience now with the fact that our current TypeVars are hard to reason about, and I therefore applaud the goal of cleaning this up.

These wishlist items are not idle demands; time permitting (and especially when it comes to testing), I'm happy to help write code. I kind of think we should emphasize writing the tests early in the process, since the test suite will presumably help its development.
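Here is the kind of axiom-driven test hinted at in the first bullet above, a minimal sketch checking commutativity of `typeintersect` over a small pool of types (the pool itself is made up):

```julia
using Test

pool = Any[Int, Integer, Float64, Vector{Int}, Tuple{Int,Integer}, Union{Int,Float64}]

# Axiom: type intersection should be commutative.
for A in pool, B in pool
    @test typeintersect(A, B) == typeintersect(B, A)
end
```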
AFAIK, `typeintersect(A, B) == typeintersect(B, A)` is supposed to be true, and apparently when it is not the case it's worth mentioning @JeffBezanson by name.

Edit: I didn't notice that the commit you refer to is trying to address that issue.
Yes (to your edited comment), I thought it was worth testing that basic assumption directly.
That portion of my proposal is essentially stating that I think we should add many more such tests. The risk is that it might result in a `core` test suite that takes a long time to run (particularly if we compose the axioms), but there will be a lot of benefit, too. The type-related algorithms are pretty complicated; in a conceptually-related task (for developing some of the trickier parts of SubArrays, particularly the linear-indexing dimensionality inference part) I found it essential to test all possible combinations of index types. To me it seems that our type-related test suite is a little on the sparse side. (This doesn't really show up in Coveralls, though, because all that code is implemented in C.)
I was actually thinking about something very similar recently. It would be nice to have some rules to test the implementation against. And maybe also to run combinations of type operations on a set of types to test the validity of the result (although this kind of requires an independent (possibly slower) way to examine the results...).
There was quite a bit of discussion on this over in https://github.com/JuliaLang/julia/issues/11310#issuecomment-170421099
When this gets picked up again, one thing worth mentioning: lately, I have been wondering whether a good way to fix many problems with compilation time might be "abstract type inference." The idea would be that, when possible, you express the _result_ of type inference on a method as a set of `@pure` operations on types. That way, in many cases you'd only have to run type inference once, and then you look up the answer each time you want to create a specialization.
For example, in running inference on

```julia
getindex{T,N}(A::Array{T,N}, i) = arrayref(A, i)
```

you might like to express the return value as `::T` "generically", rather than as whatever specific `T` was used for `A`.
In a more complicated case like

```julia
function (+){S,T,N}(A::Array{S,N}, B::Array{T,N})
    R = promote_op(+, S, T)
    # check sizes...
    out = Array{R}(size(A))
    for I in ...
    end
    out
end
```

then you'd probably like `R` to be annotated `::Type{promote_op(+, S, T)}`.
I mention this here because this seems to require the ability to express a type in terms of functions.
EDIT: if this could be pulled off, if nothing else it seems likely to speed the test suite: some of the slowest tests, e.g. subarray and the linalg suite, are slow precisely because they run the same methods with many different input types.
Having read through this thread, I'm still not 100% sure of the desired behavior. I got:

- `f{T}(::T, ::T)` should not match `(Int,Float64)`.
- `f{T}(::Array{T}, ::T)` should match `(Array{Number}, Float64)` (with `T==Number`).

But should

- `f{T}(::Array{T}, ::T, ::T)` match `(Array{Number}, Int, Float64)` (with `T==Number`)?

If the second matches, I don't see how the third could not match (although I'm not sold on this behavior).
I definitely think the second shouldn't match. I like the version in https://github.com/JuliaLang/julia/issues/8974#issuecomment-68657184, and it seems Jeff does too.
Ah, yes, sorry, I had missed https://github.com/JuliaLang/julia/issues/8974#issuecomment-68654723, my bad. Makes a lot more sense that way. Carry on.
So will this improve Union type performance?
I came across this recently while using a lot of nested types: I'm not sure if it's something that you're planning on handling or not, but it might be a useful test case:
```julia
julia> immutable Foo{T}
           x::T
       end

julia> immutable Bar{T} end

julia> typealias FooBar{T} Foo{Bar{T}}
Foo{Bar{T}}

julia> FooBar{Float64} <: FooBar
false
```
It's been a minor annoyance, as I have to write `f{T}(x::FooBar{T}) = ...` instead of `f(x::FooBar) = ...`.
Yes, the new subtyping algorithm should fix this.
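And indeed it did; the same check in the shipped system (where `immutable` became `struct` and `typealias` became a `const` alias):

```julia
struct Foo{T}
    x::T
end
struct Bar{T} end

const FooBar{T} = Foo{Bar{T}}

FooBar{Float64} <: FooBar   # true
```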
closed by #18457