A surprising way to trigger a surprising error:
julia> ""'
julia: /home/jeff/src/julia/src/codegen.cpp:1378: jl_generic_fptr_t jl_generate_fptr(jl_method_instance_t*, const char*, size_t): Assertion `fptr.fptr != __null' failed.
signal (6): Aborted
in expression starting at no file:0
raise at /build/buildd/eglibc-2.19/signal/../nptl/sysdeps/unix/sysv/linux/raise.c:56
abort at /build/buildd/eglibc-2.19/stdlib/abort.c:89
__assert_fail_base at /build/buildd/eglibc-2.19/assert/assert.c:92
__assert_fail at /build/buildd/eglibc-2.19/assert/assert.c:101
jl_generate_fptr at /home/jeff/src/julia/src/codegen.cpp:1378
jl_compile_method_internal at /home/jeff/src/julia/src/julia_internal.h:367 [inlined]
jl_call_method_internal at /home/jeff/src/julia/src/julia_internal.h:395 [inlined]
jl_apply_generic at /home/jeff/src/julia/src/gf.c:2081
inline_call at ./inference.jl:5361
inline_expr at ./inference.jl:5273
jl_call_fptr_internal at /home/jeff/src/julia/src/julia_internal.h:380 [inlined]
jl_call_method_internal at /home/jeff/src/julia/src/julia_internal.h:399 [inlined]
jl_apply_generic at /home/jeff/src/julia/src/gf.c:2081
inline_expr at ./inference.jl:5277
inlining_pass! at ./inference.jl:5257
optimize at ./inference.jl:3759
Note, this is almost certainly due to someone thinking they would be clever and mark the very-not-pure function Core.Inference.return_type as @pure at https://github.com/JuliaLang/julia/blob/5fafb36a8da892f51ee32d886f8f3995719f97cc/base/linalg/adjtrans.jl#L26
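For illustration, here is a minimal hedged sketch of why return_type cannot satisfy the @pure contract: its result depends on the global method table, not only on its argument values (module path as in 0.7-era source; exact inferred types may vary):

```julia
# Sketch: @pure promises the result depends only on the argument values,
# but return_type also depends on the global method table, which can be
# extended at any time.
f(x) = x
t1 = Core.Inference.return_type(f, Tuple{Int})  # likely Int here

f(x::Int) = "surprise"                          # new, more specific method
t2 = Core.Inference.return_type(f, Tuple{Int})  # now likely String

# Same call, same arguments, different answer: a @pure annotation would
# license the compiler to cache and constant-fold the first result,
# producing wrong answers or, as above, assertion failures in codegen.
```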
As I mentioned in the relevant PR, that @pure annotation was preexisting. If that annotation should not exist and can be removed without causing regressions, please feel welcome to do so. Best!
Just added PR https://github.com/JuliaLang/julia/pull/25265, which solves the issue in this particular case. But it is a hack; I am not sure whether it is possible to find other types besides AbstractString which give the same behaviour.
IMO, there is a design error, which is manifested in the following text from https://github.com/JuliaLang/julia/blob/master/base/linalg/adjtrans.jl:
# note that Adjoint and Transpose must be able to wrap not only vectors and matrices
# but also factorizations, rotations, and other linear algebra objects, including
# user-defined such objects. so do not restrict the wrapped type.
struct Adjoint{T,S} <: AbstractMatrix{T}
parent::S
...
end
Maybe we should define a new abstract type Transposable or so, and make all the vectors and matrices, but also factorizations, rotations, and other linear algebra objects, inherit from that (replacing inheritance from Any). That would make it possible to prevent the Adjoint and Transpose constructors from being called on non-transposable types, in the following way:
Adjoint(A::Transposable) = Adjoint{transformtype(Adjoint,eltype(A)),typeof(A)}(A)
Transpose(A::Transposable) = Transpose{transformtype(Transpose,eltype(A)),typeof(A)}(A)
Other types like String would just produce an error.
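A minimal sketch of how such an opt-in hierarchy might behave (Transposable is the hypothetical name from the suggestion above, not existing Base API; the wrapper is renamed to avoid clashing with Base's Adjoint):

```julia
abstract type Transposable end

# Hypothetical opt-in wrapper whose type parameter is constrained,
# so only Transposable objects can ever be wrapped:
struct Adjoint2{T,S<:Transposable} <: AbstractMatrix{T}
    parent::S
end
Adjoint2(A::Transposable) = Adjoint2{eltype(A),typeof(A)}(A)

# A user-defined linear-algebra object opts in by subtyping:
struct MyRotation <: Transposable end
Base.eltype(::MyRotation) = Float64

Adjoint2(MyRotation())   # works
# Adjoint2("")           # MethodError: String is not Transposable
```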
If that annotation should not exist and can be removed without causing regressions, please feel welcome to do so.
That annotation must not exist (unless you enjoy very quickly getting the wrong answer or crashing).
Again, please feel welcome to remove that annotation @vtjnash. Doing so falls outside the set of things I am focused on at the moment. Best!
Just added PR #25265, which solves the issue in this particular case. But it is a hack; I am not sure whether it is possible to find other types besides AbstractString which give the same behaviour. IMO, there is a design error, which is manifested in the following text [...]
Much thanks for looking into and thinking about these issues @KlausC! :) The present wrapper design indeed has some limitations; some discussion of such limitations and associated tradeoffs exists in #24969, e.g. the last paragraph in https://github.com/JuliaLang/julia/pull/24969#issuecomment-350591284 and discussion elsewhere in that thread of opt-in versus opt-out behavior (similar in principle to what you suggest above). Best!
Some test cases are broken and need rework.
@vtjnash The call to return_type in the function marked @pure was previously (i.e., for all of 0.6) under a call to this method:
if isdefined(Core, :Inference)
_default_eltype(itrt::ANY) = Core.Inference.return_type(first, Tuple{itrt})
else
_default_eltype(itr::ANY) = Any
end
Does the ANY function barrier prevent this from being an issue with @pure functions?
No. Nospecialize is a compiler barrier and doesn't affect inference. (Also, that is only one of several reasons this function isn't pure.)
# note that Adjoint and Transpose must be able to wrap not only vectors and matrices
# but also factorizations, rotations, and other linear algebra objects, including
# user-defined such objects. so do not restrict the wrapped type.
struct Adjoint{T,S} <: AbstractMatrix{T}
parent::S
...
end
@Sacha0 I might be wrong, but I'm wondering if it's better to have S <: AbstractMatOrVec and let adjoint(...) of factorizations/etc. do some immediate, internal (possibly lazy) thing? E.g. the adjoint of an SVD factorization is pretty trivial: just swap U and Vt. The adjoint of lower triangular should be upper triangular (the Adjoint wrapper could potentially apply to the parent). Etc. Of course, I'm not certain what works best for dispatching all the multiplications, etc...
@andyferris I wondered the same at the outset, and continue to grapple with a few related design questions :).
I initially constrained S<:AbstractVecOrMat, but found successive loosenings necessary (first allowing S<:Factorization, then allowing S<:AbstractRotation, and so on). After a few such rounds and some thought, I concluded that providing A[ct]_(mul|ldiv|rdiv)_B[ct][!]'s functionality via mul!/ldiv!/rdiv! + Adjoint/Transpose --- and allowing users to extend that functionality to objects that may not subtype any of the relevant types mentioned above --- requires either absence of constraint on S or, for example, an associated trait system and explicit declarations. The former is practicable and pleasant to work with now, the latter less so. Hence the present design.
Re. allowing adjoint(A)/transpose(A) to return something other than Adjoint(A)/Transpose(A), for example whether to allow transpose(S::SymTridiagonal) = S, I agree that flexibility seems advantageous (with the caveat below) and am pursuing that path while transitioning Adjoint/Transpose's present semantics to adjoint/transpose. The caveat is that adjoint/transpose should be uniformly lazy, by which I mean, e.g., that mul!(adjoint(C), A, B) should always mutate C appropriately. So far this approach is working out reasonably in practice :).
by which I mean, e.g., that mul!(adjoint(C), A, B) should always mutate C appropriately
Yes, I like this definition, rather than requiring an Adjoint wrapper in particular. For instance, it might be best for (immutable) static matrices or a Rotations.Rotation to do a greedy adjoint; this can't really break any code that assumes adjoint is lazy if C isn't mutable in the first place.
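A sketch of the greedy behavior suggested above for an immutable StaticArrays-style type (Mat2 is hypothetical, not a proposal for Base):

```julia
# Sketch: for an immutable 2x2 matrix type, adjoint can eagerly return a
# new value; since instances cannot be mutated, no caller relying on a
# lazy adjoint-of-destination for mul! could be affected.
struct Mat2{T}
    a::T; b::T; c::T; d::T   # column-major: [a c; b d]
end
Base.adjoint(M::Mat2) = Mat2(conj(M.a), conj(M.c), conj(M.b), conj(M.d))
```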
almost certainly due to someone thinking they would be clever and mark the very-not-pure function Core.Inference.return_type as @pure
Almost certainly me thinking I was clever. See #25274.