It has been discussed before that some @infer_the_crap_out_of_this
macro would be very useful:
https://github.com/JuliaLang/julia/pull/16734#issuecomment-223842270
The MAX_TUPLETYPE_LEN
limit is low enough that it's pretty easy to run into. Whenever code generation or optimization is involved, the generated code tends to hit it. See:
https://github.com/JuliaLang/julia/issues/22255
https://github.com/JuliaLang/julia/issues/22275
The problem there seems to be that even if you add parentheses to avoid this issue, @fastmath
can re-order operations, and broadcast generates its own anonymous functions which might do the same, meaning that you're prone to problems on any large line. In addition, I often see that types with large numbers of fields/parameters won't infer well:
https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/blob/master/src/solve.jl#L284
I don't have an MWE for that yet, but I believe something similar is going on there.
It would be nice to be able to selectively tell the compiler "I know this line is longer than normal, please handle it anyway". If anything, it would at least make debugging easier if there were a way to selectively change this, i.e. a one-word macro to test "yes, this is an inference issue due to the recursion limit".
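In the meantime, the existing tooling can answer part of that "is this an inference issue?" question. A minimal sketch (the function `long_sum` and the tuple length are made up for illustration, not taken from the linked issues):

```julia
using Test

# Hypothetical example: splatting a long tuple, the kind of pattern
# that can bump into tuple-length inference limits.
long_sum(t) = +(t...)

t = ntuple(i -> i * 1.0, 20)  # NTuple{20, Float64}

# Test.@inferred errors if the return type of the call cannot be
# inferred exactly, so it serves as a quick inference check.
@inferred long_sum(t)

# @code_warntype prints the inferred types for each step and
# highlights non-concrete ones, showing where inference gives up:
# julia> @code_warntype long_sum(t)
```

This doesn't raise any limit, of course; it only confirms whether inference succeeded for a given call.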
+1. I tested making it larger and the tests seemed to be fine with it.
diff --git a/base/inference.jl b/base/inference.jl
index ecb08f7..885de00 100644
--- a/base/inference.jl
+++ b/base/inference.jl
@@ -4,7 +4,7 @@ import Core: _apply, svec, apply_type, Builtin, IntrinsicFunction, MethodInstanc
#### parameters limiting potentially-infinite types ####
const MAX_TYPEUNION_LEN = 3
-const MAX_TYPE_DEPTH = 8
+const MAX_TYPE_DEPTH = 9001
const TUPLE_COMPLEXITY_LIMIT_DEPTH = 3
const MAX_INLINE_CONST_SIZE = 256
@@ -27,7 +27,7 @@ struct InferenceParams
function InferenceParams(world::UInt;
inlining::Bool = inlining_enabled(),
max_methods::Int = 4,
- tupletype_len::Int = 15,
+ tupletype_len::Int = 9001,
tuple_depth::Int = 4,
tuple_splat::Int = 16,
union_splitting::Int = 4,
9001 is way better than 42. inference.jl has come a long way in a short time!
I was going to say something like what Tim said. I'm really glad Julia compiles and runs with this.
I also like the @infer_harder
idea. Though sometimes I wonder, with all the improvements to inference correctness, whether widening should only trigger for recursion and not for tree-like call graphs (which would, amongst other things, make Vararg
specialization generally work when it is not part of a recursive loop).
https://github.com/JuliaLang/julia/issues/22389 likely falls into the same group of issues. I put in a PR which raises some numbers that seem to matter for the linked issues. I am admittedly not very knowledgeable about the internals in this area, though.
The issue this was originally filed for was fixed via https://github.com/JuliaLang/julia/pull/27398 . But there are still some other inference limits which can be involved, like the splat limit, so I changed the title to reflect that we may want to change other limits as well.
That's not actionable.
Why not? Is the macro described impossible to implement? Is this a won't-fix?
No, it's just not actionable. The original problem was badly stated. It proposed a solution, but didn't properly state a problem. We fixed the stated problem (MAX_TUPLETYPE_LEN too low) by removing that limit entirely.