Locally change or ignore inference limits #22370
Comments
+1. I tested making it larger and the tests seemed to be fine with it.

```diff
diff --git a/base/inference.jl b/base/inference.jl
index ecb08f7..885de00 100644
--- a/base/inference.jl
+++ b/base/inference.jl
@@ -4,7 +4,7 @@ import Core: _apply, svec, apply_type, Builtin, IntrinsicFunction, MethodInstanc
 #### parameters limiting potentially-infinite types ####
 const MAX_TYPEUNION_LEN = 3
-const MAX_TYPE_DEPTH = 8
+const MAX_TYPE_DEPTH = 9001
 const TUPLE_COMPLEXITY_LIMIT_DEPTH = 3
 const MAX_INLINE_CONST_SIZE = 256
@@ -27,7 +27,7 @@ struct InferenceParams
     function InferenceParams(world::UInt;
                              inlining::Bool = inlining_enabled(),
                              max_methods::Int = 4,
-                             tupletype_len::Int = 15,
+                             tupletype_len::Int = 9001,
                              tuple_depth::Int = 4,
                              tuple_splat::Int = 16,
                              union_splitting::Int = 4,
```
9001 is way better than 42.
I was going to say something like what Tim said. I'm really glad Julia compiles and runs with this. I also like the
One proposal for JuliaLang#22370
#22389 likely falls into the same group of issues. I put in a PR which raises some numbers that seem to matter for the linked issues. I am admittedly not very knowledgeable about the internals in this area, though.
The issue this was originally for was fixed via #27398. But there are still other inference limits that can be involved, like the splat limit, so I changed the title to reflect that one might want to change other limits as well.
That's not actionable |
Why not? Is the macro described impossible to implement? Is this a Wont-Fix? |
No, it's just not actionable. The original problem was badly stated. It proposed a solution, but didn't properly state a problem. We fixed the stated problem (MAX_TUPLETYPE_LEN too low) by removing that limit entirely. |
It has been discussed before that some `@infer_the_crap_out_of_this` macro would be very useful: #16734 (comment)

The `MAX_TUPLETYPE_LEN` is low enough that it's pretty easy to run into issues with it. When any code generation or optimization is occurring, the generated code tends to have these issues. See: #22255, #22275
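To make the limit concrete, here is a minimal sketch (my own illustration, not code from the linked issues) of a tuple longer than the old `tupletype_len` default of 15 — the kind of value whose exact type inference could widen under that limit. Runtime behavior is unaffected either way.

```julia
# Illustration only: a 20-element tuple exceeds the old tupletype_len = 15,
# the threshold past which inference could widen the exact NTuple type.
t = ntuple(identity, 20)   # (1, 2, ..., 20)
@assert length(t) == 20
@assert sum(t) == 210      # 1 + 2 + ... + 20
```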
The problem there seems to be that even if you add parentheses to avoid this issue, `@fastmath` can re-order operations, and broadcast generates its own anonymous functions which might do the same, meaning that you're prone to problems with any long line. In addition, I often see that types with large numbers of fields/parameters won't infer well: https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/blob/master/src/solve.jl#L284
I don't have an MWE for that yet, but I believe something similar is going on there.
It would be nice to be able to selectively tell the compiler "I know this line is longer than normal, please handle it anyway". If anything, it would at least make debugging easier if there were a way to selectively change this, i.e. a one-word macro to test "yes, this is an inference issue due to the recursion limit".
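As a rough sketch of what such a macro could look like: the version below is hypothetical and not an existing Base API; a real implementation would need to thread custom `InferenceParams` (e.g. a larger `tupletype_len`) through to type inference for the enclosed expression, which this placeholder does not do — it only marks and evaluates the expression.

```julia
# Hypothetical sketch -- NOT an existing Base macro. A real version would
# adjust the compiler's inference limits for the enclosed call; this
# placeholder simply evaluates the expression unchanged.
macro infer_the_crap_out_of_this(ex)
    esc(ex)
end

# Usage as proposed above:
result = @infer_the_crap_out_of_this sum(ntuple(identity, 30))
@assert result == 465   # 1 + 2 + ... + 30
```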