inference: ~~refine~~ replace recursion non-detection algorithm #23912

Merged
merged 5 commits into master from jn/infer-norecur-more on Oct 10, 2017

Conversation

@vtjnash (Member) commented Sep 28, 2017

~~Refinement~~ Replacement of the recursion detection and handling from #21933.

The general idea of the first commit is that it is sufficient to prevent recursion/complexity growth over entire edges. Previously, we only considered the callee vertex when detecting recursive method calls, but it is also valid to consider both vertices. By taking the caller into account as well, we can significantly increase the range of cases that inference is allowed to consider without triggering the recursion-detection limiter. This should generally be sufficient to ensure we can infer "bottleneck"-type functions without significantly sacrificing our convergence requirements.
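
As a rough illustration of the difference, here is a standalone sketch with made-up frame and method names; it is not the actual compiler code, just the shape of the vertex-based vs. edge-based check:

```julia
# Hypothetical stand-in for an inference stack frame; real inference tracks
# Methods and signatures rather than Symbols.
struct Frame
    caller::Symbol
    callee::Symbol
end

# Old, vertex-based check: flag recursion whenever the callee already appears
# anywhere on the active inference stack.
is_recursive_vertex(stack, callee) = any(f -> f.callee === callee, stack)

# New, edge-based check: only flag recursion when the same caller=>callee edge
# repeats, so reaching the same callee from a *different* caller is still allowed.
is_recursive_edge(stack, caller, callee) =
    any(f -> f.caller === caller && f.callee === callee, stack)

stack = [Frame(:show_foo, :print_vararg), Frame(:print_vararg, :print_foo)]
is_recursive_vertex(stack, :print_vararg)            # true: callee is already on the stack
is_recursive_edge(stack, :print_foo, :print_vararg)  # false: this particular edge is new
```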

For an example of when this new heuristic comes into effect, consider print: a significant number of these methods are reached from calling vararg print and themselves call vararg print.

print(arg...) = foreach(print, arg)      # vararg print calls the single-argument print methods
print(arg)    = print("hi ", arg, '\n')  # a single-argument method that calls the vararg print again

Naively, those recursive vararg print calls have no obvious complexity bound on their own and thus can't be inferred. However, once this PR also takes into account that each of them has a different caller method, they no longer need to be flagged as recursion.

Thus, this may incidentally fix #23371 (though my actual reason for it is some other WIP I have locally).

Update:
The remaining commits in the PR make it harder to trigger the unbounded-recursion detection, but increase the effective penalty for doing so. Inference now runs in one of two modes, limited or unlimited, which allows us to cache the limited and unlimited inference results separately.
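
A tiny sketch of what such a two-mode cache could look like (the names and data shapes here are invented for illustration; the real caching lives in the compiler's internal data structures):

```julia
# Key results on both the signature and whether they were computed under the
# recursion limiter, so limited results never shadow unlimited ones.
const RESULT_CACHE = Dict{Tuple{Any,Bool},Any}()

function cached_result(sig, limited::Bool, infer)
    get!(RESULT_CACHE, (sig, limited)) do
        infer(sig, limited)   # run (possibly limited) inference and memoize its result
    end
end

# Toy usage: a fake "inference" function that just returns a type.
cached_result(Tuple{typeof(sin),Float64}, false, (sig, lim) -> Float64)
```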

@vtjnash added the compiler:inference (Type inference) label on Sep 28, 2017
@@ -5463,6 +5464,7 @@ t4 = vcat(A23567, t2, t3)
@test t4[11:15] == A23567

for U in unboxedunions
Base.unionlen(U) > 5 && continue # larger values cause subtyping to crash
Member


Related? If so, how? Does the better inference perform more/other subtype checks that now trigger the crash?

&& !istopfunction(tm, f, :setindex!)
# the construct-to-convert method is a bottleneck in inference,
# so just assume that recursion will get prevented at some other point
&& !(method.sig == Tuple{Type, Any}))
Member


💯 for being able to remove all those special cases!

@vtjnash (Member, Author) commented Sep 28, 2017

@nanosoldier runbenchmarks(ALL, vs=":master")

@JeffBezanson (Member)

Still seems to be an inference problem on AV. Oh well.

@nanosoldier (Collaborator)

Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @ararslan

@JeffBezanson (Member)

I've seen the exact same inference issue on AV in #23827 and #23273.

@vtjnash force-pushed the jn/infer-norecur-more branch from ea5bce5 to 5a4dd99 on October 4, 2017 04:50
@vtjnash changed the title from "inference: refine recursion non-detection" to "inference: ~~refine~~ replace recursion non-detection algorithm" on Oct 4, 2017
@vtjnash (Member, Author) commented Oct 4, 2017

@nanosoldier runbenchmarks(ALL, vs=":master")

@martinholters (Member)

A few words on the changes to the type complexity limiting would be appreciated.

@nanosoldier (Collaborator)

Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @ararslan

@vtjnash (Member, Author) commented Oct 5, 2017

The main change is that I relaxed the rules for unbounded-recursion detection; for example, the hard cutoff on depth and length is gone. As a trade-off, once recursion is detected, the handling now limits more aggressively. Overall, that should make this friendlier to code that wants to recursively reduce some tuple (as demonstrated by the nanosoldier results), while being faster at handling cases of "accidental" recursion. Typical cases where I see the latter are depwarn (due to a couple of deprecated primitive functions), the REPL code (due to specializations of refresh_multi_line), and the inference code itself (since it is indeed a heavily recursive algorithm).
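
For reference, here is a minimal example of the "recursively reduce some tuple" pattern mentioned above (the function is made up; it just shows the shape of code the relaxed limiter is meant to keep inferable):

```julia
# Peel one element off the tuple per call; each recursive call sees a strictly
# shorter tuple type, so the signatures shrink rather than grow.
tuplesum(t::Tuple{}) = 0
tuplesum(t::Tuple)   = first(t) + tuplesum(Base.tail(t))

tuplesum((1, 2.0, 3, 4))   # with full inference, the return type here is Float64
```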

…riggering recursion detection

This is generally sufficient to ensure we can infer "bottleneck"-type functions,
without significantly sacrificing our convergence requirements
These should now be covered by the improved recursion detector.
remove code to handle exponential blowup,
since there isn't any
@vtjnash force-pushed the jn/infer-norecur-more branch from 5a4dd99 to b89e88e on October 9, 2017 15:52
@vtjnash (Member, Author) commented Oct 9, 2017

All passing finally (except for new flakiness in sprand tests). Will merge tomorrow.

@vtjnash merged commit 10d470d into master on Oct 10, 2017
@vtjnash deleted the jn/infer-norecur-more branch on October 10, 2017 15:19
@ChrisRackauckas (Member)

Does this fix #22255?

Labels: compiler:inference (Type inference)

Successfully merging this pull request may close these issues:

intermittent type inference of Any in CI

5 participants