snoop precompile and package load times #122
Yes, we're still missing a step in the JuMP->Solver compilation chain, because solvers don't have JuMP as a dependency and JuMP doesn't have the solvers as a dependency. If you made a ClarabelJuMP.jl package, then you could eliminate the compilation latency.
Codecov Report

```diff
@@            Coverage Diff             @@
##    main jump-dev/JuMP.jl#122   +/-  ##
==========================================
- Coverage   82.22%   80.73%     -1.50%
==========================================
  Files          37       39         +2
  Lines        2701     2839       +138
==========================================
+ Hits         2221     2292        +71
- Misses        480      547        +67
```
Improves TTFX by:

- `Pkg.jl` was only being used to get the version number from `Project.toml`. Now done with the lighter-weight `TOML.jl`.
- `DataFrames`/`PrettyTables` were only used to print the solver settings; removed in favor of a handwritten `Base.show` that produces the same output.
- `Statistics` was removed since it was only used for `mean`.

Load time for the package is now quite a bit shorter, particularly if MathOptInterface is already loaded. The only remaining slow-loading dependency is StaticArrays.jl, which we require for exponential and power cone problems.
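The `Pkg.jl` to `TOML.jl` swap above can be sketched as follows. `TOML` is a Julia standard library, so it avoids pulling in all of `Pkg` at load time; the `Project.toml` contents and version number shown here are illustrative, not Clarabel's actual values.

```julia
# Illustrative sketch: reading a package version via the stdlib TOML parser
# instead of depending on Pkg.jl. The TOML contents below are made up.
using TOML

project = TOML.parse("""
name = "MyPackage"
version = "0.5.1"
""")

const VERSION_NUMBER = VersionNumber(project["version"])

# Inside a real package, the same idea reads the package's own Project.toml:
#     TOML.parsefile(joinpath(@__DIR__, "..", "Project.toml"))["version"]
```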
TTFX for MOI / JuMP is much better but still a few seconds. I have, I think, done more or less the same as suggested by @odow in this discussion, but using JuMP still seems to result in a lot of compilation time, around the JuMP `optimize!` in particular. Example:
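For reference, a timing measurement of the kind described might look like the following sketch; the model here is an assumed minimal LP, not the example from the original post, and the use of `Clarabel.Optimizer` is an assumption.

```julia
# Hedged sketch: measuring first-solve latency (TTFX) for a tiny JuMP model.
# The model and Clarabel.Optimizer are illustrative assumptions.
using JuMP, Clarabel

@time begin
    model = Model(Clarabel.Optimizer)
    set_silent(model)
    @variable(model, x >= 0)
    @variable(model, y >= 0)
    @objective(model, Min, x + 2y)   # optimum at x = 1, y = 0
    @constraint(model, x + y >= 1)
    optimize!(model)
end
```

On a fresh Julia session, most of the reported time is compilation rather than solve time; running the block a second time shows the compiled cost.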
One strange thing is that loading JuMP seems to partly invalidate the precompilation relating to MOI. The MathOptInterface compilation is done via the function `__precompile_moi()`. Making a fresh start of Julia, this happens:

In other words, the presence of JuMP in the environment seems to force substantial recompilation even if it is not used.
If I then actually run a small JuMP example, I get something like this:
About 1/3 of that time seems to come from the `@variable` macro, and most of the rest from `optimize!(model)`.
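That attribution can be checked step by step with `@time` on each call; a sketch, where the solver and model are again illustrative assumptions:

```julia
# Hedged sketch: timing each JuMP step separately on a fresh session to see
# where first-call compilation lands. Clarabel.Optimizer is an assumption.
using JuMP, Clarabel

model = Model(Clarabel.Optimizer)
set_silent(model)
@time @variable(model, x[1:3] >= 0)    # first hit of the @variable machinery
@time @objective(model, Min, sum(x))
@time @constraint(model, sum(x) >= 1)
@time optimize!(model)                 # remaining compile time mostly here
```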