Fix #2326
This PR does three things:
- Changes the order of the clauses that some if statements check. Each operation there is cheap and in general the order doesn't matter, but these statements can be evaluated millions of times, so extreme cases exist where reordering helps performance (see the first sketch after this list).
- Adds a limit on the number of vubs / vlbs (variable upper / lower bounds: constraints of the form `x <= M z + b`, where `x` is some variable, `z` is some binary, and `M` / `b` are some real values). I made the upper limit fairly large, so I only expect this to affect extreme cases, but the performance should still definitely be checked.
- Fixes an error in calculating the amount of "effort" / "work" a sub-MIP did. After this fix, heuristics will essentially be given a larger budget, as previously the "work" was being overestimated (see the second sketch after this list). I doubt the previous limit was being hit frequently, but for instances where it was, this is a potentially heavy performance change.
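A minimal sketch of the clause-reordering idea from the first point, with hypothetical names rather than the actual HiGHS code: `&&` evaluates left to right and short-circuits, so placing the O(1) test first skips the expensive clause on most of the millions of calls.

```cpp
#include <vector>

// Hypothetical example; the function and names are illustrative only.
bool clauseHolds(bool cheapFlag, const std::vector<double>& coeffs) {
  // Before: expensiveScan(coeffs) && cheapFlag -- the scan always ran.
  // After: the cheap O(1) test runs first; when it fails (the common
  // case), the linear scan is skipped entirely.
  if (!cheapFlag) return false;
  double norm = 0.0;
  for (double c : coeffs) norm += c * c;  // stand-in for the costly clause
  return norm > 1e-12;
}
```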
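And a hedged sketch of the work-accounting fix from the third point. The names and the double-counting shown are my own illustration of how overestimating "work" shrinks the effective heuristic budget, not the actual HiGHS bug.

```cpp
// Hypothetical illustration only; not the actual HiGHS accounting code.
struct HeuristicBudget {
  double budget;        // total "work" the heuristics may spend
  double workUsed = 0;  // effort accumulated across sub-MIP solves

  // If a sub-MIP's effort is over-counted, e.g. by adding both raw
  // iterations and an iteration estimate derived from LP solves,
  //   workUsed += iterations + lpSolves * avgItersPerLp;  // overestimate
  // the budget runs out too early. Counting each unit of work once
  // restores the intended (larger) budget:
  void addSubmipWork(double iterations) { workUsed += iterations; }

  bool exhausted() const { return workUsed >= budget; }
};
```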
Edit: This resolves #2326
Codecov Report
:white_check_mark: All modified and coverable lines are covered by tests.
:white_check_mark: Project coverage is 79.11%. Comparing base (654b38e) to head (d8d4964).
:warning: Report is 283 commits behind head on latest.
Additional details and impacted files
```diff
@@            Coverage Diff             @@
##           latest    #2373      +/-   ##
==========================================
+ Coverage   79.07%   79.11%   +0.03%
==========================================
  Files         346      346
  Lines       84912    84935      +23
==========================================
+ Hits        67147    67195      +48
+ Misses      17765    17740      -25
```
:umbrella: View full report in Codecov by Sentry.
LGTM, assuming the regression tests are OK
The documentation fix is awaiting review in Documenter.jl
Meanwhile, in #2360 I've switched off link checking
@jajhall testing by @fwesselm shows some minor performance improvements. That's always nice for a bug fix. I'd be for merging it now.
I'm going to leave a comment here in case we ever need to revisit this PR: when I wrote this, I decided on a cap of 5000000 + 10 * ncols for how many items can be stored in the variable bound tree data structure. That helps avoid some worst-case scenarios where the solve time is dominated by inserting elements into the tree. I believe the value is large enough not to affect any reasonable solve, although it's possible that some instance out there might benefit from going beyond this limit.
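For reference, a sketch of how such a cap might be enforced; only the limit formula comes from this discussion, everything else is hypothetical. With ncols = 1,000,000 the cap works out to 5,000,000 + 10,000,000 = 15,000,000 entries.

```cpp
#include <cstdint>

// Hypothetical sketch; names and structure are illustrative, only the
// 5000000 + 10 * ncols limit is from the PR discussion.
struct VariableBoundTree {
  int64_t numEntries = 0;
  const int64_t maxEntries;

  explicit VariableBoundTree(int64_t ncols)
      : maxEntries(5000000 + 10 * ncols) {}

  // Refuse inserts once the cap is hit, so solve time cannot be
  // dominated by tree insertions in pathological instances.
  bool tryInsert(/* vub/vlb entry: x <= M z + b */) {
    if (numEntries >= maxEntries) return false;
    // ... insert into the underlying tree ...
    ++numEntries;
    return true;
  }
};
```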