SciMLBenchmarks.jl
Add MethodOfLines Benchmark
Adds a benchmark for MethodOfLines.jl using the Burgers equation.
You should extend https://benchmarks.sciml.ai/html/MOLPDE/burgers_fdm_wpd.html to show the performance difference against the other formulation as well.
@staticfloat or @anandijain , the hash thing is back?
I fixed a couple of problems in https://github.com/SciML/SciMLBenchmarks.jl/pull/426; merge that, then rebase this on top of master.
Let's get this updated. See https://benchmarks.sciml.ai/dev/StiffODE/Bruss/. We can now have multiple problems and allow for a :prob_choice in the benchmark. It would be interesting to update this with alternative prob choices for the FDM ones and see how MethodOfLines.jl performs against the hand-written discretizations.
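For context, a minimal sketch of what a multi-problem WorkPrecisionSet with :prob_choice looks like, following the Bruss benchmark linked above (the problem and reference-solution names here are illustrative placeholders, not the actual benchmark variables):

```julia
using OrdinaryDiffEq, DiffEqDevTools

# Illustrative: a vector of equivalent formulations of the same PDE problem,
# e.g. hand-written FDM and the MethodOfLines discretization, with matching
# reference solutions. These would come from the benchmark's setup code.
probs = [prob_fdm, prob_mol]
test_sols = [sol_fdm, sol_mol]

abstols = 1.0 ./ 10.0 .^ (5:8)
reltols = 1.0 ./ 10.0 .^ (1:4)

# :prob_choice selects which formulation each setup is solved against;
# setups without it default to the first problem in the vector.
setups = [Dict(:alg => KenCarp4()),
          Dict(:alg => KenCarp4(), :prob_choice => 2)]

wp = WorkPrecisionSet(probs, abstols, reltols, setups;
                      names = ["FDM KenCarp4", "MOL KenCarp4"],
                      appxsol = test_sols, save_everystep = false)
```

This lets the same work-precision diagram compare solver performance across problem formulations, not just across solvers.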
I already have the working bruss code, let me see how to integrate this.
Should I also benchmark discretization time?
I expect WorkPrecisionSet will need special handling for pdesol; I'm just passing the original_sol at the moment. I will duplicate all the MTKitized algs, as that's the most equivalent.
@ChrisRackauckas how is bruss looking now? And will unused prob kwargs be passed down to the solve, or do I need to do something special? Need to avoid wrapping.
Add it as a 4th version in Bruss. Don't delete the fastprob one.
@ChrisRackauckas it isn't deleted; it is a 4th prob and a 4th test_sol. You may have been looking at Burgers, which I have rolled back as it's awaiting SplitODESystem.
But you didn't add it to the wp setups
@ChrisRackauckas looking at the wp setups:
(1) I notice that fastbruss is not included everywhere; does this need fixing? Otherwise, should I leave MOL out of the setups where fastbruss is omitted?
(2) In two of the setups, only the MTK prob is used; is this correct?
(3) In the last wp setup, the name says the final problem in the set is MTK, but there is no :prob_choice = 2 for this setup. Is this correct?
Most of the different-problem checking is done in just the single test, because that test is for problem formulation while the others test solvers. However, there is the confounding fact that GMRES benefits much less from a fast symbolic Jacobian definition than the other methods (because it doesn't use it), so the balance between solvers can depend on the problem formulation. For that reason, there are some cases where the ModelingToolkit version is included in the solver tests to showcase this effect; notably, that version uses jac=true, sparse=true, and that's why you see a major difference when it's included (specifically with the KLU version).
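For reference, producing the MTK formulation with the fast symbolic Jacobian typically looks like the following (a sketch in the style of the Bruss benchmark; `prob` stands for the hand-written ODEProblem and the solver choice is illustrative):

```julia
using ModelingToolkit, OrdinaryDiffEq, LinearSolve

# Trace the existing hand-written ODEProblem into a symbolic MTK system, then
# rebuild it with a symbolic, sparse Jacobian: this is what jac=true,
# sparse=true enables.
sys = modelingtoolkitize(prob)
mtkprob = ODEProblem(sys, [], prob.tspan, jac = true, sparse = true)

# KLU factorization exploits the sparse symbolic Jacobian directly; GMRES
# largely does not, since it only needs Jacobian-vector products.
sol = solve(mtkprob, TRBDF2(linsolve = KLUFactorization()))
```

This is why the KLU setups show the biggest gap between the hand-written and MTK formulations.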
Failures look like typos
Looks like the failures were caused by MethodOfLines not being in the manifest; I have run Pkg.resolve().
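For anyone hitting the same thing, the fix amounts to something like this from the benchmark's project directory (the activated path is assumed; adjust to wherever the benchmark's Project.toml lives):

```julia
using Pkg

Pkg.activate(".")           # activate the benchmark's project folder
Pkg.add("MethodOfLines")    # record the dependency in Project.toml
Pkg.resolve()               # bring Manifest.toml back in sync with Project.toml
```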
The benchmarks are running on 1.7, shouldn't this be updated to 1.8?
Rebase for v1.8
Rebased
need to update the manifest
In the manifest: `julia_version = "1.8.1"`
You mean this should be 1.8.2, or omitted? There are examples of both styles across the package
It should be v1.8.2 to match the CI machine. I think that's why it's greyed out and I cannot unblock the runs.
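Concretely, the header of the regenerated Manifest.toml should read as follows (the manifest_format line is assumed from the standard Manifest.toml layout):

```toml
# Manifest.toml header: julia_version must match the CI machine
julia_version = "1.8.2"
manifest_format = "2.0"
```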
There's failures in a good chunk of these.
Does WPSet pass kwargs down to the solve? I need to set wrap=Val(false).
It does. If it's a problem-specific thing, then the setup will too.
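Going by the answer above, that would make the setup look something like this (a sketch only; whether wrap=Val(false) is accepted per-setup this way is exactly what was being confirmed here):

```julia
# Per the discussion: WorkPrecisionSet forwards kwargs to solve, and
# setup-level entries are forwarded too, so problem-specific options can live
# in the setup dict. wrap = Val(false) skips wrapping the result in the PDE
# solution interface so the raw ODE solve is what gets timed.
setups = [Dict(:alg => KenCarp4(), :prob_choice => 2, :wrap => Val(false))]
```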
Builds failed.
@xtalax status here?
Status is that I can't work out why this is still failing. I could copy out the code and project and try to run it manually to debug.
@ChrisRackauckas this should be done now
Okay let's continue the MethodOfLinesPDE set in a later PR