Add tests of various features
Ok, I have tried to rewrite various tests in Catalyst to work within MTK (a bit more complicated since each system has to be generated directly, and cannot be generated from a single ReactionSystem model). These tests primarily go through various ways of interacting with SciML structures and giving input to problems, across various system types.
I have tried to go through all broken tests and mark the related issues. Unfortunately there are quite a few, so someone else would probably need to have a second look at each to check that the issues are accurate.
The tests fail?
Not too surprised. I will have a look locally (the test log is too large to actually see what is going on)
Ok, tests pass now (except for one downstream test)
I should have mentioned: I'll move the tests when I PR to SciMLBase to fix the bugs
Thanks!
Yeah, I am a bit uncertain what actually goes where now that the MTK pipeline has been split up across so many different places. To me these are basically tests of the System types that MTK implements. But I agree that, if possible, it would make sense to move them as far internal as possible. If you point out what goes where I could do it, but it might be easier for you to do it yourself.
For the save_idx, I think it has just been broken since the very beginning. It is something that should be supported, so I don't think it is bad to keep the tests in until it actually gets fixed, even if that might be a while.
Are we sure that the modified versions of these that got ported to SciMLBase are actually exhaustive? I.e. regressions that I caught with these tests in Catalyst still appear in the ecosystem: https://github.com/SciML/ModelingToolkit.jl/issues/2838
I have modified this one, removing the SciMLBase-related stuff, and also adding tests that static vectors can be used as input across all problem types. @AayushSabharwal
Haven't heard anything in a while, can we merge this @AayushSabharwal @ChrisRackauckas ?
This needs a bit of work. All of the @test_brokens are not set up correctly, and the actual tests are commented out.
So, there are a large number of cases where I do

```julia
@test_broken false # Does not work for certain inputs, likely related to https://github.com/SciML/ModelingToolkit.jl/issues/2804.
for u0 in u0_alts_vec, p in p_alts_vec
    oprob = remake(base_oprob; u0, p)
    # @test base_sol == solve(oprob, Tsit5(); saveat = 1.0)
    eprob = remake(base_eprob; u0, p)
    # @test base_esol == solve(eprob, Tsit5(); trajectories = 2, saveat = 1.0)
end
```

because some of the tests in the loop are broken and some are not, so just using either @test or @test_broken would cause test failures. Separating out the cases would require tons of lines to be written. Hence I comment out the actual test lines, and then write

```julia
@test_broken false
```

to record in the test file that there is something going on, so that the fact that something is broken is not lost. Similarly in cases where the full test is commented out (as in some cases something in the setup fails, and sometimes not).
Then why is it important for this to merge if the @test_brokens aren't actually set up to tell us whether they are still broken? That's the only point of @test_broken.
It is meant as a reminder: when you run the tests, it tells you that there are some broken tests and where (and links the relevant issue).
Do you want me to just remove the @test_broken and comment out the relevant bits?
@ChrisRackauckas I removed the stuff with the broken test, we can add them back later when they work. This should be good now. @AayushSabharwal do you have any last comments?
Using @test_broken is fine, but the way you did it would not give an appropriate error when the test is fixed, so it wasn't actually useful. It would be good to add correct @test_brokens where appropriate.
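For reference, one way to keep per-case broken markers without writing a separate line for every case is the `broken =` keyword of `@test` (available since Julia 1.7), which reports `Broken` while the failure persists and errors once the case unexpectedly starts passing. A minimal self-contained sketch, assuming a hypothetical `currently_failing` predicate that identifies the known-bad inputs (in the actual test files this would be a check on `u0`/`p` instead):

```julia
using Test

# Hypothetical inputs; assume only the second one currently hits the bug.
inputs = [1, 2, 3]
currently_failing(x) = (x == 2)  # placeholder predicate for known-broken cases

@testset "per-case broken markers" begin
    for x in inputs
        # `broken = true` reports Broken while the bug persists,
        # and turns into a test Error once the case unexpectedly passes,
        # so fixed cases are flagged automatically.
        @test x != 2 broken = currently_failing(x)
    end
end
```

This keeps the loop structure from the existing tests while still recording exactly which input combinations are broken, instead of a single detached `@test_broken false`.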