
Add tests of various features

Open TorkelE opened this issue 1 year ago • 7 comments

Ok, I have tried to rewrite various tests from Catalyst to work within MTK (a bit more complicated, since each system has to be generated directly and cannot be generated from a single ReactionSystem model). These tests primarily go through various ways of interacting with SciML structures, and of giving input to problems, across various system types.

I have tried to go through all broken tests and mark the related issues. Unfortunately there are quite a few, so someone else would probably need to take a second look at each to check that the linked issues are accurate.

TorkelE avatar Jun 15 '24 16:06 TorkelE

The tests fail?

ChrisRackauckas avatar Jun 15 '24 17:06 ChrisRackauckas

Not too surprised. I will have a look locally (the test log is too large to actually see what is going on)

TorkelE avatar Jun 15 '24 18:06 TorkelE

Ok, tests pass now (except for one downstream test)

TorkelE avatar Jun 15 '24 20:06 TorkelE

I should have mentioned: I'll move the tests when I PR to SciMLBase to fix the bugs

AayushSabharwal avatar Jun 16 '24 19:06 AayushSabharwal

> I should have mentioned: I'll move the tests when I PR to SciMLBase to fix the bugs

Thanks!

TorkelE avatar Jun 16 '24 19:06 TorkelE

Yeah, I am a bit uncertain what actually goes where, now that the MTK pipeline has been split up across so many different places. To me these are basically tests of the System types that MTK implements. But I agree that, if possible, it would make sense to move them as far internal as possible. If you point out what goes where, I could do it, but it might be easier for you to do it yourself.

TorkelE avatar Jun 16 '24 19:06 TorkelE

For the save_idx, I think it has just been broken since the very beginning. It is something that should be supported, so I don't think it is bad to keep the tests in until it actually gets fixed, even if that might be a while.

TorkelE avatar Jun 16 '24 19:06 TorkelE

Are we sure that the modified versions of these that got ported to SciMLBase are actually exhaustive? I.e. regressions that I caught with these tests in Catalyst still appear in the ecosystem: https://github.com/SciML/ModelingToolkit.jl/issues/2838

TorkelE avatar Jul 02 '24 00:07 TorkelE

I have modified this one, removing the SciMLBase-related parts, and also adding tests that static vectors can be used as input across all problem types. @AayushSabharwal

TorkelE avatar Jul 06 '24 22:07 TorkelE

Haven't heard anything in a while, can we merge this @AayushSabharwal @ChrisRackauckas ?

TorkelE avatar Jul 21 '24 15:07 TorkelE

This needs a bit of work. None of the @test_brokens are set up correctly, and the actual tests are commented out.

ChrisRackauckas avatar Jul 21 '24 15:07 ChrisRackauckas

So, there are a large number of cases where I do

    @test_broken false # Does not work for certain inputs, likely related to https://github.com/SciML/ModelingToolkit.jl/issues/2804.
    for u0 in u0_alts_vec, p in p_alts_vec
        oprob = remake(base_oprob; u0, p)
        # @test base_sol == solve(oprob, Tsit5(); saveat = 1.0)
        eprob = remake(base_eprob; u0, p)
        # @test base_esol == solve(eprob, Tsit5(); trajectories = 2, saveat = 1.0)
    end

because some of the tests in the loop are broken and some are not. Doing either a plain @test or @test_broken for the whole loop would then cause a test failure, and separating out the cases would require tons of lines to be written. Hence I comment out the actual test lines and write

@test_broken false

to record in the test file that there is something going on, so that the fact that something is broken is not lost. It is similar in cases where the full test is commented out (in some cases something in the setup fails, and sometimes it does not).
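One alternative that keeps the real assertions while still recording which cases fail is the `broken` keyword of `@test` (available in the `Test` stdlib since Julia 1.7): the test is reported as `Broken` while the condition holds, and turns into an "unexpected pass" failure once the case is fixed. A minimal self-contained sketch, where `broken_inputs` is a hypothetical stand-in for the failing `u0`/`p` combinations in the loop above:

```julia
using Test

# Hypothetical list of inputs currently known to fail; in the real tests
# this would hold the u0/p alternatives hitting e.g. issue #2804.
broken_inputs = [2]

@testset "per-case broken marking" begin
    for x in [1, 2, 3]
        # `@test expr broken = cond` reports Broken while `cond` is true,
        # and errors with an unexpected pass once `expr` starts holding,
        # so fixed cases are flagged automatically.
        @test isodd(x) broken = (x in broken_inputs)
    end
end
```

In the loop from the snippet above, this would look like `@test base_sol == solve(oprob, Tsit5(); saveat = 1.0) broken = ((u0, p) in broken_cases)`, with `broken_cases` maintained as an explicit list (again, a sketch, not the existing test code).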

TorkelE avatar Jul 21 '24 15:07 TorkelE

Then why is it important for this to merge if the @test_broken isn't actually set up to tell us whether they are still broken? That's the only point of @test_broken.

ChrisRackauckas avatar Jul 21 '24 16:07 ChrisRackauckas

It is meant as a reminder: when you run the tests, it tells you that there are some broken tests and where (and links the relevant issue).

Do you want me to just remove the @test_broken and comment out the relevant bits?

TorkelE avatar Jul 21 '24 18:07 TorkelE

@ChrisRackauckas I removed the broken-test parts; we can add them back later when they work. This should be good now. @AayushSabharwal do you have any last comments?

TorkelE avatar Jul 26 '24 18:07 TorkelE

Using @test_broken is fine, but the way you did it would not give an appropriate error when the test is fixed, and so it wasn't actually useful. It would be good to add correct @test_brokens where appropriate.

ChrisRackauckas avatar Jul 27 '24 03:07 ChrisRackauckas