first try at adding a Seeding dynamic
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 84.61%. Comparing base (723e62c) to head (794243c). Report is 4 commits behind head on main.
Additional details and impacted files
```
@@            Coverage Diff             @@
##             main    #1367      +/-   ##
==========================================
+ Coverage   84.30%   84.61%    +0.30%
==========================================
  Files         373      375        +2
  Lines        9160     9247       +87
==========================================
+ Hits         7722     7824      +102
+ Misses       1438     1423       -15
```
Some sanity-check tests to add might be: (1) checking that turning on the Seeding dynamic with time_window=0 yields the same results as no seeding; (2) checking that all particles have multiplicity > 0 by the end of the seeding window; and then, as a more physical check, (3) checking that droplet number concentrations are larger and droplet sizes are smaller when seeding is on.
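Once the relevant runs are available, these checks could be wired up in one small helper. The sketch below is hypothetical: the function name and the values it receives are assumptions, to be produced by actual parcel runs with and without seeding.

```python
import numpy as np

def assert_seeding_sanity(
    conc_no_seeding,            # droplet number concentration, seeding disabled
    conc_zero_window,           # seeding enabled but time_window=0
    conc_seeded,                # seeding enabled, nonzero time_window
    radius_no_seeding,          # mean droplet radius, seeding disabled
    radius_seeded,              # mean droplet radius, seeding enabled
    multiplicity_after_window,  # per-super-particle multiplicities after the window
):
    """Hypothetical sanity checks for the Seeding dynamic (sketch only)."""
    # (1) time_window=0 must reproduce the no-seeding run
    np.testing.assert_allclose(conc_zero_window, conc_no_seeding)
    # (2) no empty super-particles left once the seeding window closes
    assert (np.asarray(multiplicity_after_window) > 0).all()
    # (3) physical check: seeding -> more, smaller droplets
    assert conc_seeded > conc_no_seeding
    assert radius_seeded < radius_no_seeding
```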
Hopefully these will address the warnings from Codecov :)
In https://github.com/open-atmos/PySDM/pull/1367/commits/292d4fb4e388029bda17e8563d4ce2709343523d, there is now a seeding/no-seeding plot layout that also depicts what happens without seeding.
@slayoo I tried to write a unit test for the Seeding dynamic; it's here: https://github.com/claresinger/PySDM/blob/sd_injection/tests/unit_tests/dynamics/seeding/test_parcel_seeding_sanity_checks.py. I'm not sure I have permission to commit changes to this PR... or at least I didn't figure out how to do that, sorry!
@claresinger, thanks!
I'd vote for placing this type of test into smoke_tests rather than unit_tests - it checks much more than just the seeding dynamic "unit" (and it takes much more time than a unit test). Also, we could avoid copy-pasting code and run the hello_world notebook via the notebook_vars fixture and assert on its variables (like we did here: https://github.com/open-atmos/PySDM/blob/main/tests/smoke_tests/parcel_a/lowe_et_al_2019/test_fig_s2.py)?
To push to this PR's branch, you should be able to do:

```shell
git remote add slayoo git@github.com:slayoo/PySDM.git
git push slayoo sd_injection:sd_injection
```
great that the push worked, thanks! I'll try to reduce code duplication in the smoke test now
BTW, adding a new folder in smoke tests requires also adding an entry here: https://github.com/open-atmos/PySDM/blob/55e0cce99bd582df5aeb5136289bd94289b687c6/.github/workflows/tests%2Bartifacts%2Bpypi.yml#L128
Also, we could avoid copy-pasting code and run the hello_world notebook via the notebook_vars fixture and assert on its variables...
Looking closer, I'd say we can simplify the test further to just use a Box env and a few timesteps, without invoking all the notebook machinery. But let me look at it tomorrow.
here it is: https://github.com/open-atmos/PySDM/pull/1367/commits/d3318e68c6c405df145e6fe036c23ffada20a50d
smoke test with asserts on the values in the hello-world notebook (sd_count & rain mixing ratio) added ✓
So, at this point we are missing:
- [x] unit tests for sampling logic
- [ ] saner handling of the pool of candidates for sampling (currently the whole vector is reshuffled regardless of how many particles we inject)
- [x] more "physics tests", like Clare proposed: checking for reduced concentration (without collisions?), checking for change in sizes (without collisions?), would be great to come up with more (incl. collisions)
Nice to have, but likely OK to be done in another PR:
- [ ] multidimensional example (with position attribute sampled)
- [ ] example based on literature
- [ ] parallelisation of the sampling logic
- [ ] GPU support
Hi @slayoo - thanks so much for all the work here! Sorry for the delay - I've been away on a hiking holiday :) Things are looking great. I agree with your assessment of what we need to include in this PR vs. what should wait for future PRs.
For the unit tests of sampling logic, what do we mean by that? Do you mean something like checking that the seeded particle sizes/other attributes reproduce the specified distribution of sizes/other attributes well? Or do you mean something even simpler, more "unit"-like?
@claresinger
For the unit tests of sampling logic, what do we mean by that? Do you mean something like checking that the seeded particle sizes/other attributes reproduce the specified distribution of sizes/other attributes well? Or do you mean something even simpler, more "unit"-like?
The sampling logic is external to this PR (i.e., no files changed in PySDM/initialisation/sampling). The minimum we would need here is to assert on the actions performed by the newly introduced injection logic, so essentially playing with various settings of the seeding method arguments:
- multiplicity
- extensive_attributes
- seeded_particle_index
- seeded_particle_multiplicity
- seeded_particle_extensive_attributes
- number_of_super_particles_to_inject
and asserting if the outcome is consistent with the argument setting.
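For illustration, that kind of outcome-vs-arguments assertion could look like the following. The inject function here is a hypothetical pure-Python stand-in, not PySDM's backend implementation; it assumes null (to-be-seeded) slots are marked by zero multiplicity.

```python
import numpy as np

def inject(multiplicity, extensive_attributes, *,
           seeded_particle_index, seeded_particle_multiplicity,
           seeded_particle_extensive_attributes,
           number_of_super_particles_to_inject):
    """Hypothetical stand-in: fill zero-multiplicity (null) slots with seeds."""
    null_slots = np.flatnonzero(multiplicity == 0)
    assert len(null_slots) >= number_of_super_particles_to_inject
    for k in range(number_of_super_particles_to_inject):
        slot, src = null_slots[k], seeded_particle_index[k]
        multiplicity[slot] = seeded_particle_multiplicity[src]
        for name, values in extensive_attributes.items():
            values[slot] = seeded_particle_extensive_attributes[name][src]

# asserting that the outcome is consistent with the argument settings:
mult = np.array([10, 0, 0])
attrs = {"water mass": np.array([1.0, np.nan, np.nan])}
inject(mult, attrs,
       seeded_particle_index=[0],
       seeded_particle_multiplicity=[5],
       seeded_particle_extensive_attributes={"water mass": [2.0]},
       number_of_super_particles_to_inject=1)
assert mult[1] == 5 and attrs["water mass"][1] == 2.0
```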
Remaining items, to be handled in another PR, are documented in issue #1387
Pylint is passing. ✅ But the tests that pass locally are failing in CI due to a RuntimeWarning. ❌
Pylint is passing. ✅ But the tests that pass locally are failing in CI due to a RuntimeWarning. ❌
Thank you, @claresinger !
In the workflow, we invoke pytest with the -We option to treat warnings as errors
I think the problem is that discretise_multiplicities does not expect all values to be NaN, only some. But that excludes the case where there are no initial particles, only seed particles, like we are testing.
I think this is a valid case, so we should update the function and add a unit test for this case.
I think the problem is that discretise_multiplicities does not expect all values to be NaN, only some. But that excludes the case where there are no initial particles, only seed particles, like we are testing. I think this is a valid case, so we should update the function and add a unit test for this case.
Good point! Thank you!
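A minimal sketch of the tolerant behaviour discussed above — note this is not PySDM's discretise_multiplicities, just an illustration of letting an all-NaN input (a seed-slots-only population) pass through instead of triggering an error:

```python
import numpy as np

def discretise_tolerant(values):
    """Round real-valued multiplicities to integers; NaN entries mark
    reserved seed slots and become zeros instead of raising."""
    nan_mask = np.isnan(values)
    out = np.where(nan_mask, 0, np.round(values)).astype(np.int64)
    if nan_mask.all():
        return out  # seed-particles-only case: nothing left to validate
    if (out[~nan_mask] == 0).any():
        raise ValueError("int-casting resulted in multiplicity of zero")
    return out
```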
codecov reports that the only line now missing test coverage is (in particulator.py):

```diff
+ if n_null == 0:
+     raise ValueError(
+         "No available seeds to inject. Please provide particles with nan filled attributes."
+     )
```
And here's a pattern we could use to actually check what type of exception is thrown in case of errors and if the exception message matches some expected string: https://stackoverflow.com/a/77005620/1487910
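For example, along the lines of that pattern (the inject_seeds helper below is a stand-in, not the actual particulator method):

```python
import pytest

def inject_seeds(n_null: int):
    """Stand-in mimicking the guard in particulator.py."""
    if n_null == 0:
        raise ValueError(
            "No available seeds to inject. Please provide particles with nan filled attributes."
        )

def test_injection_error_message():
    # match= is a regular expression searched within str(exception)
    with pytest.raises(ValueError, match="No available seeds to inject"):
        inject_seeds(n_null=0)
```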
codecov reports that the only line now missing test coverage is (in particulator.py):

```diff
+ if n_null == 0:
+     raise ValueError(
+         "No available seeds to inject. Please provide particles with nan filled attributes."
+     )
```
Does it make sense to include a case with both background and seed SDs in the backend test to ensure this coverage?
working on the tests now...
@claresinger, @jtbuch, we've made it!
Just ran the code and everything looks good to me!