i.eb.netrad: add test file
This PR adds a regression test suite for the i.eb.netrad module, which calculates net radiation using energy balance principles. The tests verify the module's correctness and stability across different input conditions.
Tests Included
- test_netrad_output_validity: verifies basic functionality by checking that the module produces the expected raster output with valid statistics, ensuring the core algorithm yields results within the expected ranges.
- test_rn_decreases_with_higher_albedo: confirms the inverse relationship between albedo and net radiation, validating the physical principle that higher reflectivity (albedo) results in lower net radiation (see the sketch after this list).
- test_rn_increases_with_lower_zenith_angle: tests the positive correlation between solar elevation and net radiation, ensuring the module correctly accounts for solar geometry in its calculations.
- test_monotonic_gradient_trend: verifies the expected negative correlation between surface temperature and net radiation, confirming proper handling of the thermal radiation components in the energy balance.
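For illustration, here is a hypothetical sketch of how one of the relationship tests can be structured with the GRASS gunittest framework. It is not the committed test file: the raster names and values are placeholders, and the remaining i.eb.netrad inputs (left as an empty BASE_INPUTS dict) are omitted for brevity; the real suite supplies all required options.

```python
# Hypothetical sketch, not the committed test file.
from grass.gunittest.case import TestCase
from grass.gunittest.main import test
import grass.script as gs


class TestNetRadRelations(TestCase):
    # Placeholder for the remaining i.eb.netrad inputs (temperature, NDVI,
    # emissivity, etc.); the committed test supplies real raster names here.
    BASE_INPUTS = {}

    @classmethod
    def setUpClass(cls):
        cls.use_temp_region()
        cls.runModule("g.region", n=10, s=0, e=10, w=0, rows=10, cols=10)
        # Constant input rasters; names and values are illustrative only.
        cls.runModule("r.mapcalc", expression="albedo_low = 0.1", overwrite=True)
        cls.runModule("r.mapcalc", expression="albedo_high = 0.4", overwrite=True)

    @classmethod
    def tearDownClass(cls):
        cls.del_temp_region()
        cls.runModule(
            "g.remove", type="raster", flags="f",
            name="albedo_low,albedo_high,rn_low_albedo,rn_high_albedo",
        )

    def test_rn_decreases_with_higher_albedo(self):
        # Run the module once per albedo scenario, then compare mean Rn.
        self.assertModule("i.eb.netrad", albedo="albedo_low",
                          output="rn_low_albedo", overwrite=True,
                          **self.BASE_INPUTS)
        self.assertModule("i.eb.netrad", albedo="albedo_high",
                          output="rn_high_albedo", overwrite=True,
                          **self.BASE_INPUTS)
        low = gs.parse_command("r.univar", map="rn_low_albedo", flags="g")
        high = gs.parse_command("r.univar", map="rn_high_albedo", flags="g")
        self.assertLess(float(high["mean"]), float(low["mean"]))


if __name__ == "__main__":
    test()
```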
Note
During a refactoring effort, a helper method intended to abstract r.mapcalc functionality was developed. Small but consistent numerical differences were observed between direct r.mapcalc calls and a refactored helper. To ensure consistency with original results and avoid flakiness, the tests retain explicit r.mapcalc calls rather than a refactored pattern. This numerical sensitivity may be related to the ongoing stability investigation in the i.smap module.
Looking forward to feedback and potential refinements.
> During a refactoring effort, a helper method intended to abstract r.mapcalc functionality was developed. Small but consistent numerical differences were observed between direct r.mapcalc calls and a refactored helper. To ensure consistency with original results and avoid flakiness, the tests retain explicit r.mapcalc calls rather than a refactored pattern. This numerical sensitivity may be related to the ongoing stability investigation in the i.smap module.
Could you elaborate on this?
I originally had the following setUpClass:
```python
@classmethod
def setUpClass(cls):
    cls.use_temp_region()
    cls.runModule("g.region", ...)
    # Repeated r.mapcalc calls for the rasters
    cls.tmp_rasters.extend([...])
```
After refactoring, I had this skeleton:
```python
@classmethod
def create_raster(cls, name, expression):
    cls.runModule("r.mapcalc", ...)
    cls.tmp_rasters.append(name)

@classmethod
def setUpClass(cls):
    cls.use_temp_region()
    cls.runModule("g.region", ...)
    # Single-line calls via helper for the rasters
    cls.create_raster(...)
```
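For context, a hypothetical fleshed-out version of that helper (assuming the name/expression split shown in the skeleton) might look like this; it is illustrative only, not the exact code under discussion:

```python
@classmethod
def create_raster(cls, name, expression):
    """Create a raster from a map algebra expression and register it for cleanup."""
    cls.runModule(
        "r.mapcalc",
        expression=f"{name} = {expression}",
        overwrite=True,
    )
    cls.tmp_rasters.append(name)
```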
When comparing r.univar results for the output rasters between the two approaches, I noticed the following deviations:
| Statistic | Original | Refactored |
|---|---|---|
| max | 472.95246 | 473.32688 |
| mean | 384.14907 | 384.46406 |
| min | 295.21037 | 295.46524 |
| stddev | 40.09836 | 40.12491 |
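For reference, a minimal sketch of how such a comparison could be scripted inside a GRASS session, assuming grass.script is available; rn_original and rn_refactored are placeholder names for the outputs of the two setUpClass variants:

```python
import grass.script as gs


def univar_stats(raster):
    """Return r.univar -g output for a raster as a dict of floats."""
    raw = gs.parse_command("r.univar", map=raster, flags="g")
    return {key: float(value) for key, value in raw.items()}


orig = univar_stats("rn_original")      # placeholder raster name
refact = univar_stats("rn_refactored")  # placeholder raster name
for key in ("min", "max", "mean", "stddev"):
    delta = refact[key] - orig[key]
    print(f"{key}: {orig[key]:.5f} -> {refact[key]:.5f} (delta {delta:+.5f})")
```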
Could this be related to the improved summation in #5750, i.e., the snapshots here were taken before that change and therefore carry some numerical error?
Yes, that’s very likely. I had not pulled the latest changes on my local machine until today, and since #5750 was merged the week before, it is possible the small differences I observed are due to that change. The earlier test snapshots were generated before this improvement; I will regenerate them using the latest code to confirm.
I have tested it after pulling the latest changes, and it works fine with both the previous and updated versions, which suggests the improvements from #5750 likely resolved it. Should I make the changes here?
I doubt it was the r.univar change; the deviations you observed are larger than the changes to the test values in #5750. But I would like to verify this: you can compile the old version and test. If the differences are indeed caused by r.univar, I am somewhat concerned.
I went ahead and checked by compiling the version just before #5750, and interestingly, the tests still passed without any deviations. It is possible the discrepancies were introduced due to something in my local setup, maybe a cached build.
And how many times did you "test" it? (To know if it's a one-off situation or one in twenty).
Having a clean build (versus an incremental build), especially when the libraries changed between the commits being tested, is essential for drawing a definitive conclusion.
When I initially noticed the numerical differences, I ran the test about 4–5 times using the refactored code. After observing consistent deviations, I reverted to the non-refactored version.
More recently, when testing with the version just before #5750, I ran the tests 2–3 times and didn't see any issues. That said, I had not done a fully clean rebuild at the time, so I can recompile cleanly and rerun both versions a few more times to be sure before drawing any firm conclusions.
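As an aside, one rough way to repeat a run several times and gauge flakiness is a small driver loop; the test file path below is an assumption for illustration, not the actual filename in this PR:

```python
# Hypothetical helper to rerun the suite N times and count failures;
# "testsuite/test_i_eb_netrad.py" is an assumed path, not confirmed by this PR.
import subprocess
import sys

RUNS = 20
failures = 0
for _ in range(RUNS):
    result = subprocess.run(
        [sys.executable, "testsuite/test_i_eb_netrad.py"],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        failures += 1
print(f"{failures} failure(s) out of {RUNS} runs")
```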
> I went ahead and checked by compiling the version just before #5750, and interestingly, the tests still passed without any deviations. It is possible the discrepancies were introduced due to something in my local setup, maybe a cached build.
Thanks, that's good news.