CCL
Gaps in benchmark validation for existing features
We've had a lot of new features added rapidly (yay!), which has left us with gaps in the benchmark validation of some of these features in our test suite (less yay). Before we put out paper 2 it would be great to add benchmarks where possible to fill these gaps and catch any dormant accuracy issues.
Here is a list of the functions / classes / methods which currently don't seem to have benchmark tests associated with them. I got this by running pytest with the pytest-cov plugin and going through the report it produced, but I could have made a mistake, so if you know of a benchmark test which covers one of these, please say so.
power.py:
- [ ] sigmaV
nl_pt/tracers.py:
- [ ] translate_IA_norm
nl_pt/power.py:
- [ ] get_pgg - sub_lowk case
- [ ] get_pt_pk2d: only case which is called is IAxIA, other cases not called
neutrinos.py:
- [ ] Omeganuh2
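For `Omeganuh2`, one independent cross-check is the textbook instantaneous-decoupling relation Omega_nu h^2 ≈ sum(m_nu) / 94.1 eV. Below is a minimal sketch of that calculation from first principles (constants and the function name `omega_nu_h2` are my own, not CCL's API; CCL's exact coefficient will differ at the percent level because it includes non-instantaneous-decoupling corrections):

```python
import numpy as np

# Physical constants (approximate CODATA values)
KB_EV_PER_K = 8.617333e-5      # Boltzmann constant [eV/K]
HBAR_C_EV_CM = 1.973270e-5     # hbar * c [eV * cm]
T_CMB_K = 2.7255               # CMB temperature today [K]
ZETA3 = 1.2020569              # Riemann zeta(3)
RHO_CRIT_EV_CM3 = 1.05375e4    # critical density for h = 1 [eV / cm^3]

def omega_nu_h2(sum_mnu_ev):
    """Omega_nu * h^2 for non-relativistic neutrinos of total mass
    sum_mnu_ev [eV], in the instantaneous-decoupling approximation."""
    # Neutrino temperature today: T_nu = (4/11)^(1/3) * T_CMB
    t_nu_ev = (4.0 / 11.0) ** (1.0 / 3.0) * T_CMB_K * KB_EV_PER_K
    # Relic number density per species (nu + nubar, g = 2):
    # n = (3/4) * (zeta(3) / pi^2) * g * T^3
    n_ev3 = 0.75 * (ZETA3 / np.pi**2) * 2.0 * t_nu_ev**3
    n_cm3 = n_ev3 / HBAR_C_EV_CM**3   # convert eV^3 -> cm^-3
    return sum_mnu_ev * n_cm3 / RHO_CRIT_EV_CM3
```

The number density comes out near the familiar ~112 neutrinos per cm^3 per species, so the implied coefficient `sum_mnu / (Omega_nu h^2)` is ~94 eV; a benchmark could check `Omeganuh2` reproduces this in the massive, non-relativistic regime.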
halos/profiles.py:
- [ ] HaloProfile.real - case where a Fourier-space profile is defined and FFTed to get real space
- [ ] HaloProfile.fourier - case where a real-space profile is defined and FFTed to get Fourier space
- [ ] HaloProfile.projected - case where we have rho(k) and directly compute Sigma(R)
- [ ] HaloProfile.cumul2d
- [ ] HaloProfile._fftlog_wrap
- [ ] HaloProfile._projected_fftlog_wrap
- [ ] HaloProfileGaussian
- [ ] HaloProfilePowerLaw
- [ ] HaloProfileNFW - truncated case, cumul2d functionality, non-truncated case for analytic fourier transform
- [ ] HaloProfileEinasto - truncated case
- [ ] HaloProfileHernquist - truncated case
- [ ] HaloProfileHOD - _usat_real, real
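As an illustration of the kind of check the truncated-NFW items above need: the Fourier transform of an NFW profile truncated at r_vir = c * r_s has a closed form (e.g. Cooray & Sheth 2002), so a benchmark can compare it against brute-force radial integration. This is a standalone numpy/scipy sketch, not CCL's implementation; the function names are illustrative:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import sici

def u_nfw_numeric(k, c, rho_s=1.0, r_s=1.0):
    """Normalized Fourier transform of an NFW profile truncated at
    r_vir = c * r_s, via direct radial integration."""
    mass = 4.0 * np.pi * rho_s * r_s**3 * (np.log(1.0 + c) - c / (1.0 + c))
    integrand = lambda r: (4.0 * np.pi * r**2
                           * rho_s / ((r / r_s) * (1.0 + r / r_s)**2)
                           * np.sinc(k * r / np.pi))  # np.sinc(x) = sin(pi x)/(pi x)
    val, _ = quad(integrand, 0.0, c * r_s)
    return val / mass

def u_nfw_analytic(k, c, r_s=1.0):
    """Closed-form truncated-NFW Fourier transform, normalized to
    u(k -> 0) = 1."""
    x = k * r_s
    si1, ci1 = sici((1.0 + c) * x)  # Si and Ci at (1+c)*k*r_s
    si0, ci0 = sici(x)
    norm = np.log(1.0 + c) - c / (1.0 + c)
    return (np.sin(x) * (si1 - si0)
            + np.cos(x) * (ci1 - ci0)
            - np.sin(c * x) / ((1.0 + c) * x)) / norm
```

The two should agree to integrator precision at any k, and the analytic form tends to 1 as k -> 0; the same pattern (analytic reference vs. direct integral) applies to the Einasto and Hernquist truncated cases.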
halos/hmfunc.py:
- [ ] MassFuncDespali15 - _get_fsigma, ellipsoidal case
- [ ] MassFuncTinker2010
- [ ] MassFuncBocquet16 - a bunch of cases
- [ ] MassFuncWatson13 - several cases
halos/hbias.py:
- [ ] HaloBiasSheth99 - get_bsigma, in case of use_deltac_fit
halos/halo_model.py:
- [ ] HMCalculator - I_0_1, halomod_mean_profile_1pt, halomod_bias_1pt
correlation.py:
- [ ] correlation_multipole()
- [ ] correlation_3d_rsd()
- [ ] correlation_pi_sigma()
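For the RSD correlation functions above, one analytic limit that a benchmark could target is linear Kaiser theory, where the multipole-to-real-space ratios have closed forms: 1 + 2*beta/3 + beta^2/5 (monopole), 4*beta/3 + 4*beta^2/7 (quadrupole) and 8*beta^2/35 (hexadecapole). A minimal sketch verifying those coefficients by numerical Legendre projection (standalone numpy/scipy, not CCL's API):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_legendre

def kaiser_multipole(ell, beta):
    """Project the linear Kaiser factor (1 + beta*mu^2)^2 onto the
    Legendre polynomial P_ell; this gives the ratio of the ell-th
    redshift-space multipole to the real-space linear correlation."""
    integrand = lambda mu: (1.0 + beta * mu**2)**2 * eval_legendre(ell, mu)
    val, _ = quad(integrand, -1.0, 1.0)
    return (2.0 * ell + 1.0) / 2.0 * val
```

A benchmark for `correlation_multipole()` / `correlation_3d_rsd()` could check that, on a linear power spectrum, the CCL multipoles reduce to these factors times the real-space correlation.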
boltzmann.py:
- [ ] get_isitgr_pk_lin(): not testing case with massive neutrinos
background.py:
- [ ] angular_diameter_distance()
- [ ] luminosity_distance()
- [ ] Sig_MG, mu_MG
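The distance functions above are easy to cross-check independently: in flat LCDM, D_A = D_C / (1+z) and D_L = (1+z) * D_C, so the Etherington relation D_L = (1+z)^2 * D_A must hold, and D_C(z=1) for Omega_m = 0.3, H0 = 70 lands near 3300 Mpc. A minimal standalone sketch of such a reference calculation (illustrative names, not CCL's signatures):

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light [km/s]

def comoving_distance(z, h0=70.0, omega_m=0.3):
    """Line-of-sight comoving distance [Mpc] in flat LCDM."""
    e_inv = lambda zp: 1.0 / np.sqrt(omega_m * (1.0 + zp)**3 + (1.0 - omega_m))
    val, _ = quad(e_inv, 0.0, z)
    return C_KM_S / h0 * val

def angular_diameter_distance(z, **kw):
    return comoving_distance(z, **kw) / (1.0 + z)

def luminosity_distance(z, **kw):
    return (1.0 + z) * comoving_distance(z, **kw)
```

Benchmarks against CLASS/astropy outputs would be stronger, but even this internal consistency check would catch sign/convention bugs in the unvalidated distance methods.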
tracers.py:
- [ ] ISW
- [ ] Magnification
So in summary: we could use some additional benchmarks, mostly for our new and improved halo functionality (largely the halo profile definitions), plus a few other bits and bobs. Maybe there are public codes, or private code by DESC members, that can plug this gap for the halo functionality.
@damonge @pennalima @vitenti tagging you for any thoughts on where we might find benchmarks for the halo functionality?
Hi @c-d-leonard , I can extend the existing notebook where we compare CCL, NumCosmo and Colossus (regarding the halo profiles), and also create new Jupyter notebooks to test these functions. What do you think? @damonge @vitenti
@pennalima that would be great - what halo profile capabilities does NumCosmo have? I am not super familiar with Colossus - is this an equivalent code? Sorry, I should know that.
No problem @c-d-leonard . :)
CCL, NumCosmo and Colossus implement the 3D densities, the projected mass density and the excess surface mass density (CCL implements the cumulative version). All three implement NFW, Einasto and Hernquist. Colossus also has the DK14 profile.
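For the projected mass density specifically, the NFW case has the Wright & Brainerd (2000) closed form, so any of the three codes can also be checked against the line-of-sight integral directly. A minimal sketch of that comparison (standalone numpy/scipy, with rho_s = r_s = 1; not any code's actual API), the kind of check the comparison notebook could include:

```python
import numpy as np
from scipy.integrate import quad

def sigma_nfw_numeric(R, rho_s=1.0, r_s=1.0):
    """Projected NFW surface density Sigma(R), by integrating the 3D
    profile along the line of sight."""
    rho = lambda r: rho_s / ((r / r_s) * (1.0 + r / r_s)**2)
    val, _ = quad(lambda z: rho(np.hypot(R, z)), 0.0, np.inf)
    return 2.0 * val

def sigma_nfw_analytic(R, rho_s=1.0, r_s=1.0):
    """Closed-form Sigma(R) for an (untruncated) NFW profile,
    following Wright & Brainerd (2000)."""
    x = R / r_s
    if x < 1.0:
        f = (1.0 - 2.0 / np.sqrt(1.0 - x**2)
             * np.arctanh(np.sqrt((1.0 - x) / (1.0 + x)))) / (x**2 - 1.0)
    elif x > 1.0:
        f = (1.0 - 2.0 / np.sqrt(x**2 - 1.0)
             * np.arctan(np.sqrt((x - 1.0) / (x + 1.0)))) / (x**2 - 1.0)
    else:
        f = 1.0 / 3.0  # limiting value at x = 1
    return 2.0 * r_s * rho_s * f
```

The excess surface mass density Delta-Sigma follows the same pattern with its own closed form, so the untruncated projected cases have fully analytic references.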
Okay, that's great to know, thanks @pennalima! From what I can see we do already seem to have benchmark comparisons for most of the standard iterations of these cases, just some edge cases (e.g. when we are truncating the profiles) that are not covered. So I don't want to ask you to do a lot of work if we already have most of what we need. @damonge can you confirm I've correctly understood what's missing here?
@damonge bumping the above question - have I understood right what's missing in benchmarks from the halo profile stuff?
Hi @c-d-leonard , as we discussed today, I would like to create a list of the cross-checks/benchmarks we need to do (at least those for CCL paper 2). Should we list them here?
Sorry for the delay on this @pennalima . This is a good place to make a list, yep. Are you interested in discussing only the benchmarks we are missing in the realm of halo profiles, or more generally? I do have some specifics in mind, but it would be good if you could confirm the scope of what you want the list to cover first.
Did I understand correctly from the call that you have an undergraduate student who is able to help produce some benchmarks for us from NumCosmo and that's why you're wondering? (If so this is great.)
Hi @c-d-leonard , sorry for my delay now. In principle, I would like to cross-check all CCL functionalities with NumCosmo (and other libraries), so we can create a general list. After the last DESC meeting, you and others suggested creating a project for this. I will do that; that way it will be easier to get other people involved.
Soon I will know if I will have an undergrad student on this. :)
@pennalima Okay, thanks. So, would it be correct to say that what you are looking for is basically the same list as above in the first post of this issue, but instead of listing specific function names only, giving more information on the physical quantities to be checked in each case?
For example, instead of:
- [ ] HaloProfile.real - case where a Fourier space case is defined and FFTed to get real space
you would want:
- [ ] Transform input arbitrary Fourier-space halo profile rho(k) -> rho(r). CCL method: HaloProfile.real
Is that right? If so, I can make a version of the list above with this more physical information.
Hi @c-d-leonard , that would be great.