
Further improve test coverage

Open mperrin opened this issue 9 years ago • 6 comments

This is just a catch-all issue for places that need more test coverage.

  • [ ] equivalence of inverse MFT with inverse FFT
  • [ ] Instrument class has pretty sparse coverage so far
  • [ ] None of the display code is tested at all; needs setup for headless testing without on screen drawing.
  • [ ] Band-limited coronagraph needs test coverage
  • [ ] MultiHexagonAperture needs test coverage
  • [ ] Zernike utility functions need test coverage

mperrin avatar Feb 12 '15 22:02 mperrin

  • [ ] add tests for `offset_x`, `offset_y`, and `rotation` options to AnalyticOpticalElements. See #7.

mperrin avatar Aug 18 '15 15:08 mperrin

Now that we have coveralls working again, it's somewhat more user-friendly to address this. https://coveralls.io/github/mperrin/poppy

The big offender on lacking coverage is the display code, which barely gets exercised by the existing test suite at all. @josePhoenix do you have any experience with best practices for writing tests on matplotlib code? Can you think of anyone here we could chat with?

I'm not even sure what level of thoroughness we should target. It might be enough to just ensure all the plotting and display code runs without crashing on various test-case inputs, without putting much (or any) effort into pixel-level evaluation of the correctness of the outputs. Resources are limited and this isn't our top priority, but I'd like to at least take a first-order pass rather than totally neglect about a quarter of the overall codebase.
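A minimal sketch of that "runs without crashing" smoke-test approach, assuming the headless Agg backend is selected before pyplot is imported. The `show_psf` function here is a hypothetical stand-in for one of poppy's display methods, not poppy's actual API:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend: no on-screen window needed
import matplotlib.pyplot as plt
import numpy as np

def show_psf(image):
    """Hypothetical stand-in for a display method: image plus colorbar."""
    fig, ax = plt.subplots()
    im = ax.imshow(image, origin="lower")
    fig.colorbar(im, ax=ax)
    return fig

def test_display_smoke():
    """Passes if the display code runs to completion; checks nothing else."""
    fig = show_psf(np.random.rand(32, 32))
    fig.canvas.draw()   # force an actual render, not just artist creation
    plt.close(fig)      # clean up figure state between tests

test_display_smoke()
```

This gives no guarantee the output looks right, but it does catch crashes and API breakage in display code paths at essentially zero maintenance cost.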

mperrin avatar Sep 26 '16 17:09 mperrin

This gets back to what I brought up a while ago re: using the object-oriented API instead of PyPlot. Testing code that uses PyPlot means you have to understand how to query the PyPlot state machine... which I don't relish the thought of. Of course, pixel-wise comparisons of output plots would also work, and that is in fact how matplotlib tests itself. The PNG backend uses the same Agg library as the display backends, so we can be confident that correctly producing the PNG means users will see the Right Thing.
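To illustrate the point: with the object-oriented API the test can interrogate the returned `Figure` and `Axes` objects directly, with no pyplot state machine involved. A sketch (the `plot_profile` function and its labels are invented for the example):

```python
import numpy as np
from matplotlib.figure import Figure
from matplotlib.backends.backend_agg import FigureCanvasAgg

def plot_profile(radii, values):
    """Build a plot with the object-oriented API: no pyplot state at all."""
    fig = Figure()
    FigureCanvasAgg(fig)            # attach a canvas so the figure can render
    ax = fig.add_subplot(1, 1, 1)
    ax.plot(radii, values, label="radial profile")
    ax.set_xlabel("radius [arcsec]")
    ax.set_title("PSF radial profile")
    return fig

# A test can then assert on the artists themselves:
fig = plot_profile(np.linspace(0, 1, 10), np.linspace(1, 0, 10))
ax = fig.axes[0]
assert ax.get_title() == "PSF radial profile"
assert len(ax.lines) == 1
```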

josePhoenix avatar Sep 26 '16 17:09 josePhoenix

Reference: http://aosabook.org/en/matplotlib.html#fig.matplotlib.regression

josePhoenix avatar Sep 26 '16 17:09 josePhoenix

I like the idea from that reference of putting together a few end-to-end tests, comparing the results to static pre-generated PNGs, and using a thresholded histogram of the differences. I'm totally willing to buy their point that such an approach is more efficient than trying to write lots of little individual unit tests.
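The core of that comparison can be sketched in a few lines: rasterize the figure with Agg, then compute an RMS pixel difference against the reference and assert it falls under a threshold. Here the "baseline" is generated on the fly for self-containment; in a real suite it would be a checked-in PNG loaded from disk:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

def render_to_array(fig):
    """Rasterize a figure to an RGBA numpy array via the Agg backend."""
    fig.canvas.draw()
    return np.asarray(fig.canvas.buffer_rgba(), dtype=float)

def rms_difference(a, b):
    """Root-mean-square pixel difference between two rendered images."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def make_plot():
    fig, ax = plt.subplots()
    ax.plot([0, 1, 2], [0, 1, 4])
    return fig

baseline = render_to_array(make_plot())  # in practice: a checked-in PNG
current = render_to_array(make_plot())
assert rms_difference(baseline, current) < 1e-6  # identical renders agree
plt.close("all")
```

The threshold is the knob that trades sensitivity against robustness to harmless rendering differences (fonts, antialiasing) across platforms.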

mperrin avatar Sep 26 '16 17:09 mperrin

There are some utilities in matplotlib.testing.compare that we could probably repurpose. They use nosetests instead of pytest, so I don't think we can grab their decorator as-is.
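Even without their nose decorator, `compare_images` itself is test-runner-agnostic: it returns `None` on a match and a failure message otherwise, so a plain assert works under pytest. A hedged sketch of such a wrapper (the helper name `assert_matches_baseline` is invented, and the baseline is generated in a temp dir here rather than checked in):

```python
import os
import tempfile
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
from matplotlib.testing.compare import compare_images

def assert_matches_baseline(fig, baseline_path, actual_path, tol=10):
    """Hypothetical pytest-friendly helper: save the figure, then compare
    it to a pre-generated baseline PNG with matplotlib's own comparator."""
    fig.savefig(actual_path)
    result = compare_images(baseline_path, actual_path, tol=tol)
    assert result is None, result   # None means the images matched

def make_plot():
    fig, ax = plt.subplots()
    ax.plot([1, 2, 3], [1, 4, 9])
    return fig

with tempfile.TemporaryDirectory() as tmp:
    baseline = os.path.join(tmp, "baseline.png")
    actual = os.path.join(tmp, "actual.png")
    make_plot().savefig(baseline)   # stands in for a checked-in reference PNG
    assert_matches_baseline(make_plot(), baseline, actual)
plt.close("all")
```

On a mismatch, `compare_images` also writes a `*-failed-diff.png` alongside the actual image, which is handy for debugging failures.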

josePhoenix avatar Sep 26 '16 17:09 josePhoenix