Use hypothesis to improve tests
Describe the behavior you would like added to mBuild
While our unit tests have good coverage in terms of lines of source code and basic usage, there has been very little exploration of the boundaries of what this package can handle. Adding hypothesis to some tests could improve the robustness of mBuild and push us to better validate the inputs to functions.
Describe the solution you'd like
https://hypothesis.readthedocs.io/en/latest/
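Something along these lines is what I have in mind (just a sketch; it assumes mBuild's `Compound(pos=...)`, `.xyz`, and `.translate()` behave as in current mBuild, and the coordinate range and tolerance are arbitrary choices):

```python
# Just a sketch: property-based round trip for Compound.translate.
# Assumes mbuild's Compound(pos=...), .xyz, and .translate() behave as in
# current mBuild; the coordinate range and tolerance are arbitrary choices.
import mbuild as mb
import numpy as np
import hypothesis.strategies as st
from hypothesis import given, settings

coords = st.floats(min_value=-100.0, max_value=100.0,
                   allow_nan=False, allow_infinity=False)

@settings(deadline=None)
@given(by=st.tuples(coords, coords, coords))
def test_translate_roundtrip(by):
    compound = mb.Compound(name="C", pos=[0.0, 0.0, 0.0])
    before = compound.xyz.copy()
    # Translating forward and then back should return the original coordinates.
    compound.translate(np.asarray(by))
    compound.translate(-np.asarray(by))
    assert np.allclose(compound.xyz, before, atol=1e-6)
```

When a test like this fails, hypothesis shrinks the input to a minimal counterexample, which makes it much easier to see which boundary case broke.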
Describe alternatives you've considered
Beating my head against the wall when a small change breaks something elsewhere.
Additional context
I'm going to take a stab at this.
I think I understand the basics of how hypothesis works, but it's worth discussing at the next meeting which specific functionalities of mBuild we would like to have covered by property-based testing.
Due to SciPy 2020, I'm taking another look at this. I think we can greatly improve our testing of reading/writing using hypothesis, and I hope to have a minimum working example in a PR sometime today.
Here are some simple examples I worked on this morning: https://gist.github.com/rmatsum836/871c707e55ae985616b026087525fed9
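For reference, a read/write round-trip property could look roughly like this (a sketch only, not the gist contents; it assumes `mb.Compound`, `Compound.save`, and `mb.load` round-trip `.xyz` files, and the particle counts, coordinate range, and tolerance are arbitrary choices):

```python
# Sketch of a read/write round trip (not the gist contents). Assumes
# mb.Compound, Compound.save, and mb.load round-trip .xyz files; the
# particle count, coordinate range, and tolerance are arbitrary choices.
import os
import tempfile

import mbuild as mb
import numpy as np
import hypothesis.strategies as st
from hypothesis import given, settings
from hypothesis.extra.numpy import arrays

positions = arrays(
    dtype=float,
    shape=st.tuples(st.integers(min_value=1, max_value=10), st.just(3)),
    elements=st.floats(min_value=-5.0, max_value=5.0,
                       allow_nan=False, allow_infinity=False),
)

@settings(deadline=None, max_examples=25)
@given(pos=positions)
def test_xyz_roundtrip(pos):
    parent = mb.Compound()
    for xyz in pos:
        parent.add(mb.Compound(name="C", pos=xyz))
    with tempfile.TemporaryDirectory() as tmpdir:
        filename = os.path.join(tmpdir, "roundtrip.xyz")
        parent.save(filename, overwrite=True)
        reloaded = mb.load(filename)
    # Writing a file and reading it back should preserve particle coordinates.
    assert np.allclose(reloaded.xyz, parent.xyz, atol=1e-5)
```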
Currently, I'm not sure how well hypothesis integrates with pytest. This is my biggest question moving forward.
@rmatsum836 pytest works fine with hypothesis. If you want to see some examples, I use hypothesis extensively in the coxeter library I maintain. There are a couple of subtle gotchas to be aware of (for example, whether fixtures can be reused across multiple hypothesis examples; see this issue for background if you're curious), but all of that can generally be worked around, and the resulting tests are very helpful IMO.
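For example, here is a minimal sketch of the fixture gotcha (the fixture and test names are made up; suppressing the `function_scoped_fixture` health check requires a reasonably recent hypothesis):

```python
# Minimal sketch of the fixture gotcha: a function-scoped pytest fixture is
# built once per test function, not once per hypothesis example, so state
# leaks across examples. Newer hypothesis versions flag this with the
# function_scoped_fixture health check unless you explicitly opt out.
import pytest
import hypothesis.strategies as st
from hypothesis import HealthCheck, given, settings

@pytest.fixture
def scratch_list():
    # Created once per test function; every hypothesis example sees the same object.
    return []

@settings(suppress_health_check=[HealthCheck.function_scoped_fixture])
@given(value=st.integers())
def test_fixture_is_shared_across_examples(scratch_list, value):
    scratch_list.append(value)
    # The list keeps growing across examples because the fixture is never reset.
    assert len(scratch_list) >= 1
```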
Guessing this never ended up being implemented. It's still at the top of the wishlist, though.