
Enhance test suite

Open · gi0baro opened this issue 3 years ago · 2 comments

gi0baro · Apr 16 '22 12:04

I'm interested in contributing to this issue. Could you provide some more details on what enhancements are needed for the test suite? Specifically:

  1. What areas lack coverage?
  2. Are you looking for more unit tests, integration tests, or both?
  3. Are there any known edge cases that need better testing?
  4. Any specific goals or metrics you're aiming for (e.g., code coverage percentage)?
  5. What would be your top priority for improvement?

Thanks!

robhudson · Jun 14 '24 18:06

@robhudson thank you for your interest!

Regarding your questions (I will probably also update the description with a few of the following points):

  • probably the major theme around tests is making sure every protocol has all of its features tested, for example:
    • ASGI has a test for pathsend, but RSGI doesn't have any test for file responses (a sketch of such a test follows this list)
    • RSGI has quite a few tests on body iterators, while ASGI and WSGI don't
  • covering encoding/decoding on all protocols with tests would also be nice; at the very least it would make issues like #325 immediately evident (the second sketch below illustrates the idea)
  • given Granian implements its own awaitable classes, it would be nice to have tests on those, at least for edge cases like cancellations and exceptions (see the third sketch below)
  • in terms of unit vs integration tests: unit tests would be nicer, but that would probably mean implementing them on the Rust side of things; also, the way the code is structured today in Granian (it expects a lot of things to already be in place, both on the web side, like the request, and on the Python side, like the application) makes this quite hard to achieve, unless we refactor a lot of Rust code to expose additional interfaces. So I would say that, at the moment, integration tests like the ones already in place are the way to go
  • I'm personally not a big fan of coverage percentages, especially when that number is not backed by other guidelines or strategies. On the other hand, coverage might help identify parts of the code which are not tested (again, not with the aim of having every line tested, but just as a metric to pay more attention to specific parts of the code), but I'm not sure how that should be achieved in a mixed code base like this one, where Rust and Python code coexist. I'm open to any investigation in that regard.
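
To make the first point concrete, here is a rough sketch of what an RSGI file response test could look like. Everything here is hypothetical scaffolding rather than actual test harness code: the `rsgi_server` fixture, the fixture file path, and the pytest-asyncio setup are assumptions; only the `protocol.response_file(status, headers, path)` call comes from the RSGI spec.

```python
import pathlib

import httpx
import pytest

# Hypothetical fixture file shipped alongside the tests.
FIXTURE = pathlib.Path(__file__).parent / "fixtures" / "data.bin"


async def file_app(scope, protocol):
    # RSGI app responding with the contents of a file on disk.
    protocol.response_file(
        200,
        [("content-type", "application/octet-stream")],
        str(FIXTURE),
    )


@pytest.mark.asyncio
async def test_rsgi_file_response(rsgi_server):  # `rsgi_server` is a hypothetical fixture
    async with rsgi_server(file_app) as port:
        async with httpx.AsyncClient() as client:
            res = await client.get(f"http://localhost:{port}/")
    assert res.status_code == 200
    assert res.content == FIXTURE.read_bytes()
```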
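
Similarly, an encoding round-trip test could look like the following sketch (again, the `asgi_server` fixture is hypothetical): the idea is simply to push non-ASCII data through the request and assert the application observes it unchanged.

```python
import httpx
import pytest


async def echo_query_app(scope, receive, send):
    # Echo the raw query string back so the test can compare bytes end to end.
    assert scope["type"] == "http"
    await send({"type": "http.response.start", "status": 200, "headers": []})
    await send({"type": "http.response.body", "body": scope["query_string"]})


@pytest.mark.asyncio
async def test_asgi_query_string_encoding(asgi_server):  # hypothetical fixture
    async with asgi_server(echo_query_app) as port:
        async with httpx.AsyncClient() as client:
            # "été" percent-encoded; per the ASGI spec, `query_string` stays
            # percent-encoded bytes, so the echo should match exactly.
            res = await client.get(f"http://localhost:{port}/?q=%C3%A9t%C3%A9")
    assert res.content == b"q=%C3%A9t%C3%A9"
```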
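
And for the awaitable edge cases, something along these lines could exercise cancellation (same disclaimer: the fixture is hypothetical, and whether the custom awaitables actually behave this way is exactly what the test would verify).

```python
import asyncio

import httpx
import pytest


async def cancelling_app(scope, receive, send):
    # The first receive() resolves with the (empty) request body; the second
    # would wait for http.disconnect, so it stays pending long enough for us
    # to cancel it deterministically.
    await receive()
    task = asyncio.ensure_future(receive())
    await asyncio.sleep(0)
    task.cancel()
    try:
        await task
        outcome = b"not cancelled"
    except asyncio.CancelledError:
        outcome = b"cancelled"
    await send({"type": "http.response.start", "status": 200, "headers": []})
    await send({"type": "http.response.body", "body": outcome})


@pytest.mark.asyncio
async def test_receive_cancellation(asgi_server):  # hypothetical fixture
    async with asgi_server(cancelling_app) as port:
        async with httpx.AsyncClient() as client:
            res = await client.get(f"http://localhost:{port}/")
    assert res.content == b"cancelled"
```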

Given all the above points, I don't have any specific top priority; my general feeling is that the investment should be made on tests preventing issues like #325 from happening in the first place, rather than on increasing the overall coverage.

gi0baro · Jun 17 '24 12:06