NavigaTUM
[Feature] Continuous Quality Tests
Is your feature request related to a problem? Please describe.
Currently, our process involves a lot of manually cloning, checking that everything works, and letting checks run. This could be automated.
This issue is only concerned with the search result performance and quality tests.
Describe the solution you'd like
- a GitHub action which runs our quality tests against both the current local version and the one we have deployed. The results should then be diffed.
- If a query is found to be significantly slower/faster, to score better/worse, or to return different test data, the results should be visually hidden in the comment (via a `details` tag) and include the current highlighting.
- If possible, these should be diffed in a way that makes it obvious what has changed.
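The diffing step could be sketched roughly like this. This is only a minimal sketch: the result format (a `query -> {"score": ...}` mapping) and the significance threshold are assumptions, not the actual output of our quality tests.

```python
import json


def diff_results(deployed: dict, local: dict, threshold: float = 0.05) -> str:
    """Compare two quality-test result sets and render a markdown comment.

    Both inputs are assumed to map a search query to {"score": float}.
    Significant changes are collapsed behind a <details> tag so the
    PR comment stays compact.
    """
    lines = []
    for query in sorted(set(deployed) | set(local)):
        old, new = deployed.get(query), local.get(query)
        if old is None or new is None:
            # query only exists in one of the two runs -> test data changed
            lines.append(f"| `{query}` | {'added' if old is None else 'removed'} |")
            continue
        delta = new["score"] - old["score"]
        if abs(delta) >= threshold:
            lines.append(
                f"| `{query}` | score {old['score']:.2f} -> {new['score']:.2f} ({delta:+.2f}) |"
            )
    if not lines:
        return "No significant changes in search quality."
    table = "\n".join(["| query | change |", "| --- | --- |"] + lines)
    return (
        f"<details><summary>{len(lines)} significant change(s)</summary>\n\n"
        f"{table}\n\n</details>"
    )
```

Unchanged queries are dropped entirely, which should make it obvious at a glance what actually changed between the two runs.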
Describe alternatives you've considered
Sticking with the current process.
Additional context
- This issue is not a high priority, as it only represents a small workflow improvement.
- We already have API tests which check the API for adherence to the OpenAPI schema. Some of that code might be useful here.
- The API tests can be run by executing https://github.com/TUM-Dev/navigatum/blob/main/server/test/search_test.py. During the run, a file is generated; this may be interesting for automating this step. If you know a better way to quality-test our API, we are very open to suggestions, as the current workflow is quite clunky.
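A CI job could wrap the existing script along these lines. Note that the `--base-url`/`--output` flags are hypothetical: `search_test.py` would need to be parameterised this way first, and the `runner` argument is only there so the wrapper can be exercised without a live deployment.

```python
import json
import subprocess
import tempfile
from pathlib import Path


def run_quality_tests(base_url: str, runner=subprocess.run) -> dict:
    """Run server/test/search_test.py against one deployment and parse the
    result file it writes.

    ASSUMPTION: the --base-url and --output flags do not exist yet; the
    script would have to grow them for this wrapper to work. `runner` is
    injectable (defaults to subprocess.run) so tests can stub it out.
    """
    with tempfile.TemporaryDirectory() as tmp:
        out_file = Path(tmp) / "results.json"
        runner(
            [
                "python", "server/test/search_test.py",
                "--base-url", base_url,
                "--output", str(out_file),
            ],
            check=True,
        )
        return json.loads(out_file.read_text())
```

The GitHub action would then call this twice, once for the PR build and once for the deployed instance, and feed both result sets into the diffing step.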