
Feature/test cases

Open samurato opened this issue 10 months ago • 2 comments

Summary

Added test cases to validate the requirements mentioned in https://github.com/moest-np/center-randomize/issues/4:

  • Taking a school's examinee count into account, distribute examinees so that, as far as possible, no more than 100–200 examinees from one school end up at the same center.
  • A school must not be assigned as its own exam center.
  • The centers of two schools must not be swapped with each other; that is, if the students of one school are assigned to another school as their center, that second school's students must not be assigned to the first school.
  • Students must not be assigned to a center identified as being under the same ownership/management as their school.
  • If assigning a school to a particular center caused problems in the past, that center must not be repeated for that school.
  • Each run of the center-assignment program must be randomized so that it produces different results.

Pull request type

Please try to limit your pull request to one type, and submit multiple pull requests if needed.

Please tick the type of change your PR introduces:

  • [ ] Bugfix
  • [X] Feature
  • [ ] Code style update (formatting, renaming)
  • [ ] Refactoring (no functional changes, no API changes)
  • [ ] Build related changes
  • [ ] Documentation content changes
  • [ ] Other (please describe):

What is the current behaviour?

Currently there are no tests to validate that a feature submission passes the acceptance criteria.

Please describe the behaviour that needs to be modified or linked to a relevant issue. This pull request adds a utility in `utils` to parse and process CSV files, and declares test cases to tackle the problem. Either `pytest` or a plain `python test_cases.py` needs to be run to ensure that new submissions pass the acceptance criteria.

Issue Number: #4

Please describe the changes or updates being made by this pull request.

  • Added a custom TSV parser class in the `utils` folder to retrieve information from the input and results folders
  • Added unit tests to validate that the results are as expected
  • Tests need to be run multiple times, as the results are non-deterministic, to get the best desired outcome of this program
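A sketch of what such a TSV parser class might look like; the class name, method names, and file layout here are illustrative assumptions, not the PR's actual API:

```python
import csv
from pathlib import Path

class TSVParser:
    """Hypothetical sketch of a TSV reader for the input/ and results/ folders."""

    def __init__(self, path):
        self.path = Path(path)

    def rows(self):
        """Yield each data row as a dict keyed by the header line."""
        with self.path.open(newline="", encoding="utf-8") as f:
            yield from csv.DictReader(f, delimiter="\t")

    def column(self, name):
        """Collect a single named column across all rows."""
        return [row[name] for row in self.rows()]
```

Because the allocation output is randomized, assertions built on such a parser should check invariants (e.g. "no school is its own center") rather than exact values.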

Checklist

Please review and complete the following checklist before submitting your pull request:

  • [X] I have tested the changes locally and they work as intended.
  • [X] I have provided a detailed description of the changes made.
  • [X] I have reviewed the formatting and ensured it follows the project's style guidelines.
  • [X] I have assigned the appropriate labels to the pull request.
  • [X] I have added necessary documentation or updated existing documentation for the changes made.
  • [X] I have addressed code review feedback and resolved any conflicts with the main branch.

Other information

Include screenshots of the component before and after the change. As per the image, the test fails if any of the test cases outlined in the issue fails.

image

samurato avatar May 01 '24 15:05 samurato

@sapradhan Could you please review my new updates?

samurato avatar May 02 '24 03:05 samurato

Looks good to me; much-needed testing. Is there a way to repeat the test, say 10-20 times, and regenerate the output as well?

sapradhan avatar May 02 '24 19:05 sapradhan

Yes, absolutely. In my fork I have integrated GitHub Actions to run this test 10 times and produce output. Link here: https://github.com/samurato/center-randomize/actions

I am not sure if Actions will run in the main repo; if it will, I will modify the workflow and push it to the main repo.

samurato avatar May 03 '24 01:05 samurato


I will probably create a new issue to implement this later if this PR is merged.

samurato avatar May 03 '24 11:05 samurato

A bit skeptical about `test_undesired_cscode_scode_pair`: it fails with a version of prefs.tsv.

I don't think the way `get_scode_cscode_id` disregards scode/cscode is correct; at the least, a problematic history is not bidirectional.

Skipping that test but merging, as a lot of PRs are blocked because of the lack of tests.

sumanashrestha avatar May 04 '24 05:05 sumanashrestha


Understood. I will work on more optimised code to tackle those test cases.

samurato avatar May 04 '24 09:05 samurato


Understood. I will work on more optimised code to tackle those test cases. @sapradhan @sumanashrestha

In my previous commits I had added individual test cases to verify each less-preferred allocation, assuming each reason was a unique type.

But I was made aware that the reason column is an open text field, so I generalised the logic to flag any combination of less-preferred allocations as a failure. Can we assume the reason column is a classifier, or should there be a separate reason-type column?

image

That way we could customise tests for each reason.

samurato avatar May 04 '24 13:05 samurato

The test should simply check, for each row in the output file, that if there is a preference score, it is greater than PREF_CUTOFF.

sumanashrestha avatar May 05 '24 13:05 sumanashrestha