
Calculate tentative mark from updated WAM

tarantoj opened this issue 5 years ago • 5 comments

Like most people, I check my WAM incessantly to figure out how I might have performed on a subject. Why not extend this script to do it for you?

I was thinking of creating a simple pull request to implement this, but it looks like it might be best to implement it as an extension of the rich results feature.

In the meantime, I think I might write a simple bolt-on to the master branch to achieve this. The only real problem I foresee is that more than one subject could update at the same time, and the calculation would need to account for that across multiple WAM updates.

tarantoj avatar Nov 05 '19 00:11 tarantoj

I totally agree. This is something I also considered when I originally thought of making this script, and I think you are right that it belongs as part of the rich results feature. But that might take me a while to get around to, so an extension to the current state of the master branch would make sense in the meantime.

matomatical avatar Nov 17 '19 03:11 matomatical

I'm taking another look at rich-results these days. I'll see if I get a chance to add this too!

Edit: The rich results feature is now in master, and the messages.py script will be the appropriate place to perform hidden result inference.

matomatical avatar Nov 24 '19 07:11 matomatical

I'm probably going to leave this for another semester or two at this point; all of my results are out, and I suspect this is the case for most users too. But I just pushed some draft code on the new guess-missing-results branch, at the bottom of messages.py. Feel free to take a look and/or work on a PR, which I would be happy to review and work to include.

Example (with made-up results):

```python
>>> results = {
...     'wam': '92.667',
...     'results': [
...         {'mark': '90', 'credits': '12.5'},
...         {'mark': '93', 'credits': '12.5'}
...     ]
... }
>>> messages.resolve_wam(results)
average of published results differs from published wam
could be 12.5 more credit points with an average mark of 95.0
could be 50 more credit points with an average mark of 93.25
```
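Roughly, the idea is something like the following sketch (this is a reconstruction from the example above, not the actual draft on the branch; the `increment` and `max_extra` parameters are illustrative):

```python
# Sketch of the hidden-result inference (NOT the actual draft code;
# the increment and max_extra parameters are illustrative choices).

def guess_missing_results(results, increment=12.5, max_extra=50):
    """For each candidate amount of unpublished credit, solve for the
    average mark those hidden results would need so that the published
    WAM matches the weighted average of all results."""
    wam = float(results['wam'])
    credits = sum(float(r['credits']) for r in results['results'])
    weighted = sum(float(r['mark']) * float(r['credits'])
                   for r in results['results'])
    guesses = []
    extra = increment
    while extra <= max_extra:
        # solve: wam == (weighted + mark * extra) / (credits + extra)
        mark = (wam * (credits + extra) - weighted) / extra
        if 0 <= mark <= 100:  # discard impossible average marks
            guesses.append((extra, round(mark, 2)))
        extra += increment
    return guesses
```

With the made-up results above, this yields (12.5, 95.0) and (50.0, 93.25) among the candidates, matching the two lines of example output.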

matomatical avatar Dec 02 '19 04:12 matomatical

Now I think the biggest issue will be making the calculation logic robust to all the weird things that can go on: subjects worth different amounts of credit points (as low as 3.125), rounding in the published WAM, and so on.
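One way the rounding could be handled, for the record: treat the published WAM string as rounded to its displayed precision, so the true WAM lies within half a unit of the last digit, and each inferred hidden mark becomes a range rather than a point. A sketch (the helper name and the rounding assumption are mine, not the project's):

```python
# Sketch: if the published WAM string is rounded, the true WAM lies
# within half a unit of the last displayed digit, so each inferred
# hidden mark is a range, not a single number.
# (Helper name and rounding assumption are illustrative.)

def mark_interval(wam_str, credits, weighted, extra):
    """Range of average marks `extra` hidden credit points could have,
    given a published WAM string that may itself be rounded."""
    places = len(wam_str.split('.')[1]) if '.' in wam_str else 0
    half_unit = 0.5 * 10 ** -places  # e.g. 0.0005 for '92.667'
    wam = float(wam_str)
    low = ((wam - half_unit) * (credits + extra) - weighted) / extra
    high = ((wam + half_unit) * (credits + extra) - weighted) / extra
    return low, high
```

For the made-up example (25 published credit points, weighted mark sum 2287.5), 12.5 hidden credits would need an average mark somewhere in a narrow band around 95.0 rather than exactly 95.0.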

I would be happy to eventually include this feature, even as an experimental one, provided there is a configuration option to opt out, and provided the messaging gives the user sufficient indication that the calculations are just a rough guess at possible missing marks.

matomatical avatar Dec 02 '19 04:12 matomatical

I'm opening this up for anyone to work on this exam season. I'll be happy to help review. Some suggestions:

  1. Start with the current state of the missing-results branch / draft implementation at the bottom of messages.py.
  2. The inference logic might make sense to separate as its own module.
  3. Think carefully about the different cases mentioned above, including rounding.
  4. Include configuration options to (a) opt in to the feature, (b) tune the smallest credit-point increment (default: 12.5), and (c) control anything else that might be useful to someone.
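For point 4, the options could look something like this in a configuration module (all option names here are suggestions, not existing settings in the project):

```python
# Suggested configuration options (names are illustrative, not
# existing settings in the project):

# (a) the feature is experimental, so it is off unless explicitly enabled
GUESS_MISSING_RESULTS = False

# (b) smallest credit-point increment to consider for hidden subjects
# (some subjects are worth as little as 3.125 points)
CREDIT_POINT_INCREMENT = 12.5

# (c) for example, a cap on how much unpublished credit to consider
MAX_HIDDEN_CREDIT = 50.0
```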

matomatical avatar Jun 20 '20 03:06 matomatical