UoM-WAM-Spam
Calculate tentative mark from updated WAM
Like most, I check my WAM incessantly to figure out how I might have performed in a subject, so why not extend this script to do it for you?
I was thinking of creating a simple pull request to implement this, but it looks like it would be best implemented as an extension of the rich results feature.
In the meantime, I think I might write a simple bolt-on to the master branch to achieve this. The only real problem I can see is the possibility of more than one subject's results updating at the same time, and how to account for that across multiple WAM updates.
I totally agree. This is something I also considered when I originally thought of making this script, and I think you are right that it belongs as part of the rich results feature. But that might take me a while to get around to, so an extension to the current state of the master branch makes sense in the meantime.
I'm taking another look at rich-results these days. I'll see if I get a chance to add this too!
Edit: The rich results feature is now in master, and the messages.py script will be the appropriate place to perform hidden result inference.
I'm probably going to leave this for another semester or two at this point; all of my results are out, and I suspect this is also the case for most users. But I just pushed some draft code on a new branch, guess-missing-results, at the bottom of messages.py. Feel free to take a look and/or work on a PR, which I would be happy to review and work to include.
Example (with made-up results):
```python
>>> results = {
...     'wam': '92.667',
...     'results': [
...         {'mark': '90', 'credits': '12.5'},
...         {'mark': '93', 'credits': '12.5'}
...     ]
... }
>>> messages.resolve_wam(results)
average of published results differs from published wam
could be 12.5 more credit points with an average mark of 95.0
could be 50 more credit points with an average mark of 93.25
```
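For context, here is a minimal sketch of the arithmetic behind that kind of output. The function name, signature, and candidate enumeration are my own assumptions for illustration, not the actual draft on the guess-missing-results branch: each candidate amount of unpublished credit turns the WAM equation into one linear equation in the unknown average mark.

```python
def guess_hidden_results(results, increment=12.5, max_extra=100.0):
    """Yield (extra_credits, required_average) pairs consistent with the
    published WAM. `results` matches the dict shape in the example above.
    Hypothetical sketch, not the code on the guess-missing-results branch."""
    wam = float(results['wam'])
    published = [(float(r['mark']), float(r['credits']))
                 for r in results['results']]
    credits = sum(c for _, c in published)
    weighted = sum(m * c for m, c in published)
    extra = increment
    while extra <= max_extra:
        # wam == (weighted + avg*extra) / (credits + extra); solve for avg
        avg = (wam * (credits + extra) - weighted) / extra
        if 0.0 <= avg <= 100.0:  # keep only marks that are achievable
            yield extra, round(avg, 2)
        extra += increment
```

With the made-up results above, this yields (12.5, 95.0) among its candidates; the real implementation would presumably also decide which candidates are worth reporting.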
Now I think the biggest issue will be making the calculation logic robust to all the sorts of weird things that can go on: subjects with different amounts of credit points (some as low as 3.125); rounding that takes place in the published WAM; and so on.
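On the rounding point: if the published WAM is a rounded figure (an assumption about how the portal displays it), the exact equation above becomes an interval constraint. A small sketch of that idea:

```python
def wam_interval(published_wam: str):
    """Range of true WAMs that could round to the published string,
    assuming round-half behaviour at the displayed precision."""
    places = len(published_wam.split('.')[1]) if '.' in published_wam else 0
    half_step = 0.5 * 10 ** -places
    value = float(published_wam)
    return value - half_step, value + half_step

def required_average_range(published_wam, credits, weighted, extra):
    """Averages the hidden results could have, given the rounding slack."""
    lo, hi = wam_interval(published_wam)
    avg_lo = (lo * (credits + extra) - weighted) / extra
    avg_hi = (hi * (credits + extra) - weighted) / extra
    return avg_lo, avg_hi
```

A candidate credit amount is then feasible whenever this range overlaps [0, 100], which is more forgiving than demanding an exact match against a rounded figure.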
I would be happy to eventually include this feature, even if it is experimental, provided there is a configuration option to opt out and there is sufficient indication to the user in the messaging that the calculations are just a rough guess at possible missing marks.
I'm opening this up for anyone to work on this exam season. I'll be happy to help review. Some suggestions:
- Start with the current state of the missing-results branch / draft implementation at the bottom of messages.py.
- The inference logic might make sense to separate into its own module.
- Think carefully about the different cases mentioned above, including rounding.
- Include configuration options to (a) opt in to the feature, (b) tune the smallest credit-point increment (default: 12.5), and (c) anything else that might be useful to someone. A sketch of possible options follows this list.
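On that last point, here is a hypothetical example of what the options could look like; all names are invented for illustration and should be adapted to however the script's configuration is actually structured:

```python
# Illustrative config constants, not existing variables in the project.

# (a) experimental feature, so off by default; users opt in explicitly
GUESS_HIDDEN_RESULTS = False

# (b) smallest credit-point increment to consider for unpublished
#     subjects (most are 12.5 points, but some are as small as 3.125)
CREDIT_INCREMENT = 12.5

# (c) for example, a cap on how much unpublished credit to enumerate
MAX_HIDDEN_CREDITS = 100.0
```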