Christian Federmann
We need to be able to create actual bad references ("bad refs"), similar to what Yvette's code did for WMT17. It might be easy to simply integrate her scripts?
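A minimal sketch of how such bad references could be generated, assuming the WMT-style approach of replacing a random span of each reference with a span of equal length copied from another sentence; the function and parameter names below are placeholders, not taken from Yvette's scripts.

```python
import random

def make_bad_reference(reference, corpus, span_ratio=0.3, rng=random):
    """Degrade `reference` by swapping in a span copied from another sentence."""
    tokens = reference.split()
    if not tokens:
        return reference
    span_len = max(1, int(len(tokens) * span_ratio))

    # Only consider donor sentences long enough to supply a replacement span.
    donors = [s.split() for s in corpus if s != reference and len(s.split()) >= span_len]
    if not donors:
        return reference  # nothing suitable to copy from; leave untouched
    donor = rng.choice(donors)

    # Choose where to cut in the reference and where to copy from in the donor.
    ref_start = rng.randint(0, len(tokens) - span_len)
    donor_start = rng.randint(0, len(donor) - span_len)
    tokens[ref_start:ref_start + span_len] = donor[donor_start:donor_start + span_len]
    return ' '.join(tokens)

if __name__ == '__main__':
    corpus = [
        'the quick brown fox jumps over the lazy dog',
        'a completely unrelated sentence about machine translation',
        'bad references should still look superficially fluent',
    ]
    print(make_bad_reference(corpus[0], corpus))
```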
We need a better way to view progress for campaigns. Based on required annotations per system, it should be fairly straightforward to render a convergence graph for a given campaign.
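A minimal sketch of such a convergence graph, assuming we can query completed annotation counts per day and know the required total per campaign; the data access and output file naming are hypothetical and would need to be wired to the actual campaign/result models.

```python
from datetime import date
import matplotlib.pyplot as plt

def plot_convergence(daily_counts, required_total, campaign_name='campaign'):
    """daily_counts: list of (date, annotations completed on that day) tuples."""
    days = [day for day, _ in daily_counts]
    cumulative, total = [], 0
    for _, count in daily_counts:
        total += count
        cumulative.append(total)

    plt.figure()
    plt.plot(days, cumulative, marker='o', label='completed annotations')
    plt.axhline(required_total, linestyle='--', label='required annotations')
    plt.xlabel('date')
    plt.ylabel('annotations')
    plt.title('Convergence for {0}'.format(campaign_name))
    plt.legend()
    plt.savefig('{0}-convergence.png'.format(campaign_name))

if __name__ == '__main__':
    demo = [(date(2018, 1, d), 40 + 5 * d) for d in range(1, 11)]
    plot_convergence(demo, required_total=600, campaign_name='demo-campaign')
```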
It should be possible from the Django admin backend to retire a campaign. This should also retire any associated objects such as tasks, items, and maybe results. The corresponding campaign...
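A minimal sketch of a retire action in the Django admin, assuming a `Campaign` model with an `activated` flag and `tasks`/`items` reverse relations on campaigns and tasks; the actual schema and field names may differ.

```python
from django.contrib import admin

from .models import Campaign  # assumed import path

def retire_campaigns(modeladmin, request, queryset):
    """Deactivate the selected campaigns and their associated tasks and items."""
    for campaign in queryset:
        # Assumed reverse relations and flags; adjust to the real related_name values.
        for task in campaign.tasks.all():
            task.items.all().update(activated=False)
        campaign.tasks.all().update(activated=False)
        campaign.activated = False
        campaign.save()
retire_campaigns.short_description = 'Retire selected campaigns (and their tasks/items)'

class CampaignAdmin(admin.ModelAdmin):
    actions = [retire_campaigns]

admin.site.register(Campaign, CampaignAdmin)
```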
If input data is exactly identical (= same JSON, same IDs, same campaign) then we should not create redundant task instances. This only pollutes the database.
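A minimal sketch of duplicate detection, assuming we store a stable digest of the canonicalised task JSON per campaign; the `json_digest` field and the `create_task_if_new()` helper are hypothetical.

```python
import hashlib
import json

def task_digest(task_data, campaign_id):
    """Canonicalise the task JSON and campaign ID into a stable SHA-1 digest."""
    canonical = json.dumps(task_data, sort_keys=True, separators=(',', ':'))
    payload = '{0}:{1}'.format(campaign_id, canonical).encode('utf-8')
    return hashlib.sha1(payload).hexdigest()

def create_task_if_new(task_data, campaign, task_model):
    """Create a task instance only if no identical one exists for this campaign."""
    digest = task_digest(task_data, campaign.id)
    existing = task_model.objects.filter(campaign=campaign, json_digest=digest)
    if existing.exists():
        return existing.first(), False  # reuse instead of polluting the database
    # The remaining task payload fields would be filled in here as usual.
    task = task_model.objects.create(campaign=campaign, json_digest=digest)
    return task, True
```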
We have seen spaces, non-ASCII characters, and symbols result in server errors during signup POST submission. Fix this and give a better error message.
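A minimal sketch of stricter signup validation that rejects spaces, non-ASCII characters and symbols with a readable form error instead of a server error; the form and field names are assumptions about the signup view, not the existing code.

```python
from django import forms
from django.core.validators import RegexValidator

# Restrict user names to ASCII letters, digits and a few safe separators.
username_validator = RegexValidator(
    regex=r'^[a-zA-Z0-9_\-]+$',
    message='User names may only contain ASCII letters, digits, "_" and "-".',
)

class SignUpForm(forms.Form):
    username = forms.CharField(max_length=30, validators=[username_validator])
    email = forms.EmailField(required=False)

# Usage inside the signup view: invalid characters now surface as form errors
# instead of a 500 response, e.g.
#   form = SignUpForm(data=request.POST)
#   if not form.is_valid():
#       ...re-render the signup page with form.errors...
```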
Compute inter-annotator agreement (IAA) for the number of annotators _or coders_ (C) which maximises the number of items (I) that have been evaluated by the respective subset of coders...
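A minimal sketch of selecting that coder subset, assuming the input is a mapping from item ID to the set of coders who annotated it; breaking ties in favour of more covered items, then larger subsets, is an assumption about the intended behaviour.

```python
from itertools import combinations

def best_coder_subset(item_coders, min_coders=2):
    """Return the coder subset whose members jointly annotated the most items."""
    all_coders = sorted(set().union(*item_coders.values()))
    best = (0, 0, ())  # (items covered, subset size, coder subset)
    for size in range(min_coders, len(all_coders) + 1):
        for subset in combinations(all_coders, size):
            covered = sum(1 for coders in item_coders.values()
                          if set(subset) <= coders)
            if (covered, size) > best[:2]:
                best = (covered, size, subset)
    return best  # IAA (e.g. kappa/alpha) would then be computed on best[2] only

if __name__ == '__main__':
    items = {
        'item-1': {'c1', 'c2', 'c3'},
        'item-2': {'c1', 'c2'},
        'item-3': {'c2', 'c3'},
    }
    print(best_coder_subset(items))
```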
Instead of having various file-level scripts for shell admin ops, add custom management commands and clean up things... See the Django documentation here:
- https://docs.djangoproject.com/en/dev/howto/custom-management-commands/
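A minimal sketch of one such command, following the linked documentation; the command name and the status logic are placeholders for the actual admin operations.

```python
# File: <app>/management/commands/dumpcampaignstatus.py  (placeholder name)
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = 'Prints basic status information for one or more campaigns.'

    def add_arguments(self, parser):
        parser.add_argument('campaign_names', nargs='+', type=str)

    def handle(self, *args, **options):
        for name in options['campaign_names']:
            # The real implementation would query the Campaign model here.
            self.stdout.write('Campaign {0}: status lookup not implemented yet'.format(name))
```

It would then be invoked as `python manage.py dumpcampaignstatus <campaign-name>`, and the old shell scripts could be removed.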
Use a random sample of 10 HITs per WMT14 language pair and allow infinite collection of annotation results on these... Create a DEMO group for demo users.
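A minimal sketch of the demo setup, assuming HITs are grouped by WMT14 language pair in a plain dict and that demo users are collected in a Django auth `Group` named `DEMO`; the data structures and helper names are hypothetical, and the code is meant to run inside the configured Django project.

```python
import random

from django.contrib.auth.models import Group

def sample_demo_hits(hits_by_language_pair, per_pair=10, rng=random):
    """Pick up to `per_pair` HITs per language pair for unlimited demo annotation."""
    demo_hits = {}
    for pair, hits in hits_by_language_pair.items():
        demo_hits[pair] = rng.sample(hits, min(per_pair, len(hits)))
    return demo_hits

def ensure_demo_group():
    """Create the DEMO group for demo users if it does not exist yet."""
    group, _created = Group.objects.get_or_create(name='DEMO')
    return group
```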
Add code to properly randomize `SECRET_KEY` inside `settings.py`, as most users are unlikely to do this themselves otherwise.
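A minimal sketch for `settings.py`, using Django's `get_random_secret_key()` and persisting the generated key to a local file so it stays stable across restarts; the secret-file location is an assumption and should live outside version control.

```python
import os

from django.core.management.utils import get_random_secret_key

BASE_DIR = os.path.dirname(os.path.abspath(__file__))
SECRET_FILE = os.path.join(BASE_DIR, 'secret_key.txt')  # assumed location

def load_or_create_secret_key(path=SECRET_FILE):
    """Read the key from disk, creating a fresh random one on first run."""
    if os.path.exists(path):
        with open(path) as handle:
            return handle.read().strip()
    secret_key = get_random_secret_key()
    with open(path, 'w') as handle:
        handle.write(secret_key)
    return secret_key

SECRET_KEY = load_or_create_secret_key()  # replaces the hard-coded default
```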