Robot status or event log displayed on Backend page
Possible status messages include 'pending training', 'new robot trained, improved by 5%', 'reclassifying 50 images'.
At minimum, this could be a single status message. Or we could have an event log with dates for each event.
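For concreteness, something like this could back the log. This is just a rough sketch; `BackendEvent`, its fields, and the `'images.Source'` app label are all made up for illustration, not existing coralnet code:

```python
# Hypothetical sketch of an event-log model for the backend page.
# BackendEvent and its fields are placeholders, not actual coralnet code;
# the 'images.Source' reference assumes the Source model's app label.
from django.db import models


class BackendEvent(models.Model):
    """One dated entry in a source's backend event log."""
    source = models.ForeignKey('images.Source', on_delete=models.CASCADE)
    date = models.DateTimeField(auto_now_add=True)
    # e.g. 'pending training', 'new robot trained, improved by 5%',
    # 'reclassifying 50 images'
    message = models.CharField(max_length=500)

    class Meta:
        ordering = ['-date']  # newest events first on the Backend page
```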
Just want to make sure this is mentioned somewhere: We're often asked why no new robot has been trained for a particular source in a while, even after annotating 1.1x more images. Usually, the reason is that a robot WAS trained, but wasn't saved due to a lack of accuracy improvement over the previous robot. We should start communicating this type of event to the source owners somehow. One way could be using this event log.
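To make this concrete, the event could be recorded at the point where the training task decides to discard the new robot. A rough sketch, assuming the `BackendEvent` model above; `finish_training` and the improvement threshold value are hypothetical names, not the actual training task code:

```python
# Hypothetical sketch: record an event when a freshly trained robot is
# discarded. finish_training and ACCURACY_IMPROVEMENT_THRESHOLD are made-up
# names; the real threshold logic lives in the actual training task.
ACCURACY_IMPROVEMENT_THRESHOLD = 1.01  # assumed: new robot must beat old by 1%


def finish_training(source, new_clf, old_clf):
    if new_clf.accuracy >= old_clf.accuracy * ACCURACY_IMPROVEMENT_THRESHOLD:
        new_clf.save()
        BackendEvent.objects.create(
            source=source,
            message='new robot trained, improved by {:.0f}%'.format(
                100 * (new_clf.accuracy / old_clf.accuracy - 1)))
    else:
        # This is the case source owners currently never hear about.
        BackendEvent.objects.create(
            source=source,
            message='robot trained, but not saved: accuracy {:.0f}% did not '
                    'improve on the current robot ({:.0f}%)'.format(
                        100 * new_clf.accuracy, 100 * old_clf.accuracy))
```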
Yeah. That was the whole idea behind setting that up. I just never ended up adding any event messages. :(
We'll just make this a higher-priority Beta 3 task, I suppose.
Actually, there's something else I realized regarding clarity of the backend info. If you check out this source: https://coralnet.ucsd.edu/source/1656/
- The only valid classifier has 1653 images. There are currently 8049 images in the source.
- The valid classifier is shown as having 70% accuracy, but when we try to train a new classifier, it says `pc_accs` for the valid classifier is 85%. So the new images (or maybe changed annotations) really boosted the accuracy of the valid classifier.
- The confusion matrix shows n = 10144. That's based on 1653 images, since 1653 images / 8 * 50 points per image = around 10k (quick sanity check below).
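Quick sanity check on that n, assuming the matrix is computed on a 1/8 validation split at 50 points per image; the 1/8 split is my reading of the numbers, not confirmed from the code:

```python
# Sanity check on the confusion matrix's n. The 1/8 validation split is an
# assumption inferred from the numbers, not confirmed from coralnet's code.
images_at_training_time = 1653
points_per_image = 50
val_fraction = 1 / 8
print(images_at_training_time * val_fraction * points_per_image)
# -> 10331.25, in the same ballpark as the reported n = 10144
```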
So, perhaps the valid classifier is indeed still the best one, but the stats shown (overall accuracy and CM) are the stats from the dataset at the time of original training. Is this by design, or is it something that should be changed? (Or did any of this change in beta2-rollout?)
Hey @StephenChan. You bring up a good point. I suppose I wasn't expecting several robots in a row to fail. And perhaps users will find it weird that their robot's performance suddenly changes without a new one being added? But in principle I agree that what you describe is the right way to go. We'd probably have to add one of those event messages at the same time to communicate what happened.
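A sketch of how that refresh-plus-event could look, reusing the hypothetical `BackendEvent` from above; `refresh_current_classifier_stats` and the `fresh_eval` object are made up for illustration:

```python
# Hypothetical sketch: when a candidate robot fails to beat the current one,
# update the current robot's displayed stats to the fresh evaluation and
# leave an event message explaining why the numbers jumped.
def refresh_current_classifier_stats(source, current_clf, fresh_eval):
    current_clf.accuracy = fresh_eval.accuracy
    current_clf.confusion_matrix = fresh_eval.confusion_matrix
    current_clf.save()
    BackendEvent.objects.create(
        source=source,
        message='current robot re-evaluated on the latest annotations: '
                'accuracy is now {:.0f}%'.format(100 * fresh_eval.accuracy))
```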