autowebcompat
Analysing false predictions
After trying different models, we should be able to analyze which screenshots were misclassified. This might reveal patterns, or show that one model predicted certain screenshots correctly where another failed. This information will be useful for improving the models.
Is this being worked on? I can implement a GUI to do this.
@Trion129 I am not working on this issue and nobody else has commented yet, so you can take it up. What sort of GUI are you planning? It should just display the screenshots which were wrongly classified. And if we feel that the labeling was incorrect in the training data itself, we can remove it from labels.csv and the other CSVs (and its bounding box line as well, if #128 is merged before this).
@marco-c can you review this issue before @Trion129 takes it up?
A GUI that shows the wrongly classified screenshots sounds fine to me. It should show the screenshots, their names, and the actual and predicted labels. Let's implement this first; we can think about removing entries from the labels later, if we find the labeling is incorrect.
Or even just a CLI that tells you which screenshots were wrongly classified; then you can open the screenshots yourself.
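The CLI version could be quite small. A minimal sketch, assuming a hypothetical labels file with `name,label` rows and a predictions mapping from screenshot name to predicted label (the project's actual file layout and label format may differ):

```python
import csv

def find_misclassified(labels_path, predictions):
    """Yield (name, actual, predicted) for every wrong prediction.

    labels_path: CSV with a header row "name,label" (assumed format).
    predictions: dict mapping screenshot name -> predicted label.
    """
    with open(labels_path, newline="") as f:
        for row in csv.DictReader(f):
            name, actual = row["name"], row["label"]
            predicted = predictions.get(name)
            if predicted is not None and predicted != actual:
                yield name, actual, predicted

# Tiny demo with made-up data so the script runs standalone.
with open("example_labels.csv", "w", newline="") as f:
    f.write("name,label\nshot1.png,y\nshot2.png,n\n")
predictions = {"shot1.png": "n", "shot2.png": "n"}  # stand-in for model output
for name, actual, predicted in find_misclassified("example_labels.csv", predictions):
    print(f"{name}: actual={actual} predicted={predicted}")
```

From there you can open each printed screenshot in any image viewer by hand, which is the whole appeal of the CLI approach.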
I'll go with implementing the GUI version; I have worked with the PyQt5 library before. @sagarvijaygupta I see you use OpenCV in PR #128. How does it compare to PyQt5? Should I prefer it to reduce the scripts' dependencies?
@Trion129 since not much is involved in this GUI, you can simply use Tkinter. OpenCV should only be preferable if PR #128 is merged.
Yes, it would be preferable to reduce the number of dependencies. Use either Tkinter (no dependencies) or OpenCV (which will be a dependency after #128).
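Whichever front end is chosen, the navigation logic (step through misclassified screenshots, show name plus actual/predicted labels) can live in a plain class, keeping the Tkinter or OpenCV layer thin. A sketch with hypothetical names, not the project's actual API:

```python
class MisclassifiedBrowser:
    """Cursor over misclassified screenshots.

    items: list of (screenshot_name, actual_label, predicted_label) tuples,
    e.g. produced by comparing model output against labels.csv.
    A Tkinter front end only needs current()/next()/prev() to wire up
    an image Label and two navigation Buttons.
    """

    def __init__(self, items):
        self.items = list(items)
        self.index = 0

    def current(self):
        return self.items[self.index] if self.items else None

    def next(self):
        if self.items:
            self.index = (self.index + 1) % len(self.items)  # wrap around
        return self.current()

    def prev(self):
        if self.items:
            self.index = (self.index - 1) % len(self.items)  # wrap around
        return self.current()
```

In Tkinter the "Next" button callback would call `browser.next()`, load the returned screenshot into a `PhotoImage`, and update a text label with the actual and predicted labels; the equivalent OpenCV loop would redraw with `cv2.imshow` on a key press.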