fstroth
@FraPochetti The dashboard lib already allows for evaluating a model on new (validation) data outside of training.
I think the large memory usage is due to this line: https://github.com/airctic/icevision/blob/b9d010d6fb1964d2725d3879b616b463627b20c7/icevision/models/mmdet/common/mask/prediction.py#L122 We should switch to using *RLEs* or *EncodedRLEs*, which should save memory.
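For concreteness, a minimal sketch of that switch, assuming the predictions come back as dense per-instance masks; `masks_to_rles` is a made-up helper built on pycocotools (which, as far as I know, is what `EncodedRLEs` wraps), not icevision's actual API:

```python
# Illustrative sketch only: encode dense instance masks as COCO RLEs so we
# don't keep a full H x W array per instance in memory.
import numpy as np
from pycocotools import mask as mask_utils

def masks_to_rles(masks: np.ndarray) -> list:
    """masks: (num_instances, H, W) bool/uint8 array -> list of RLE dicts."""
    return [
        mask_utils.encode(np.asfortranarray(m.astype(np.uint8)))
        for m in masks
    ]

# Decode lazily, one instance at a time, only when a dense mask is needed:
# dense = mask_utils.decode(rles[i])  # (H, W) uint8
```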
I think the proposal from Farid is good; I would just rename `metric_from_preds` to `evaluate_preds`, so the following line `logs = inference_metric.metric_from_preds(preds, print_summary=True)` would change to `logs = inference_metric.evaluate_preds(preds, print_summary=True)`...
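To make the proposed interface concrete, a rough sketch; only the method name and the `print_summary` flag come from the discussion above, the class layout and internals are assumptions for illustration:

```python
# Hypothetical sketch of the renamed method, not the actual implementation.
class InferenceMetric:
    def __init__(self, metric_fn):
        self.metric_fn = metric_fn  # e.g. a COCO-style metric callable

    def evaluate_preds(self, preds, print_summary: bool = False) -> dict:
        logs = self.metric_fn(preds)  # compute the metric over the predictions
        if print_summary:
            print(logs)
        return logs

# Usage, matching the line above:
# logs = inference_metric.evaluate_preds(preds, print_summary=True)
```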
There is some user interest in this PR. What do we need to do to get this merged?
[…] is an alternative to GradCAM we might want to consider.
Yes. For a first implementation, I would like to mimic the interface of FastAI, where we freeze the backbone for n epochs and then train both backbone and head. Your version seems to...
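Roughly what I have in mind, as a sketch; `freeze`/`unfreeze`/`fit` are assumed learner methods in the fastai spirit, not icevision's actual API:

```python
# Sketch of the fastai-style schedule: fit the head with the backbone frozen
# for `freeze_epochs`, then unfreeze and fit everything. Names illustrative.
def fine_tune(learner, epochs: int, freeze_epochs: int = 1):
    learner.freeze()            # backbone params frozen, head trains alone
    learner.fit(freeze_epochs)
    learner.unfreeze()          # now backbone and head train together
    learner.fit(epochs)
```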
If we want to go for maximum performance, we should look into writing a version using numba.
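For example, a sketch under the assumption that the hot loop is something like run-length-encoding a flattened binary mask (`rle_encode` is hypothetical):

```python
# Illustrative numba sketch: a jitted run-length encoder over a flattened
# binary mask, the kind of tight Python loop numba compiles well. The
# COCO-style "count zeros first" convention is an assumption here.
import numpy as np
from numba import njit

@njit
def rle_encode(flat_mask: np.ndarray) -> np.ndarray:
    n = flat_mask.size
    out = np.empty(n + 1, dtype=np.int64)  # upper bound on number of runs
    current = 0  # start by counting a (possibly empty) run of zeros
    count = 0
    k = 0
    for i in range(n):
        if flat_mask[i] == current:
            count += 1
        else:
            out[k] = count
            k += 1
            current = flat_mask[i]
            count = 1
    out[k] = count
    return out[: k + 1]

# rle_encode(np.array([0, 0, 1, 1, 1, 0], dtype=np.uint8)) -> [2, 3, 1]
```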
Hey, I don't have much time at the moment, but I will have a look at it when I have time again.
@ai-fast-track @potipot This PR fixes a circular import that is currently in the lib. We should merge this ASAP; everything looks good to me. Can one of you have a second...