VoTT
Documentation for Using Custom Model (ssd mobilenet v2) Active Learning
Is your feature request related to a problem? Please describe. I'm having trouble using a model retrained from COCO SSD MobileNet v2. The model was retrained on custom data using the TensorFlow Object Detection API (https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_locally.md).
Describe the solution you'd like I'm not really sure where the breakdown is, so it would be very helpful to have brief instructions on how to go from a retrained frozen graph or SavedModel to the TensorFlow.js model format needed for active learning in VoTT.
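A conversion sketch, assuming the `tensorflowjs` pip package and a SavedModel exported by the Object Detection API; the exact flags vary by converter version, and both paths are placeholders:

```shell
# Install the TF.js converter (assumes a recent tensorflowjs release).
pip install tensorflowjs

# Convert the exported SavedModel into the TF.js graph-model format
# (a model.json topology file plus binary weight shards).
tensorflowjs_converter \
    --input_format=tf_saved_model \
    --output_format=tfjs_graph_model \
    ./exported_model/saved_model \
    ./web_model
```

Older converter releases used different flags (e.g. `--output_node_names`), so check `tensorflowjs_converter --help` for your installed version.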
Thanks!
@matt-virgo Did you get it working? I used https://github.com/tensorflow/tfjs-converter to convert my Keras model (not SSD, so I'm not sure whether that could be the problem) to the TF.js format (the same format used in the project with SSD), and I am getting "error loading active learning model".
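One thing worth checking: the loader appears to fetch a whole model directory from the configured URL, so the error can simply mean a missing file rather than a bad conversion. A plausible layout, assuming typical tfjs-converter output plus the label file VoTT's sample model ships with (file names here are illustrative, not confirmed against the VoTT source):

```
web_model/
    model.json            # TF.js model topology and weight manifest
    group1-shard1of2.bin  # binary weight shards referenced by model.json
    group1-shard2of2.bin
    classes.json          # label map used to tag the predicted regions
```

Note also that converting a Keras model produces a TF.js *layers* model by default, while an SSD export converts to a *graph* model; if VoTT loads one format and you supply the other, loading will fail.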
@CTA-Darek
me too.
Active learning is a great feature for this application. It would be great if there were supporting documentation, especially on how to convert an existing model to the right format.
I checked out the code. It is not general enough to support different models, so to do as little as possible in TS/JS (and because we already have a webservice returning predictions for an image), we send the image to our webservice instead of passing it through the loaded active-learning model, and then insert our response in place of the active-learning model's output.
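A minimal sketch of that workaround's response-mapping step, assuming a hypothetical webservice response shape (`box` in normalized `[ymin, xmin, ymax, xmax]` order, as the Object Detection API emits) and a simplified region shape; VoTT's actual region model carries more fields:

```typescript
// Hypothetical shape of one detection returned by the custom webservice.
interface Detection {
    box: [number, number, number, number]; // [ymin, xmin, ymax, xmax], normalized 0..1
    score: number;
    label: string;
}

// Simplified stand-in for the region objects VoTT draws on the canvas.
interface Region {
    tag: string;
    confidence: number;
    x: number;      // pixel coordinates of the bounding box origin
    y: number;
    width: number;
    height: number;
}

// Map webservice detections into region objects, dropping low-confidence
// hits and scaling normalized coordinates to the image's pixel size.
function toRegions(
    detections: Detection[],
    imageWidth: number,
    imageHeight: number,
    minScore = 0.5,
): Region[] {
    return detections
        .filter((d) => d.score >= minScore)
        .map((d) => {
            const [ymin, xmin, ymax, xmax] = d.box;
            return {
                tag: d.label,
                confidence: d.score,
                x: xmin * imageWidth,
                y: ymin * imageHeight,
                width: (xmax - xmin) * imageWidth,
                height: (ymax - ymin) * imageHeight,
            };
        });
}
```

The fetch call to the webservice and the insertion point inside VoTT are project-specific and omitted here; the mapping above is the part that replaces the active-learning model's output.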
+1! May we have documentation and an updated v2.x example for using a custom model with active learning? :)
@CTA-Darek Care to elaborate, please? I'm trying to use a Keras RetinaNet model to ease the burden of manual annotation. According to your answer, one way would be to "insert a response from model instead of the active learning feature". How would I go about doing that?