Matthias Minderer
We have a [colab](https://colab.research.google.com/github/google-research/scenic/blob/main/scenic/projects/owl_vit/notebooks/OWL_ViT_Export_JAX_model_to_TensorFlow_SavedModel.ipynb) that shows how to convert the JAX model to a TensorFlow `tf.SavedModel`. I believe you can then convert the `tf.SavedModel` to TensorRT with something like this...
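For reference, the TensorRT step could look roughly like the sketch below (this is not the original snippet from the comment; the directory names are placeholders, and TF-TRT needs a GPU build of TensorFlow with TensorRT available):

```python
# Sketch: convert the exported tf.SavedModel to a TensorRT-optimized
# SavedModel using TF-TRT. Paths are placeholders.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir='owl_vit_saved_model',  # path written by the export colab
)
converter.convert()   # replaces supported subgraphs with TensorRT ops
converter.save('owl_vit_saved_model_trt')  # writes the converted SavedModel
```

Precision and workspace options can also be passed to the converter; defaults are used here to keep the sketch minimal.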
We're working on releasing the training code but cannot give a precise ETA yet. It will take at least a few more weeks. I'll keep you posted.
We just published the [training code](https://github.com/google-research/scenic/tree/main/scenic/projects/owl_vit#training). Please let us know if you have any questions.
We just added fine-tuning instructions to the [README](https://github.com/google-research/scenic/tree/main/scenic/projects/owl_vit#fine-tuning). The config is very similar to the from-scratch config. The only crucial difference is that instead of `config.init_from.codebase = 'clip'`, it uses...
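To illustrate the kind of override involved (the actual fine-tuning values are documented in the linked README; the checkpoint path below is a hypothetical placeholder, not a real artifact):

```python
# Illustrative scenic-style ml_collections config fragment only.
import ml_collections


def get_init_config() -> ml_collections.ConfigDict:
  init = ml_collections.ConfigDict()
  # From-scratch training initializes the image/text towers from CLIP:
  init.codebase = 'clip'
  # A fine-tuning config instead points at a pretrained OWL-ViT checkpoint
  # (placeholder; see the README for the real field and value):
  # init.checkpoint_path = '/path/to/owl_vit_checkpoint'
  return init
```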
Thanks Niels!
Hi Francesco, a Dockerfile sounds like a great idea! Unfortunately, I have little experience with Docker. Would you be willing to draft a Dockerfile based on the [quickstart...
Ah, I saw that you provided this in #533; I'll test it.
Which model are you interested in?
This may be due to a JAX version issue. Could you try again in a freshly installed environment and let me know how it goes? When I test it in...
@stevebottos thank you for sharing your ideas and code. In our experience, fine-tuning OWL-ViT end-to-end, without any modifications, also works pretty well in the closed-vocab and few-shot settings (10s to...