dinov2
PyTorch code and models for the DINOv2 self-supervised learning method.
Related issues:
- #6
- #14
- #46
- #97
Hi, in Table 9 ("Evaluation of frozen features on instance-level recognition"), the performance reported for OpenCLIP-G/14 is 50.7 for Oxford-M and 19.7 for Oxford-H. However, we...
Thank you for sharing Figure 1 from the paper, which showcases the mapping of features to RGB channels using PCA. I found it to be really impressive! I was wondering...
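The Figure 1 visualization mentioned above can be approximated with a short sketch: project the patch features onto their top three principal components and map each component to one RGB channel. This is an illustrative reconstruction, not the paper's exact pipeline; the feature array here is a random stand-in for real DINOv2 patch features.

```python
import numpy as np

def pca_to_rgb(feats: np.ndarray) -> np.ndarray:
    """Project patch features (N, D) onto their top-3 principal
    components and min-max scale each component into [0, 1] so the
    result can be rendered as an RGB image."""
    centered = feats - feats.mean(axis=0)
    # SVD of the centered features gives the principal directions in vt.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt[:3].T                      # (N, 3)
    lo, hi = proj.min(axis=0), proj.max(axis=0)
    return (proj - lo) / (hi - lo + 1e-8)

# Toy stand-in: a 16x16 grid of 64-dim "patch features".
rng = np.random.default_rng(0)
feats = rng.normal(size=(256, 64))
rgb = pca_to_rgb(feats).reshape(16, 16, 3)
print(rgb.shape)  # (16, 16, 3)
```

With real features from a DINOv2 backbone, `rgb` can be passed directly to an image viewer; the paper additionally thresholds the first component to separate foreground from background.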
How to evaluate the model on image retrieval datasets such as Oxford-H? Thanks a lot!
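Absent the official evaluation code, the standard retrieval protocol behind the question above can be sketched as: rank the database by cosine similarity of frozen features to each query, then average the per-query average precisions. All names and the toy data below are illustrative, not the repo's API.

```python
import numpy as np

def average_precision(ranked_relevance) -> float:
    """AP for one query, given boolean relevance flags in ranked order."""
    hits, precisions = 0, []
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)
    return float(np.mean(precisions)) if precisions else 0.0

def mean_average_precision(query_feats, db_feats, query_labels, db_labels) -> float:
    """Rank the database by cosine similarity to each query, average the APs."""
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    d = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    sims = q @ d.T
    aps = []
    for i in range(len(q)):
        order = np.argsort(-sims[i])                # best matches first
        aps.append(average_precision(db_labels[order] == query_labels[i]))
    return float(np.mean(aps))

# Toy check: a database containing each query exactly yields perfect mAP.
q = np.array([[1.0, 0.0], [0.0, 1.0]])
db = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
print(mean_average_precision(q, db, np.array([0, 1]), np.array([0, 1, 2])))  # 1.0
```

Oxford-style benchmarks additionally distinguish easy/medium/hard positives and junk images per query; the revisited-Oxford evaluation scripts handle those rules, but the similarity ranking itself looks like the above.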
- Fixed type errors found by `mypy`. - Sorted imports with `isort`. - Removed unused args in `_get_entries_path`, `_get_class_ids_path`, and `_get_class_names_path`. - Added missing requirements (`PLW`, which gets `numpy`,...
This is the function in the official demo. I want to write a piece of Python code, then execute a script command, and pass in parameters such as the image...
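One common way to run a demo function from a script command, as asked above, is to wrap it with `argparse`. The flags and the default model name below are hypothetical placeholders; adjust them to whatever the actual demo function expects.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical flags mirroring what an image demo might need.
    p = argparse.ArgumentParser(description="Run the demo on one image.")
    p.add_argument("--image", required=True, help="path to the input image")
    p.add_argument("--model", default="dinov2_vitb14", help="torch.hub model name")
    p.add_argument("--output", default="out.png", help="where to save the result")
    return p

# Example invocation with an explicit argument list (no real image needed);
# in a script you would call parse_args() with no arguments to read sys.argv.
args = build_parser().parse_args(["--image", "cat.jpg"])
print(args.image, args.model, args.output)
```

The parsed namespace can then be forwarded to the demo function, e.g. `run_demo(args.image, args.model)` for whatever that function is actually named.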
Hi, I tried to reproduce the evaluation numbers in Table 4 and Table 6 of the paper. I downloaded the backbones and linear classifiers from the readme and composed the...
Thanks so much for this inspiring and excellent work! I implemented the patch matching but it did not perform as well as the demo in the paper. Could you introduce more...
![image](https://github.com/facebookresearch/dinov2/assets/24391451/14eb4985-95d0-4c75-b635-8f059ba09c54) Do you still get this type of correspondence without iBoT? I assume patch-level features receive zero supervision signal without iBoT; is this correct?
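The patch correspondences discussed in the last two threads are typically computed by nearest-neighbor matching in feature space. A minimal sketch, assuming patch features are already extracted as `(num_patches, dim)` arrays (the data below is synthetic, not DINOv2 output): match by cosine similarity and keep only mutual nearest-neighbor pairs.

```python
import numpy as np

def mutual_nearest_patches(feats_a: np.ndarray, feats_b: np.ndarray):
    """Match patches between two images by cosine similarity, keeping
    only mutual nearest-neighbor pairs -- a common way to visualize
    patch-level correspondences."""
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    sims = a @ b.T
    ab = sims.argmax(axis=1)    # best match in B for each patch of A
    ba = sims.argmax(axis=0)    # best match in A for each patch of B
    return [(i, int(j)) for i, j in enumerate(ab) if ba[j] == i]

# Synthetic check: image B is image A with its patch order reversed,
# so patch i of A should match patch 7 - i of B.
rng = np.random.default_rng(1)
fa = rng.normal(size=(8, 16))
fb = fa[::-1].copy()
matches = mutual_nearest_patches(fa, fb)
print(matches)
```

The mutual-NN filter discards one-directional matches, which usually removes most of the background noise visible in raw argmax matching.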