Will the LVD-142M dataset or data processing codes be released?
Thanks for the outstanding work. Do you have any plans to release the LVD-142M dataset or the code used for data processing?
Releasing LVD-142M is not something that we are considering, I am afraid. Re: open-sourcing the data curation code, this could depend on feedback and interest from the community.
Would it be possible to share additional details regarding the deduplication step used in the data curation pipeline, or how that can be done for a custom dataset?
+1.
It is probably something simple like idealo/imagededup.
Edit: the procedure is applied to embeddings, as shown in Figure 3, rather than to images, so it is a bit more complex.
In Figure 3, you can see that duplicates include:
- an image that is identical to one in the curated dataset,
- another image that has a different aspect ratio but is otherwise nearly identical to one in the uncurated dataset.

Edit: You can find more information in the paper. It is a bit more complex, as it is done on the embeddings. However, I believe a method using image hashes is also used in a first deduplication pass, as "PCA hash" is mentioned. 🤔
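In case it helps for a custom dataset, a first hash-based pass with idealo/imagededup could look like the sketch below. This is not the paper's pipeline, just a minimal example; the image directory is a placeholder.

```python
# Sketch of a hash-based first deduplication pass with idealo/imagededup
# (pip install imagededup). The image directory is a placeholder.
from imagededup.methods import PHash

phasher = PHash()

# Compute perceptual hashes for every image in the folder.
encodings = phasher.encode_images(image_dir="path/to/images")

# Map each image to the list of images whose hashes are near-identical.
duplicates = phasher.find_duplicates(encoding_map=encodings)

# Greedily keep one representative per duplicate group.
to_drop = set()
for filename, dups in duplicates.items():
    if filename not in to_drop:   # keep this image as the representative
        to_drop.update(dups)      # drop the images that duplicate it

print(f"{len(to_drop)} images flagged as duplicates")
```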


Thanks for your reply.
@woctezuma Thank you for the reply. I was able to follow the A3 section and get some deduplication results.
@salonit77 hi :) could you share the code?
Using #56 instead to keep track of requests about data curation code. Also happy to provide clarifications on the procedure.
Thanks for your reply! I have a question about the deduplication method SSCD mentioned in your paper. I would like to confirm: is SSCD only used to extract embeddings, and is the Faiss library then used to compute the similarity between embeddings?
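If that is the pipeline, a minimal sketch of it could look like the following, assuming the embeddings (e.g. SSCD descriptors) are already computed and saved as a NumPy array; the neighbour count and similarity threshold are made-up placeholders, not values from the paper.

```python
# Sketch only: cosine-similarity duplicate search over precomputed embeddings
# with Faiss. The embedding file, k and the threshold are placeholders.
import faiss
import numpy as np

embeddings = np.load("embeddings.npy").astype("float32")  # (num_images, dim)
faiss.normalize_L2(embeddings)        # inner product == cosine similarity

index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(embeddings)

k = 5                                 # neighbours to inspect per image
threshold = 0.9                       # hypothetical similarity cut-off
similarities, neighbors = index.search(embeddings, k)

duplicate_pairs = [
    (i, int(j))
    for i, (sims, idxs) in enumerate(zip(similarities, neighbors))
    for s, j in zip(sims, idxs)
    if j != i and s >= threshold
]
print(f"{len(duplicate_pairs)} candidate duplicate pairs")
```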
Thanks for your reply~ About implementation details, I still have some questions. If I have 5B images but only a few GPUs, how can I do this work? Reading a large amount of data into memory simultaneously is also a serious problem. Do you have any good suggestions?
So, the editor said something non-existent on OpenReview :D https://openreview.net/forum?id=a68SUt6zFt
Soo, release it!! 🥇
@yyyyyyfs (Not the author, just want to put in my two cents :) Processing 5B images with a few GPUs would take years. Personally, I'd store the images in GCS buckets and try to apply for their free TPU units to process them. My personal experience is that a cloud provider like GCP usually optimizes the connection between storage and compute very well, so you don't have to worry about the speed of loading images into memory. You may just treat it as an embarrassingly data-parallel process.
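For the data-parallel part, a very rough sketch could look like the script below. Everything here is hypothetical (the `--shard-id`/`--num-shards` flags, the path listing file, and the `embed_batch` placeholder); it just illustrates splitting the image list into independent shards so each worker streams only its own slice instead of loading everything into memory.

```python
# Rough sketch of an embarrassingly parallel embedding job: each worker only
# streams its own shard of the file list, so nothing is loaded all at once.
import argparse

import numpy as np
from PIL import Image


def embed_batch(images):
    # Placeholder: run your embedding model (e.g. an SSCD-like network) here.
    return np.zeros((len(images), 512), dtype="float32")


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--shard-id", type=int, required=True)
    parser.add_argument("--num-shards", type=int, required=True)
    args = parser.parse_args()

    with open("image_paths.txt") as f:
        paths = [line.strip() for line in f]

    # Each worker handles every num_shards-th path: no coordination needed.
    my_paths = paths[args.shard_id :: args.num_shards]

    batch, outputs = [], []
    for path in my_paths:
        batch.append(Image.open(path).convert("RGB"))
        if len(batch) == 256:
            outputs.append(embed_batch(batch))
            batch = []
    if batch:
        outputs.append(embed_batch(batch))

    np.save(f"embeddings_shard{args.shard_id}.npy", np.concatenate(outputs))


if __name__ == "__main__":
    main()
```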