
Will the LVD-142M dataset or data processing codes be released?

Open XiaohuJoshua opened this issue 2 years ago • 5 comments

Thanks for the outstanding work. Do you have the plan to release the LVD-142M dataset or codes for data processing?

XiaohuJoshua avatar Apr 19 '23 08:04 XiaohuJoshua

Releasing LVD-142M is not something that we are considering, I am afraid. Re: open-sourcing the data curation code, this could depend on feedback and interest from the community.

patricklabatut avatar Apr 19 '23 22:04 patricklabatut

Would it be possible to share additional details regarding the deduplication part of the data curation pipeline? Or how that can be done for a custom dataset?

salonit77 avatar Apr 19 '23 23:04 salonit77

Would it be possible to share additional details regarding the deduplication part of the data curation pipeline? Or how that can be done for a custom dataset?

+1.

XiaohuJoshua avatar Apr 20 '23 02:04 XiaohuJoshua

Would it be possible to share additional details regarding the deduplication part of the data curation pipeline? Or how that can be done for a custom dataset?

It is probably something simple like idealo/imagededup. Edit: the procedure is applied to embeddings, as shown in Figure 3, rather than to images, so it is a bit more complex.
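For a quick image-level pass with that library, the usage would be something along these lines (untested here; the image directory path is a placeholder):

```python
# Perceptual-hash deduplication with idealo/imagededup (image level, not embeddings).
from imagededup.methods import PHash

phasher = PHash()
# Map each filename to its perceptual hash, then group near-identical images.
encodings = phasher.encode_images(image_dir="path/to/images")
duplicates = phasher.find_duplicates(encoding_map=encodings)
```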

In Figure 3, you can see that duplicates include:

  • an image that is identical to one in the curated dataset,
  • another image that has a different aspect ratio but is otherwise nearly identical to another image in the uncurated dataset.

Deduplication

Edit: you can find more information in the paper. It is a bit more complex, as it is done on the embeddings. However, I believe a method using image hashes is also used in a first deduplication pass, as "PCA hash" is mentioned. 🤔

Paper

Appendix
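For what it's worth, here is a minimal sketch of how those two stages could look, assuming pre-computed embeddings (e.g. from SSCD). The function names, bit count, and similarity threshold are my own placeholders, not values from the paper:

```python
# Hypothetical two-stage deduplication sketch, NOT the authors' code.
# Stage 1: a crude "PCA hash" to flag cheap exact-duplicate candidates.
# Stage 2: cosine-similarity search on the embeddings for near-duplicates.
import numpy as np
import faiss

def pca_hash(embeddings: np.ndarray, n_bits: int = 64) -> np.ndarray:
    """Project embeddings onto the top principal components and keep only
    the signs, giving one n_bits binary code per image. Images with
    identical codes are duplicate candidates."""
    x = np.ascontiguousarray(embeddings, dtype=np.float32)  # faiss wants float32
    mat = faiss.PCAMatrix(x.shape[1], n_bits)
    mat.train(x)
    return (mat.apply_py(x) > 0).astype(np.uint8)

def near_duplicate_pairs(embeddings: np.ndarray, threshold: float = 0.95):
    """Return index pairs (i, j) whose cosine similarity exceeds `threshold`.
    Assumes rows are L2-normalized, so inner product == cosine similarity."""
    x = np.ascontiguousarray(embeddings, dtype=np.float32)
    index = faiss.IndexFlatIP(x.shape[1])  # exact inner-product search
    index.add(x)
    lims, _, ids = index.range_search(x, threshold)
    pairs = []
    for i in range(x.shape[0]):
        for j in ids[lims[i]:lims[i + 1]]:
            if j > i:  # drop self-matches and symmetric duplicates
                pairs.append((i, int(j)))
    return pairs
```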

woctezuma avatar Apr 20 '23 07:04 woctezuma

Thanks for your reply.

XiaohuJoshua avatar Apr 20 '23 09:04 XiaohuJoshua

@woctezuma Thank you for the reply. I was able to follow section A.3 and get some deduplication results.

salonit77 avatar Apr 20 '23 18:04 salonit77

@salonit77 hi :) could you share the code?

Using #56 instead to keep track of requests about data curation code. Also happy to provide clarifications on the procedure.

patricklabatut avatar Apr 24 '23 22:04 patricklabatut


Thanks for your reply! I have a question about the deduplication method SSCD mentioned in your paper. I would like to confirm: is SSCD only used to extract embeddings, and is the Faiss library then used to compute the similarity between embeddings?
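To confirm my understanding, is it roughly like the sketch below? (The checkpoint name is one of the TorchScript models released in the facebookresearch/sscd-copy-detection repo; everything else, including `images` and `k=5`, is my own guess.)

```python
# My guess at the pipeline: SSCD only extracts descriptors, Faiss only searches.
import faiss
import torch
import torch.nn.functional as F

# 1. SSCD as a frozen feature extractor (TorchScript model from sscd-copy-detection).
model = torch.jit.load("sscd_disc_mixup.torchscript.pt").eval()
with torch.no_grad():
    embeddings = model(images)  # `images`: a preprocessed (N, 3, H, W) batch
embeddings = F.normalize(embeddings, dim=1).numpy()

# 2. Faiss for the similarity computation between embeddings.
index = faiss.IndexFlatIP(embeddings.shape[1])  # cosine similarity after normalization
index.add(embeddings)
similarities, neighbor_ids = index.search(embeddings, k=5)  # top-5 neighbors per image
```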

yyyyyyfs avatar Jun 28 '23 06:06 yyyyyyfs

Would it be possible to share additional details regarding the deduplication part of the data curation pipeline? Or how that can be done for a custom dataset?

It is probably something simple like idealo/imagededup. Edit: the procedure is applied to embeddings, as shown in Figure 3, rather than to images, so it is a bit more complex.

In Figure 3, you can see that duplicates include:

  • an image that is identical to one in the curated dataset,
  • another image that has a different aspect ratio but is otherwise nearly identical to another image in the uncurated dataset.

Deduplication

Edit: you can find more information in the paper. It is a bit more complex, as it is done on the embeddings. However, I believe a method using image hashes is also used in a first deduplication pass, as "PCA hash" is mentioned. 🤔

Paper

Appendix

Thanks for your reply! About implementation details, I still have some questions. If I have 5B images but only a few GPUs, how can I do this work? Reading a large amount of data into memory at the same time is also a serious problem. Do you have any good suggestions?

yyyyyyfs avatar Jun 28 '23 07:06 yyyyyyfs

Releasing LVD-142M is not something that we are considering, I am afraid. Re: open-sourcing the data curation code, this could depend on feedback and interest from the community.

So, the editor said something non-existent on OpenReview :D https://openreview.net/forum?id=a68SUt6zFt

forever208 avatar Feb 20 '24 01:02 forever208

Soo, release it!! 🥇

dvikdvik avatar Feb 21 '24 22:02 dvikdvik

Would it be possible to share additional details regarding the deduplication part of the data curation pipeline? Or how that can be done for a custom dataset?

It is probably something simple like idealo/imagededup. Edit: the procedure is applied to embeddings, as shown in Figure 3, rather than to images, so it is a bit more complex. In Figure 3, you can see that duplicates include:

  • an image that is identical to one in the curated dataset,
  • another image that has a different aspect ratio but is otherwise nearly identical to another image in the uncurated dataset.

Deduplication

Edit: you can find more information in the paper. It is a bit more complex, as it is done on the embeddings. However, I believe a method using image hashes is also used in a first deduplication pass, as "PCA hash" is mentioned. 🤔

Paper

Appendix

Thanks for your reply! About implementation details, I still have some questions. If I have 5B images but only a few GPUs, how can I do this work? Reading a large amount of data into memory at the same time is also a serious problem. Do you have any good suggestions?

@yyyyyyfs (Not the author, just want to put in my two cents.) Processing 5B images with a few GPUs would take years. Personally, I'd store the images in GCS buckets and try to apply for their free TPU units to process them. My personal experience is that a cloud provider like GCP usually optimizes the connection between storage and compute units very well, so you don't have to worry about the speed of loading images into memory. You can just treat it as an embarrassingly parallel process.
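In code, the embarrassingly parallel part could look roughly like this sketch (the bucket path, helper functions `read_image_list` / `load_and_preprocess` / `embed`, and shard sizes are all made up):

```python
# Hypothetical sharded embedding job: launch one process per shard id,
# stream a slice of the image list, embed it in small batches, write results out.
# Nothing close to the full 5B images is ever held in memory at once.
import sys
import numpy as np

SHARD_SIZE = 100_000  # images per worker; tune to your storage and accelerator
BATCH_SIZE = 256

def process_shard(shard_id: int) -> None:
    # read_image_list / load_and_preprocess / embed are placeholder helpers.
    paths = read_image_list("gs://my-bucket/image_list.txt",
                            start=shard_id * SHARD_SIZE, count=SHARD_SIZE)
    chunks = []
    for i in range(0, len(paths), BATCH_SIZE):
        batch = load_and_preprocess(paths[i:i + BATCH_SIZE])
        chunks.append(embed(batch))  # e.g. an SSCD forward pass
    out = np.concatenate(chunks)
    np.save(f"/tmp/embeddings_shard_{shard_id:06d}.npy", out)  # then upload to the bucket

if __name__ == "__main__":
    process_shard(int(sys.argv[1]))  # one shard id per TPU/GPU worker
```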

rfan-debug avatar May 31 '24 19:05 rfan-debug