ImageNet Training Dataset Preprocessing
Hi, I am trying to train DC-AE using the default setup (ImageNet). I saw that ImageNet here is loaded from .npy files:
import numpy as np
from typing import Optional
from torch.utils.data import Dataset
from torchvision.datasets import DatasetFolder

class LatentImageNetDataProvider(BaseDataProvider):
    def __init__(self, cfg: LatentImageNetDataProviderConfig):
        super().__init__(cfg)
        self.cfg: LatentImageNetDataProviderConfig

    def build_datasets(self) -> tuple[Dataset, Optional[Dataset], Optional[Dataset]]:
        # Labels come from folder names, per torchvision's DatasetFolder.
        train_dataset = DatasetFolder(self.cfg.data_dir, np.load, [".npy"])
        return train_dataset, None, None
I am wondering how this file is prepared. Would it be possible to share a minimal working example of the file? Thank you!
Hi Stephen,
You can refer to the README here for extracting the latent data: https://github.com/mit-han-lab/efficientvit/blob/5dd097d341a9cb2649733285d57e1efe6f35c0bd/applications/dc_ae/README.md?plain=1#L190
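For reference, here is a minimal sketch of what that extraction step produces, assuming the DCAE_HF interface from the efficientvit model zoo (the model name, the encode call, the 512 crop, and the paths are all assumptions; the script in the README is the authoritative version). It writes one .npy latent per image into per-class folders, which is exactly the layout DatasetFolder(self.cfg.data_dir, np.load, [".npy"]) expects:

    import os
    import numpy as np
    import torch
    from PIL import Image
    from torchvision import transforms
    from efficientvit.ae_model_zoo import DCAE_HF  # assumed import path

    device = "cuda" if torch.cuda.is_available() else "cpu"
    # Model name is illustrative; use the autoencoder you plan to train with.
    dc_ae = DCAE_HF.from_pretrained("mit-han-lab/dc-ae-f32c32-in-1.0").to(device).eval()

    preprocess = transforms.Compose([
        transforms.Resize(512),
        transforms.CenterCrop(512),
        transforms.ToTensor(),
        transforms.Normalize([0.5] * 3, [0.5] * 3),  # scale pixels to [-1, 1]
    ])

    image_root, latent_root = "imagenet/train", "latent/imagenet_512/train"
    for class_name in sorted(os.listdir(image_root)):
        os.makedirs(os.path.join(latent_root, class_name), exist_ok=True)
        for fname in sorted(os.listdir(os.path.join(image_root, class_name))):
            img = Image.open(os.path.join(image_root, class_name, fname)).convert("RGB")
            x = preprocess(img).unsqueeze(0).to(device)
            with torch.no_grad():
                latent = dc_ae.encode(x)  # assumed API; e.g. (1, 32, 16, 16) for f32c32 at 512px
            out_name = os.path.splitext(fname)[0] + ".npy"
            np.save(os.path.join(latent_root, class_name, out_name), latent.squeeze(0).cpu().numpy())

Each training sample then comes back from the dataset as a (latent_array, class_index) pair.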
Hi @chenjy2003, thank you so much for the response. Would there be an easy way to fine-tune the autoencoder as well? I am thinking of training a DC-AE on the RUGD dataset, and I'm not sure whether the pretrained autoencoder would work out of the box. Do you by any chance have any insights? Any pointers would be greatly appreciated! Thank you.
Hi Stephen,
We tried some images from the RUGD dataset and observed that our autoencoders worked well. Here are some examples. The left part is the original image and the right part is the reconstructed image. You can also use this script to test other images.
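In case it is useful for quickly checking your own images, here is a sketch of such a side-by-side reconstruction test. This is not the repo's demo script: the DCAE_HF interface, the model name, and the encode/decode calls are assumptions, and rugd_sample.png is a placeholder path.

    import torch
    import torchvision.transforms as T
    from PIL import Image
    from efficientvit.ae_model_zoo import DCAE_HF  # assumed import path

    device = "cuda" if torch.cuda.is_available() else "cpu"
    dc_ae = DCAE_HF.from_pretrained("mit-han-lab/dc-ae-f32c32-in-1.0").to(device).eval()

    preprocess = T.Compose([
        T.Resize(512), T.CenterCrop(512), T.ToTensor(),
        T.Normalize([0.5] * 3, [0.5] * 3),  # scale pixels to [-1, 1]
    ])
    x = preprocess(Image.open("rugd_sample.png").convert("RGB")).unsqueeze(0).to(device)

    with torch.no_grad():
        recon = dc_ae.decode(dc_ae.encode(x))  # assumed encode/decode API

    # Original on the left, reconstruction on the right.
    pair = torch.cat([x, recon], dim=-1).squeeze(0)
    pair = (pair * 0.5 + 0.5).clamp(0, 1)  # undo the [-1, 1] normalization
    T.ToPILImage()(pair.cpu()).save("recon_check.png")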
Thank you so much for getting back to me, @chenjy2003! May I also ask what the command would be for fine-tuning a DiT with the pretrained autoencoder, starting from the ImageNet-pretrained DiT presented in the paper? I think the README only has the command for the UViT, not the DiT. Thank you!
That's a good point. @chenjy2003, we should add the command to train DiT-XL on ImageNet 512x512.