
ImageNet Training Dataset Preprocessing

Open StephenYangjz opened this issue 1 year ago • 6 comments

Hi, I am trying to train the dc-ae using the default setup (ImageNet). I saw that ImageNet here is being loaded from .npy files:


import numpy as np
from typing import Optional
from torch.utils.data import Dataset
from torchvision.datasets import DatasetFolder


class LatentImageNetDataProvider(BaseDataProvider):
    def __init__(self, cfg: LatentImageNetDataProviderConfig):
        super().__init__(cfg)
        self.cfg: LatentImageNetDataProviderConfig

    def build_datasets(self) -> tuple[Dataset, Optional[Dataset], Optional[Dataset]]:
        # Each sample is a precomputed latent stored as a .npy file,
        # discovered by DatasetFolder and loaded with np.load.
        train_dataset = DatasetFolder(self.cfg.data_dir, np.load, [".npy"])
        return train_dataset, None, None
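For context, torchvision's `DatasetFolder` discovers samples inside per-class subdirectories of `data_dir`. Below is a minimal, hypothetical sketch of writing latents into that layout; the function name `save_latents` and the idea of one `.npy` per sample are assumptions for illustration, not the repo's actual extraction script (which is linked in the README mentioned later in this thread).

```python
# Hypothetical sketch only: write one latent array per .npy file into
# per-class subfolders, the layout DatasetFolder expects to discover.
import os
import numpy as np


def save_latents(latents_by_class: dict[str, list[np.ndarray]], out_dir: str) -> None:
    """Save each latent as its own .npy file under a per-class subfolder."""
    for class_name, latents in latents_by_class.items():
        class_dir = os.path.join(out_dir, class_name)
        os.makedirs(class_dir, exist_ok=True)
        for i, latent in enumerate(latents):
            # Zero-padded filenames keep samples in a stable sort order.
            np.save(os.path.join(class_dir, f"{i:08d}.npy"), latent)
```

A directory produced this way can then be pointed to via `cfg.data_dir`, and `DatasetFolder(..., np.load, [".npy"])` will load each latent back as an ndarray.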

I am wondering how this file is prepared. Would it be possible to share a minimal working example of it? Thank you!

StephenYangjz avatar Dec 01 '24 22:12 StephenYangjz

Hi Stephen,

You can refer to the readme here to extract latent data. https://github.com/mit-han-lab/efficientvit/blob/5dd097d341a9cb2649733285d57e1efe6f35c0bd/applications/dc_ae/README.md?plain=1#L190

chenjy2003 avatar Dec 02 '24 00:12 chenjy2003

Hi @chenjy2003, thank you so much for the response. Would there be an easy way to fine-tune the autoencoder as well? I am thinking of training a dc-ae on the RUGD dataset, and I'm not sure whether the pretrained autoencoder would work out of the box. Do you by any chance have any insights? Any pointers would be greatly appreciated! Thank you.

StephenYangjz avatar Dec 02 '24 02:12 StephenYangjz

Hi Stephen,

We tried some images from the RUGD dataset and observed that our autoencoders worked well. Here are some examples: in each, the left half is the original image and the right half is the reconstruction. You can also use this script to test other images.
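A side-by-side comparison like the ones shown below can be assembled generically. This is a hedged sketch of only the compositing step; `original` and `reconstructed` would come from running an image through the autoencoder (the repo's actual test script, linked above, handles that part).

```python
# Sketch: paste an original image and its reconstruction onto one canvas,
# original on the left, reconstruction on the right.
from PIL import Image


def side_by_side(original: Image.Image, reconstructed: Image.Image) -> Image.Image:
    """Return a single image with the two inputs placed next to each other."""
    w, h = original.size
    canvas = Image.new("RGB", (2 * w, h))
    canvas.paste(original, (0, 0))
    # Resize defensively in case the reconstruction differs slightly in size.
    canvas.paste(reconstructed.resize((w, h)), (w, 0))
    return canvas
```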

[Reconstruction examples: creek_00001, park-1_00001, trail_00001, village_00003]

chenjy2003 avatar Dec 03 '24 14:12 chenjy2003

Thank you so much for getting back to me, @chenjy2003! May I also ask what the command would be for fine-tuning a DiT with the pretrained autoencoder, starting from the ImageNet-pretrained DiT presented in the paper? I think the readme only has the command for the UViT, not the DiT. Thank you!

StephenYangjz avatar Dec 04 '24 18:12 StephenYangjz

That's a good point. @chenjy2003, we should add the command to train DiT-XL on ImageNet 512x512.

han-cai avatar Dec 04 '24 18:12 han-cai

@StephenYangjz Thanks for your suggestion.

The training command for DiT-XL on ImageNet 512x512 is added here and here.

If you want to fine-tune from the ImageNet-pretrained checkpoint, you can append dit.pretrained_path=... to the command.

chenjy2003 avatar Dec 05 '24 05:12 chenjy2003