VAE - Training
I'm having a hard time recreating your results. I'm trying to retrain the VAE from scratch: LR 1e-4, Adam, 512 embedding size. The validation error seems to be leveling off, and I don't think brute forcing to 150K epochs would solve this issue.
Would it be possible to share your loss function curves?
Need to turn debug mode off
I'm running into the same problem. Could you please share the correct loss curve?
My problem ended up being a debug flag that the authors used for iterating quickly: debug mode only loads ~100 examples. Also, the batch size needs to be increased to 256 for the VAE. Make sure you are using the stage1 config.
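To illustrate why this failure mode is so easy to miss, here is a minimal sketch of how a debug flag can silently truncate a dataset. The names (`DEBUG`, `load_dataset`) are hypothetical, not the repo's actual API:

```python
# Hypothetical sketch: a debug flag that silently truncates the dataset.
# DEBUG and load_dataset are illustrative names, not the repo's real API.
DEBUG = False  # make sure this is off before a real training run

def load_dataset(samples, debug=DEBUG):
    """Return the full dataset, or only ~100 examples in debug mode."""
    if debug:
        return samples[:100]  # fast iteration, but training will plateau
    return samples

full = list(range(10_000))
print(len(load_dataset(full)), len(load_dataset(full, debug=True)))
```

Training on ~100 examples still produces plausible-looking loss curves, which is why the validation error just levels off instead of failing loudly.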
In the dataloader, there is also a caching function that checks for the existence of a ./tmp folder and loads a .pkl file instead of reloading the entire dataset.
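A minimal sketch of that caching pattern (the function name and cache path here are illustrative, not the repo's exact code). The important gotcha: a cache written while debug mode was on will be reused silently, so delete ./tmp after fixing the flag:

```python
import os
import pickle

def load_with_cache(build_fn, cache_path="./tmp/dataset.pkl"):
    """Load the dataset from a cached .pkl if present, else build and cache it.

    Caution: a stale cache (e.g. one written while debug mode was on and
    containing only ~100 examples) is reused silently -- delete the ./tmp
    folder to force a full rebuild.
    """
    if os.path.exists(cache_path):
        with open(cache_path, "rb") as f:
            return pickle.load(f)
    data = build_fn()  # expensive full-dataset load
    os.makedirs(os.path.dirname(cache_path), exist_ok=True)
    with open(cache_path, "wb") as f:
        pickle.dump(data, f)
    return data
```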
Thanks, that really helped! But my commit loss looks strange. Does yours look the same?
Yes
@palmex @aixiaodewugege I am so sorry about your troubles caused by "this debug flag". If you have any other questions, feel free to ask. I'm more than happy to help.
Hello, I'm a newcomer to VQ-VAEs and am currently working on training a face motion VQ-VAE using your code. However, the validation loss curve appears to indicate overfitting. Could you please offer some advice on addressing this issue?
@aixiaodewugege Based on the loss-feature/train and loss-feature/val curves, there is clear overfitting. The commit loss reflects the efficiency of the codebook, which seems somewhat correct.
One question that could help: how does this feature differ from the FLAME parameters (I guess "flame" refers to the parameters)? A general way to address this problem is to apply data augmentation and to use dropout with masking on both the input features and the loss during training.
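The dropout-with-masking idea on input features could look something like the sketch below (a minimal numpy version; the function name and drop probability are assumptions, and in a real PyTorch training loop you would use `torch.nn.Dropout` or an equivalent mask instead):

```python
import numpy as np

def mask_features(x, drop_prob=0.2, rng=None):
    """Randomly zero out entries of the input features (dropout-style masking).

    Apply this during training only; at evaluation, pass features through
    unchanged. drop_prob=0.2 is an illustrative default, not a tuned value.
    """
    rng = np.random.default_rng(rng)
    mask = rng.random(x.shape) >= drop_prob  # True = keep, False = drop
    return x * mask

features = np.ones((4, 8))           # e.g. a small batch of motion features
noisy = mask_features(features, drop_prob=0.5, rng=0)
```

The same masking idea can be mirrored on the loss side, e.g. excluding the dropped positions from the reconstruction term so the model is not penalized for inputs it never saw.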
@ChenFengYe Thank you for your reply! It was really helpful. The FLAME model represents the vertex loss. After adding dropout, the results look better, but I still notice a gap between the training and validation loss. How can I minimize this gap?
Additionally, could you clarify what you mean by data augmentation?
@aixiaodewugege Data augmentation could help to minimize this gap. One concrete option is applying RTS (rotation, translation, scale) to the training data (e.g., vertex XYZ coordinates). You can design the augmentations to match your target task.
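The RTS augmentation on XYZ vertices could be sketched as follows (a minimal numpy version; the parameter ranges are illustrative placeholders, and for face meshes you would likely restrict the rotation axis and magnitudes to what is plausible for your data):

```python
import numpy as np

def rts_augment(vertices, max_angle=0.2, max_trans=0.05,
                scale_range=(0.9, 1.1), rng=None):
    """Apply a random Rotation, Translation, and Scale to (N, 3) XYZ vertices.

    The rotation here is about the Z axis only and the ranges are
    illustrative defaults -- tune both to your target task.
    """
    rng = np.random.default_rng(rng)
    theta = rng.uniform(-max_angle, max_angle)   # rotation angle (radians)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    t = rng.uniform(-max_trans, max_trans, size=3)  # random translation
    scale = rng.uniform(*scale_range)               # uniform random scale
    return scale * (vertices @ R.T) + t

verts = np.random.default_rng(0).normal(size=(100, 3))
augmented = rts_augment(verts, rng=0)
```

Applying a fresh random RTS to each training sample every epoch effectively enlarges the dataset, which is what narrows the train/validation gap.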