MARLIN
[CVPR] MARLIN: Masked Autoencoder for facial video Representation LearnINg
Hello, can you please share some sample output videos, especially for the Wav2Lip comparison? Thanks
Hi, can you release the model weights for the deepfake detection task? Thanks
Here is what I get when running the training code: `C:\Users\AR\Desktop\marlin\MARLIN>python train.py --config config/pretrain/marlin_vit_base.yaml --data_dir C:\Users\AR\Desktop\marlin\MARLIN\trainingData\YouTubeFaces --n_gpus 1 --num_workers 8 --batch_size 16 --epochs 2000 --official_pretrained C:\Users\AR\Desktop\marlin\MARLIN\videomae\checkpoint_vitb.pth` It prints: `_IncompatibleKeys(missing_keys=['encoder.pos_embedding.emb', 'decoder.pos_embedding.emb',...`
Thanks for your wonderful work. I wonder whether directly using the weights provided by VideoMAE for initialization leads to an unfair comparison of the pre-training tasks?
Thank you for sharing this wonderful work! I found that a Lightning dependency and the epoch parameter in `_cosine_scheduler_fn(...)` in `Marlin/model/marlin.py` cause an error for me. Somehow the epoch parameter above...
Is there any official source for this integration? I have a query but am not sure this is the right forum. @i-am-shreya @ControlNet As in this part of the paper: `Lip...
@i-am-shreya @ControlNet, can you release the weights and scripts for the deepfake detection fine-tuning?
Hi, I want to know how to use the pre-trained models to test my own image dataset.