EAMM
Code for paper 'EAMM: One-Shot Emotional Talking Face via Audio-Based Emotion-Aware Motion Model'
!python run.py --config /content/drive/MyDrive/EAMM/config/train_part2.yaml --mode train_part1 --checkpoint log/124_52000.pth.tar
/usr/local/lib/python3.10/dist-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3526.)
return _VF.meshgrid(tensors,...
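The warning above is harmless, but it can be silenced on recent PyTorch versions by passing the `indexing` argument explicitly wherever `torch.meshgrid` is called. A minimal sketch follows; the helper name is hypothetical and only mirrors the kind of coordinate-grid construction that typically triggers this warning, not the exact code in this repository:

```python
import torch

# Hypothetical helper mirroring a coordinate-grid construction that
# triggers the meshgrid UserWarning; the real call sits inside the
# model's keypoint utilities, not in this snippet.
def make_coordinate_grid(h, w, dtype=torch.float32):
    y = torch.arange(h, dtype=dtype)
    x = torch.arange(w, dtype=dtype)
    # indexing="ij" keeps the current row-major behaviour and removes
    # the UserWarning on PyTorch >= 1.10.
    yy, xx = torch.meshgrid(y, x, indexing="ij")
    return torch.stack([xx, yy], dim=-1)  # shape (h, w, 2)

grid = make_coordinate_grid(4, 4)
print(grid.shape)  # torch.Size([4, 4, 2])
```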
Thanks for your contribution! I was wondering where I should download the datasets for training and how to organize them. It seems there is no data at the link you provided.
Thank you for the great work. I wanted to ask about the "--pose_given" parameter used in demo.py: how do I obtain this for an image?
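For reference, one common way to estimate a head pose (yaw/pitch/roll) for a single image is to run any 2D facial landmark detector and solve a PnP problem with OpenCV. Whether this matches the exact format demo.py expects for --pose_given is an assumption; the sketch below is a hypothetical starting point, not the authors' pipeline:

```python
import cv2
import numpy as np

# Generic 3D reference points (nose tip, chin, eye corners, mouth corners)
# commonly used for head-pose estimation; units are arbitrary.
MODEL_3D = np.array([
    (0.0, 0.0, 0.0),           # nose tip
    (0.0, -330.0, -65.0),      # chin
    (-225.0, 170.0, -135.0),   # left eye, left corner
    (225.0, 170.0, -135.0),    # right eye, right corner
    (-150.0, -150.0, -125.0),  # left mouth corner
    (150.0, -150.0, -125.0),   # right mouth corner
], dtype=np.float64)

def estimate_pose(landmarks_2d, image_size):
    """landmarks_2d: (6, 2) array in the same order as MODEL_3D;
    image_size: (height, width). Returns (yaw, pitch, roll) in degrees."""
    h, w = image_size
    focal = float(w)  # rough focal-length guess
    camera = np.array([[focal, 0.0, w / 2.0],
                       [0.0, focal, h / 2.0],
                       [0.0, 0.0, 1.0]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(MODEL_3D,
                                  landmarks_2d.astype(np.float64),
                                  camera, np.zeros(4))
    rot, _ = cv2.Rodrigues(rvec)
    # Decompose the rotation matrix into Euler angles (degrees).
    angles, *_ = cv2.RQDecomp3x3(rot)
    pitch, yaw, roll = angles
    return yaw, pitch, roll
```

The landmarks themselves can come from any detector (dlib, face_alignment, MediaPipe, etc.) as long as the six points are supplied in the order of MODEL_3D.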
Hi! Thank you for sharing your code. When I run the test code, the right eye is much larger than the left eye in the result. Is that correct or...
I used your pre-trained model, but the results are very strange. Do you happen to know why this is?
Hello, I found a few errors in train.py:
1. Traceback (most recent call last):
   File "/fsx/nikitadrobyshev/EAMM/run.py", line 121, in
   train_part1(config, generator, discriminator, kp_detector, kp_detector_a, audio_feature, opt.checkpoint, opt.audio_checkpoint, log_dir, dataset, test_dataset, opt.device_ids, name)...
Hello, I found a few errors in the run.py file:
1. There is an extra space in front of the train_part2 function call on the last line of run.py.
2. ImportError: cannot import...
Hello! Thank you for sharing the code. Could you please also share the dataset preprocessing steps? I want to train your model from scratch. Thank you!