Visualization results
Hello, I found that the video results visualized with the method you provided differ from the demo: the facial expressions and posture movements appear in different regions. Could you also tell me how to run the training and testing code?
Visualization
- Yes, the provided visualization script for BEAT directly visualizes the generated expressions and gestures via face meshes and body skeletons in Blender.
- In our paper, we visualize the generated expressions and gestures of BEAT using MetaHuman in Unreal Engine 5. This is realized by retargeting the current BVH skeleton to the MetaHuman skeleton for gesture animation, and feeding the expression parameters to the ARKit module of MetaHuman for expression animation; a rough sketch of the expression-export step is given after this list.
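For reference, here is a minimal sketch of what exporting the expression parameters for the ARKit module could look like, assuming the generated parameters already follow the ARKit blendshape convention used by BEAT. The helper name `export_arkit_csv`, the truncated blendshape list, and the CSV column layout are all illustrative; check the exact format your UE importer expects.

```python
import csv

import numpy as np

# Hypothetical subset of the 52 ARKit blendshape names;
# the full list is in Apple's ARKit documentation.
ARKIT_NAMES = ["jawOpen", "mouthSmileLeft", "mouthSmileRight", "browInnerUp"]

def export_arkit_csv(expr, fps, out_path):
    """Dump per-frame expression parameters (T x len(ARKIT_NAMES)) to CSV.

    Assumes `expr` is already in ARKit blendshape order with values in
    [0, 1]; the column layout expected by your UE importer may differ.
    """
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame", "time"] + ARKIT_NAMES)
        for t, row in enumerate(np.asarray(expr)):
            writer.writerow([t, t / fps] + [f"{v:.6f}" for v in row])
```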
Training and testing
- I have updated the README.md and added the commands for training and testing. Please also reinstall the new environment.
- For dataset preprocessing, please refer to https://github.com/JeremyCJM/DiffSHEG/issues/1#issuecomment-2073909211 and the original BEAT GitHub repo. We extract motion clips from the SHOW data in a similar way to BEAT, via lmdb; a sketch of the lmdb caching step follows this list.
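As a rough illustration of the lmdb caching step (not the repo's actual preprocessing code), here is a minimal sketch; the helper name `build_motion_lmdb` and the per-sample tuple layout are hypothetical.

```python
import pickle

import lmdb

def build_motion_lmdb(clips, out_dir, map_size=1 << 34):
    """Write a list of fixed-length motion clips into an lmdb cache.

    `clips` is assumed to be an iterable of (pose, facial, audio) numpy
    arrays already cut to the clip length used for training.
    """
    env = lmdb.open(out_dir, map_size=map_size)
    with env.begin(write=True) as txn:
        for i, sample in enumerate(clips):
            key = f"{i:08d}".encode("ascii")  # zero-padded index as key
            txn.put(key, pickle.dumps(sample))
    env.close()
```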
Thank you for providing the training command. I encountered the following issue when using it: No such file or directory: 'data/BEAT/beat_cache/beat_4english_15_141/weights/GesAxisAngle_Face_300.bin'
Is this because the dataset was not preprocessed using the preprocessing .ipynb file provided in BEAT?
The "GesAxisAngle_Face_300.bin" is an autoencoder checkpoint for computing the Frechet Distance metrics. I will upload the autoencoder checkpoints later. You can temporally comment them out without computing Frechet Distance metrics during training.
I encountered a new issue when using ae_100.bin in CaMN:

Traceback (most recent call last):
  File "/home/ydw/anaconda3/envs/Talko/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 74, in _wrap
    fn(i, *args)
  File "/media/ydw/sda/PycharmProjects/Diffsheg/runner.py", line 303, in main_worker
    train_dataset = __import__(f"datasets.{opt.dataset_name}", fromlist=["something"]).BeatDataset(opt, "train")
  File "/media/ydw/sda/PycharmProjects/Diffsheg/datasets/beat.py", line 116, in __init__
    self.aud_lmdb_env = lmdb.open(self.aud_feat_path, readonly=True, lock=False)
lmdb.Error: data/BEAT/beat_cache/beat_4english_15_141/train/aud_feat_cache/hubert_large_ls960_ft: No such file or directory
This is the lmdb directory for precomputed HuBERT features. You can refer to the function below to create the HuBERT feature cache for the training and testing audio: https://github.com/JeremyCJM/DiffSHEG/blob/3ebf3058f48cba3da9146afb7623e9ec1ab9e9a5/trainers/ddpm_beat_trainer.py#L1430
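For illustration, here is a rough sketch of such a caching step, assuming the `facebook/hubert-large-ls960-ft` checkpoint from Hugging Face (matching the `hubert_large_ls960_ft` directory name). The helper name and lmdb key layout are hypothetical, so follow the linked repo function for the exact keys and feature settings.

```python
import pickle

import lmdb
import torch
import torchaudio
from transformers import HubertModel, Wav2Vec2FeatureExtractor

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/hubert-large-ls960-ft")
model = HubertModel.from_pretrained("facebook/hubert-large-ls960-ft").eval()

def cache_hubert(wav_paths, out_dir):
    """Extract HuBERT features for each wav and store them in an lmdb cache."""
    env = lmdb.open(out_dir, map_size=1 << 36)
    with env.begin(write=True) as txn, torch.no_grad():
        for i, path in enumerate(wav_paths):
            wav, sr = torchaudio.load(path)
            # Mix to mono and resample to HuBERT's expected 16 kHz.
            wav = torchaudio.functional.resample(wav.mean(0), sr, 16000)
            inputs = extractor(wav.numpy(), sampling_rate=16000, return_tensors="pt")
            feats = model(inputs.input_values).last_hidden_state.squeeze(0)  # (T, 1024)
            txn.put(f"{i:08d}".encode("ascii"), pickle.dumps(feats.numpy()))
    env.close()
```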
Thank you for this awesome work! I encountered the following issue when using this command: No such file: ges_axis_angle_300.bin. I also noticed that the metrics in the experimental results you provided differ from those reported in BEAT. I am wondering whether you used a different motion autoencoder for testing. If so, could you kindly provide the autoencoder checkpoint file? I would greatly appreciate it.
Hi @lovemino and @yangdaowu, you can find all the autoencoder weights here: https://drive.google.com/file/d/1Wm2WMlacwStFaciCh7UlhQeyA3E2yEnj/view?usp=sharing . Note that the autoencoders are only used to compute features for the Fréchet Distance metrics.
Hi, could you please share the code for visualizing BEAT motion in UE?
Hello, how did you finally solve this problem? I would be very grateful for any pointers.