motion-diffusion-model
The official PyTorch implementation of the paper "Human Motion Diffusion Model"
I'm trying to understand the design choices made in this code. Specifically, I'm wondering why condition C is passed through a random mask and then a linear layer, while time...
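For context on this question: randomly masking the condition before projecting it is the standard classifier-free-guidance training trick, which lets one network serve both conditional and unconditional sampling. Below is a minimal sketch of that pattern, not the exact repository code; the names `CondEmbed` and `cond_mask_prob` are illustrative.

```python
import torch
import torch.nn as nn

class CondEmbed(nn.Module):
    """Sketch: randomly zero the condition (classifier-free guidance),
    then project it to the model width with a linear layer."""
    def __init__(self, cond_dim, latent_dim, cond_mask_prob=0.1):
        super().__init__()
        self.cond_mask_prob = cond_mask_prob
        self.proj = nn.Linear(cond_dim, latent_dim)

    def forward(self, cond, force_mask=False):
        bs = cond.shape[0]
        if force_mask:  # fully unconditional pass at sampling time
            cond = torch.zeros_like(cond)
        elif self.training and self.cond_mask_prob > 0.0:
            # Drop the condition for a random subset of the batch
            keep = torch.bernoulli(torch.full(
                (bs, 1), 1.0 - self.cond_mask_prob, device=cond.device))
            cond = cond * keep
        return self.proj(cond)
```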
Thank you for the great work!!! I have a question regarding the choice of loss functions and data representation. Specifically, I noticed the use of a 263-D representation with L2...
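As background to this question: the HumanML3D features are a per-frame 263-D vector, and the basic objective is a plain L2 (MSE) reconstruction of it. A minimal sketch follows; the masking by valid frame count is an assumption about how padded batches are handled.

```python
import torch

def masked_l2(pred, target, mask):
    """Mean-squared error over valid frames only.
    pred, target: (batch, 263, 1, frames); mask: (batch, 1, 1, frames) of 0/1."""
    sq = (pred - target) ** 2 * mask
    return sq.sum() / (mask.sum() * pred.shape[1]).clamp(min=1)
```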
Thanks for your great work. I wonder how **--diffusion_steps 50** influences the training process. Looking forward to your reply!
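For anyone else wondering: the number of diffusion steps fixes the length of the noise schedule, so T=50 takes far coarser noise increments per step than T=1000. A sketch of the scaled linear schedule from guided-diffusion illustrates the effect (the repo's default schedule may differ):

```python
import numpy as np

def linear_betas(T, beta_start=1e-4, beta_end=0.02):
    """Linear beta schedule, rescaled so total noise is comparable across T
    (the 'scaled linear' convention from guided-diffusion)."""
    scale = 1000.0 / T
    return np.linspace(scale * beta_start, scale * beta_end, T)

print(linear_betas(50)[:3])    # T=50: each step carries ~20x more noise
print(linear_betas(1000)[:3])  # T=1000: fine-grained steps
```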
Great piece! In the motion editing part, what do the input action and model kwargs include? My device has limited storage space and cannot obtain the complete HumanML3D dataset....
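For context: in editing scripts of this kind, `model_kwargs` is typically a nested dict under the key `'y'` carrying the condition and an inpainting mask. The exact field names below are an assumption based on the inpainting-style editing described in the paper, not a verified spec:

```python
import torch

batch, njoints, nfeats, frames = 1, 263, 1, 196
input_motion = torch.randn(batch, njoints, nfeats, frames)  # motion to edit

# Mask convention: True = keep the original frames, False = let the model fill them in.
inpainting_mask = torch.ones_like(input_motion, dtype=torch.bool)
inpainting_mask[..., frames // 4 : 3 * frames // 4] = False  # edit the middle

model_kwargs = {'y': {
    'text': ['a person walks forward'],  # optional text condition
    'lengths': torch.tensor([frames]),   # valid frames per sample
    'inpainted_motion': input_motion,    # frames to preserve
    'inpainting_mask': inpainting_mask,
}}
```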
Thanks for your great work. I am trying to recover the SMPL mesh directly from the output. The output is a 263-dimensional vector including 21 6D joint rotations. But when I...
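For anyone hitting the same question: the 6-D rotation blocks follow Zhou et al.'s continuous rotation representation, and converting them back to rotation matrices (which can then go to axis-angle for SMPL) looks like this sketch:

```python
import torch
import torch.nn.functional as F

def rot6d_to_matrix(x6: torch.Tensor) -> torch.Tensor:
    """(..., 6) 6-D rotation (Zhou et al., CVPR 2019) -> (..., 3, 3) matrix."""
    a1, a2 = x6[..., :3], x6[..., 3:]
    b1 = F.normalize(a1, dim=-1)
    # Gram-Schmidt: remove the b1 component from a2, then normalize
    b2 = F.normalize(a2 - (b1 * a2).sum(-1, keepdim=True) * b1, dim=-1)
    b3 = torch.cross(b1, b2, dim=-1)
    return torch.stack([b1, b2, b3], dim=-2)

rots = rot6d_to_matrix(torch.randn(21, 6))  # 21 joints -> 21 rotation matrices
```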
Hello! First, I want to say thank you for your awesome product! I want to utilize your code in uncond mode for the specific purpose of human skeleton reconstruction. However,...
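Related background for uncond mode: once the condition is randomly masked during training (as in the sketch above), the same network can be queried fully unconditionally, and conditioned sampling blends the two predictions via classifier-free guidance. A sketch of the guidance step; the `force_mask` call signature and `guidance_scale` value are assumptions:

```python
def guided_pred(model, x_t, t, cond, guidance_scale=2.5):
    """Classifier-free guidance: blend conditional and unconditional outputs.
    guidance_scale=0 reduces to purely unconditional sampling."""
    pred_uncond = model(x_t, t, cond, force_mask=True)
    pred_cond = model(x_t, t, cond, force_mask=False)
    return pred_uncond + guidance_scale * (pred_cond - pred_uncond)
```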
Thanks for your great work! I noticed that you use a different mean and std for training and for evaluation. It seems that t2m_mean.npy and t2m_std.npy are the mean and std of...
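For context, this is the usual two-stat-sets situation: evaluators often ship their own normalization statistics, so generated motion must be de-normalized with the training stats and re-normalized with the evaluator's before computing metrics. A minimal sketch (file names follow the question and are placeholders):

```python
import numpy as np

mean = np.load('t2m_mean.npy')  # stats used to normalize training features
std = np.load('t2m_std.npy')

def normalize(x):
    return (x - mean) / std

def denormalize(x):
    return x * std + mean
```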
File "/home/bixueting/anaconda3/envs/mdm/lib/python3.7/site-packages/torch/optim/optimizer.py", line 89, in wrapper return func(*args, **kwargs) File "/home/bixueting/anaconda3/envs/mdm/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context return func(*args, **kwargs) File "/home/bixueting/anaconda3/envs/mdm/lib/python3.7/site-packages/torch/optim/lbfgs.py", line 311, in step orig_loss = closure() File "/home/bixueting/anaconda3/envs/mdm/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line...
Is there a way to generate a body mesh from the model output in real time? Isn't the model output compatible with SMPL parameters, so that we just need a formula like `Vertices_new...
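On the real-time question: once the output is converted to SMPL pose parameters, vertices come from a forward pass of an SMPL layer rather than a single closed-form formula, because linear blend skinning depends on the whole kinematic chain. A sketch with the smplx package; the model path is a placeholder:

```python
import torch
import smplx

model = smplx.create('path/to/smpl_models', model_type='smpl')  # placeholder path

betas = torch.zeros(1, 10)         # body shape coefficients
global_orient = torch.zeros(1, 3)  # root rotation (axis-angle)
body_pose = torch.zeros(1, 69)     # 23 body joints x 3 (axis-angle)

with torch.no_grad():
    out = model(betas=betas, global_orient=global_orient, body_pose=body_pose)
vertices = out.vertices  # (1, 6890, 3) mesh vertices via linear blend skinning
```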
Hi! After replacing an eight-layer Transformer encoder with Mamba, the training loss fails to decrease. Could it be that Mamba doesn't perform as effectively as the Transformer in the diffusion...
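A note on this question: one common pitfall in such a swap is the interface itself, since `nn.TransformerEncoder` is (seq, batch, dim) by default while mamba-ssm is batch-first; another is that Mamba's scan is unidirectional, whereas a Transformer encoder attends bidirectionally, which changes what the denoiser can see. A sketch of an interface-compatible wrapper, assuming the `mamba_ssm` package:

```python
import torch.nn as nn
from mamba_ssm import Mamba

class MambaEncoder(nn.Module):
    """Stack of Mamba blocks matching nn.TransformerEncoder's (seq, batch, dim) I/O."""
    def __init__(self, d_model, num_layers=8):
        super().__init__()
        self.layers = nn.ModuleList([Mamba(d_model=d_model) for _ in range(num_layers)])
        self.norms = nn.ModuleList([nn.LayerNorm(d_model) for _ in range(num_layers)])

    def forward(self, x):           # x: (seq, batch, dim), like TransformerEncoder
        x = x.transpose(0, 1)       # mamba-ssm expects batch-first: (batch, seq, dim)
        for norm, layer in zip(self.norms, self.layers):
            x = x + layer(norm(x))  # pre-norm residual block
        return x.transpose(0, 1)
```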