MotionGPT

The official PyTorch implementation of the paper "MotionGPT: Finetuned LLMs are General-Purpose Motion Generators"

13 MotionGPT issues

Hello, I didn't find the code for the new benchmark described in this paper: "Consequently, we calculate the Nearest Euclidean Distance for each key token relative to the corresponding ground truth...

@qiqiApink Thank you for sharing your impressive work! In your paper, it is mentioned that the input can be composed of **an initial pose** + text to generate subsequent motions....

Hi there! I'm trying to develop based on your fancy project, but I have some questions. I want to figure out the GPU requirements to run your model; I think the raw...

@qiqiApink Thanks for your work. I ran into the same issue as https://github.com/qiqiApink/MotionGPT/issues/7. After reading the discussion in that issue, I followed the instructions in https://github.com/qiqiApink/MotionGPT/issues/7#issuecomment-1697603576. However, the motion is strange....

Hi all, thanks for your work. I'm encountering an issue when I attempt to run the demo: 1. **ValueError:** at line 114, `tokens = torch.tensor([int(token) for token in output.split(',')]).cuda()`, I...
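
A ValueError on that line usually means `output` is not a plain comma-separated list of integers. As a hypothetical workaround (not the repository's own fix), one could extract the numeric tokens before building the tensor; `parse_motion_tokens` below is an assumed helper name, a minimal sketch rather than a definitive patch.

```python
import re

import torch


def parse_motion_tokens(output: str) -> torch.Tensor:
    """Hypothetical helper: pull integer motion-token ids out of raw LLM text."""
    # int(token) fails if the model emits anything besides digits and commas,
    # so extract the digit runs explicitly instead of splitting on ','.
    token_ids = [int(t) for t in re.findall(r"\d+", output)]
    if not token_ids:
        raise ValueError(f"No motion token ids found in output: {output!r}")
    return torch.tensor(token_ids).cuda()
```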

fix a mistake in README @qiqiApink

Hi, thanks for the great work. Can you share the quantized LLaMA model? Thanks, Ofer

Hi, thanks for your work. I'm encountering an issue when I attempt to run eval.py. At eval.py line 112, `model.load_state_dict(pretrained_checkpoint, strict=False)` raises `TypeError: Expected state_dict to be dict-like, got .`
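
This TypeError is typically raised when the object passed to `load_state_dict` is not a mapping, e.g. `torch.load` returned something unexpected or the weights are nested under another key. The sketch below is a hypothetical way to inspect and unwrap the checkpoint; the `"state_dict"` key is an assumption, not necessarily how this repository saves its checkpoints.

```python
import torch


def load_pretrained(model: torch.nn.Module, checkpoint_path: str) -> None:
    """Hypothetical helper: unwrap a checkpoint before calling load_state_dict."""
    ckpt = torch.load(checkpoint_path, map_location="cpu")
    # If torch.load did not return a mapping (e.g. the path pointed to the
    # wrong file), load_state_dict raises "Expected state_dict to be dict-like".
    if not isinstance(ckpt, dict):
        raise TypeError(f"Checkpoint is {type(ckpt)}, expected a dict-like object")
    # Some checkpoints nest the weights under a "state_dict" key (assumption).
    state_dict = ckpt.get("state_dict", ckpt)
    model.load_state_dict(state_dict, strict=False)
```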

https://github.com/qiqiApink/MotionGPT/blob/main/generate_motion.py#L114 In this line, you run the following code:
```
tokens = torch.tensor([int(token) for token in output.split(',')]).cuda()
```
Does it mean you use the same vocabulary size as LLaMA, and...
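
As a rough sanity check (not code from the repository), the parsed indices can be compared against the motion VQ-VAE codebook size; the value 512 below is an assumed placeholder, and indices at or above it would suggest the text vocabulary and motion codebook are being mixed up.

```python
def check_motion_tokens(output: str, codebook_size: int = 512) -> list:
    """Hypothetical check: are the generated ids valid motion-codebook indices?"""
    token_ids = [int(t) for t in output.split(",") if t.strip().isdigit()]
    out_of_range = [t for t in token_ids if t >= codebook_size]
    if out_of_range:
        print(f"{len(out_of_range)} ids exceed the assumed codebook size "
              f"{codebook_size}: {out_of_range[:10]}")
    return token_ids
```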

The t2m dataset has 22 joints while the KIT dataset has 21, so the two datasets are trained separately, and the example code in the README uses the t2m dataset. Does this mean only...