Hongwei Han
https://github.com/GuyTevet/motion-diffusion-model/blob/8139dda55d90a58aa5a257ebf159b2ecfb78c632/model/mdm.py#L151C8-L151C8

```
class MDM(nn.Module):
    ...
    def forward(self, x, timesteps, y=None):
        """
        x: [batch_size, njoints, nfeats, max_frames], denoted x_t in the paper
        timesteps: [batch_size] (int)
        """
        bs, njoints, nfeats, nframes = ...
```
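For reference, a minimal sketch of inputs matching the shapes the docstring describes (the sizes below are placeholders, not values taken from the repo):

```
import torch

bs, njoints, nfeats, nframes = 2, 263, 1, 196  # placeholder sizes only
x = torch.randn(bs, njoints, nfeats, nframes)  # x_t in the paper
timesteps = torch.randint(0, 1000, (bs,))      # one integer step per sample
```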
@GuyTevet I've made a small change to the code to cache the text embedding during inference
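A minimal sketch of that kind of caching, assuming the text encoder is otherwise called once per denoising step; `CachedTextEncoder` and `encoder` are hypothetical stand-ins, not the repo's actual classes:

```
import torch
import torch.nn as nn

class CachedTextEncoder(nn.Module):
    """Memoize text embeddings per prompt string.

    During sampling, the same prompt would otherwise be re-encoded at
    every denoising step; caching does that work only once per prompt.
    """
    def __init__(self, encoder: nn.Module):
        super().__init__()
        self.encoder = encoder  # stand-in for MDM's text encoder (e.g. CLIP)
        self._cache = {}

    @torch.no_grad()
    def forward(self, prompts):
        # Encode only the prompts we have not seen before.
        missing = [p for p in prompts if p not in self._cache]
        if missing:
            embeds = self.encoder(missing)  # assumed shape [len(missing), dim]
            for p, e in zip(missing, embeds):
                self._cache[p] = e
        return torch.stack([self._cache[p] for p in prompts])
```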
https://github.com/qiqiApink/MotionGPT/blob/main/generate_motion.py#L114 In this line, you run the following code:

```
tokens = torch.tensor([int(token) for token in output.split(',')]).cuda()
```

Does this mean you use the same vocabulary size as LLaMA, and...
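The quoted line assumes `output` is a clean comma-separated string of integer token ids. A minimal sketch of that parsing (the `parse_motion_tokens` helper is hypothetical; `.cuda()` is dropped so it runs on CPU, and non-numeric fragments are skipped where the original line would raise `ValueError`):

```
import torch

def parse_motion_tokens(output):
    """Parse a comma-separated id string, e.g. "12, 87, 3" -> tensor([12, 87, 3])."""
    ids = [int(tok) for tok in output.split(',') if tok.strip().isdigit()]
    return torch.tensor(ids, dtype=torch.long)

print(parse_motion_tokens("12, 87, 3"))  # tensor([12, 87,  3])
```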