MotionGPT
Why not update the vocabulary size of llama?
https://github.com/qiqiApink/MotionGPT/blob/main/generate_motion.py#L114 In this line, you run the following code:
tokens = torch.tensor([int(token) for token in output.split(',')]).cuda()
Does this mean you use the same vocabulary size as LLaMA and output the motion tokens by emitting a comma-separated number string? If so, why not increase the vocabulary size with dedicated motion tokens instead?
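To make the comparison concrete, here is a minimal sketch of the two strategies in question. The helper names are hypothetical, and the base vocabulary size of 32000 is the standard LLaMA tokenizer size, assumed here for illustration:

```python
# Sketch of the two tokenization strategies (hypothetical helper names).

def parse_motion_tokens(output: str) -> list[int]:
    """Strategy in generate_motion.py: the LLM emits motion token ids as a
    comma-separated number string, reusing LLaMA's existing vocabulary
    (digit and comma tokens), so no embedding resize or retraining of the
    embedding table is needed -- but each motion code costs several tokens."""
    return [int(tok) for tok in output.split(',')]

def extended_vocab_ids(motion_codebook_size: int,
                       base_vocab_size: int = 32000) -> list[int]:
    """Alternative: append one dedicated token per motion code after the
    base vocabulary. Each motion code then costs exactly one token, but the
    embedding and output layers must be resized and the new rows trained."""
    return list(range(base_vocab_size, base_vocab_size + motion_codebook_size))

print(parse_motion_tokens("12,407,3"))   # -> [12, 407, 3]
print(extended_vocab_ids(4))             # -> [32000, 32001, 32002, 32003]
```

The number-string approach avoids touching the pretrained embeddings, at the cost of longer sequences and the risk of malformed output (a non-numeric token would make `int()` raise); a dedicated-token vocabulary trades that for extra trainable parameters.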
I have the same question. Did you figure it out?