
Official PyTorch implementation of the paper "MotionCLIP: Exposing Human Motion Generation to CLIP Space"

20 MotionCLIP issues

A question like this: ![image](https://user-images.githubusercontent.com/47704679/187472548-4675c4bb-5475-479b-b5cb-081026ab2a30.png) ![image](https://user-images.githubusercontent.com/47704679/187473132-43e88856-865e-4469-86bd-5807f7b0c6d2.png)

I tested the `text_motion` script according to the readme, but the body in the rendered result is inverted. I didn't modify any config and have no idea whether it is...

Hi, are there any clues about why the reproduced motions are static? I believe I followed the instructions carefully. I also mentioned the issue here: https://github.com/GuyTevet/MotionCLIP/issues/5#issuecomment-1486213703 Shunlin

Hello author, which version of ffmpeg are you using?

When running `python -m src.visualize.text2motion ./exps/paper-model/checkpoint_0100.pth.tar --input_file assets/paper_texts.txt` I get that error. The problem is in `src/datasets/dataset.py`, line 371; I can run the script by assigning...

For a certain motion sequence, I encode it with the motion encoder (the `encode_motions()` function in visualize.py) and get the CLIP features of this sequence. Then, I decode the...
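The round-trip this question describes can be sketched as follows. This is a minimal stand-in, not MotionCLIP's actual modules: `encoder`/`decoder` here are plain linear layers, and the 150-d per-frame feature size and 512-d latent size are assumptions (512 matches CLIP's embedding width).

```python
import torch
import torch.nn as nn

# Stand-ins (assumptions, not the repo's real encode_motions()/decoder):
# flattened 150-d per-frame motion features, 512-d CLIP-space latents.
encoder = nn.Linear(150, 512)
decoder = nn.Linear(512, 150)

motion = torch.randn(60, 150)        # 60 frames of motion features
latent = encoder(motion)             # "CLIP features" of the sequence
recon = decoder(latent)              # decode back to motion
err = (recon - motion).pow(2).mean() # reconstruction error of the round trip
```

Comparing `recon` against the original `motion` (e.g. via `err`) is a quick sanity check on whether the decoded sequence actually matches the encoded one.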

Running `conda env create -f environment.yml` (`conda --version`: 4.13.0):
```
Collecting package metadata (repodata.json): ...working... done
Solving environment: ...working... failed
ResolvePackageNotFound:
  - _openmp_mutex==4.5=1_gnu
  - lcms2==2.12=h3be6417_0
  - lame==3.100=h7b6447c_0
  - ...
```
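A common cause of `ResolvePackageNotFound` for fully pinned entries like `lcms2=2.12=h3be6417_0` is that the exact build string is unavailable for the current platform. One hypothetical workaround (an assumption, not confirmed by the maintainers) is to strip the trailing `=<build>` from each dependency so conda re-resolves the builds:

```shell
# Strip the trailing "=<build-hash>" from pinned conda dependencies,
# keeping the "name=version" part so versions stay fixed.
strip_builds() {
  sed -E 's/^([[:space:]]*-[[:space:]][^=]+=+[^=]+)=[^=[:space:]]+$/\1/' "$1"
}
# Usage (writes a new file, then create the env from it):
# strip_builds environment.yml > environment.nobuild.yml
# conda env create -f environment.nobuild.yml
```

Versions may then resolve to platform-appropriate builds, though behavior is not guaranteed to be byte-identical to the pinned environment.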

When I run `python -m src.datasets.amass_parser --dataset_name amass`, I get an error: `RuntimeError: Sizes of tensors must match except in dimension 2. Expected size 9932 but got size 52 for...`

Was CLIP fine-tuned while training the checkpoint? I am unable to reproduce the results with CLIP frozen (I didn't try with it unfrozen yet).
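For reference, "CLIP frozen" typically means excluding its parameters from gradient updates. A minimal sketch, assuming a PyTorch setup; `clip_text_encoder` and `motion_encoder` are stand-in modules, not the repo's actual classes (in practice the former would come from `clip.load(...)`):

```python
import torch
import torch.nn as nn

clip_text_encoder = nn.Linear(512, 512)  # stand-in for CLIP's text encoder
motion_encoder = nn.Linear(150, 512)     # stand-in trainable motion encoder

# Freeze CLIP: no gradients, and fixed dropout/norm statistics.
for p in clip_text_encoder.parameters():
    p.requires_grad = False
clip_text_encoder.eval()

# Hand the optimizer only the parameters that still require gradients.
trainable = [p for m in (clip_text_encoder, motion_encoder)
             for p in m.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)
```

Whether the released checkpoint was trained this way or with CLIP unfrozen is exactly what the issue asks; the sketch only shows the frozen variant.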

![image](https://github.com/GuyTevet/MotionCLIP/assets/52557089/d568e958-de4b-472f-a7c6-c49f61a67f6b) In Fig. 5, each motion contains many action fragments. An example fragment is shown below. ![image](https://github.com/GuyTevet/MotionCLIP/assets/52557089/0ddbb34a-f0fa-4344-a1fe-68a8fe2b6b48) How are the actions of this fragment extracted from the results?