Wu Yiming

17 comments by Wu Yiming

> Hi @yiyixuxu, sorry for the late reply. I am not actively working on this but I think it makes sense to keep this PR around in case an open...

Same issue. Does anybody have a backup of the images on Google Drive or anywhere else?

For the m2t task, the testing process gets stuck at the 4th replication after a SIGTERM signal. Similar to t2m, the testing results fall short of those reported in the paper....

> @weleen hi! Has this issue been resolved? We met the same issue.

@LinghaoChan I think there are some mistakes in `get_motion_embeddings`.

In m2t.py: https://github.com/OpenMotionLab/MotionGPT/blob/0499f16df4ddde44dfd72a7cbd7bd615af1b1a94/mGPT/metrics/m2t.py#L325-L329

In t2m.py: https://github.com/OpenMotionLab/MotionGPT/blob/0499f16df4ddde44dfd72a7cbd7bd615af1b1a94/mGPT/metrics/t2m.py#L251-L254

`m_lens` are...
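For reference, a T2M-style `get_motion_embeddings` usually sorts the batch by motion length and divides `m_lens` by the VQ unit length before running the movement and motion encoders. The sketch below only illustrates that common pattern; the encoder interfaces, `unit_length`, and tensor shapes are assumptions, not the actual MotionGPT code:

```python
import torch

def get_motion_embeddings(motions, m_lens, movement_encoder, motion_encoder, unit_length=4):
    """Illustrative T2M-style motion embedding extraction (not the MotionGPT implementation).

    motions: (B, T, D) padded motion features
    m_lens:  (B,) true motion lengths in frames
    """
    # Sort by length (descending) so the recurrent motion encoder sees sorted lengths.
    m_lens, sort_idx = torch.sort(m_lens, descending=True)
    motions = motions[sort_idx]

    with torch.no_grad():
        # The movement encoder downsamples time by `unit_length`, so the lengths
        # passed to the motion encoder must be divided accordingly.
        movements = movement_encoder(motions).detach()
        m_lens = m_lens // unit_length
        motion_emb = motion_encoder(movements, m_lens)

    # Restore the original batch order so embeddings stay aligned with their captions.
    inv_idx = torch.argsort(sort_idx)
    return motion_emb[inv_idx]
```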

> Definitely possible! I worry there's a fairly narrow band of usefulness for a TAESVD, though, since for cheap previews you can run TAESD per-frame and for max quality you...
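For context, running TAESD per-frame on video latents with diffusers could look roughly like this; the latent layout, the fp16/CUDA setup, and the `madebyollin/taesd` checkpoint choice are assumptions, not something prescribed by the quoted comment:

```python
import torch
from diffusers import AutoencoderTiny

# Tiny autoencoder used for cheap preview decoding.
taesd = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float16).to("cuda")

@torch.no_grad()
def decode_video_latents_per_frame(latents):
    """Decode SVD-style video latents (B, F, C, H, W) frame by frame with TAESD."""
    b, f, c, h, w = latents.shape
    # Fold the frame dimension into the batch so each frame is decoded independently.
    frames = latents.reshape(b * f, c, h, w).to(taesd.dtype)
    images = taesd.decode(frames).sample  # decoded RGB frames, (B*F, 3, 8H, 8W)
    return images.reshape(b, f, *images.shape[1:])
```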

Hi @Peanut736, if you find that some IDs are missing from your downloaded MARS dataset, I think this is caused by differences between versions of the MARS dataset. My suggestion is using...
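As a quick sanity check, one could list the identity folders in the downloaded MARS copy and compare them against the ID list the code expects. This is only a sketch; the directory layout and the `expected_ids.txt` file are assumptions:

```python
from pathlib import Path

def find_missing_ids(mars_root, expected_ids_file):
    """Report person IDs expected by the code but absent from the downloaded MARS copy."""
    # MARS stores one folder per person ID under bbox_train / bbox_test (assumed layout).
    downloaded = {p.name for p in Path(mars_root).iterdir() if p.is_dir()}
    expected = {line.strip() for line in open(expected_ids_file) if line.strip()}
    missing = sorted(expected - downloaded)
    print(f"{len(missing)} IDs missing, e.g. {missing[:5]}")
    return missing

# Example (hypothetical paths):
# find_missing_ids("MARS/bbox_train", "expected_ids.txt")
```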

@ppriyank you can download the pretrained models from [BaiduDisk](https://pan.baidu.com/s/1RduGEbq-tmfLAHM0k3xa4A) (code: luxr).

@habibian Hi, have you obtained results similar to the released AnimateLCM-svd-xt? I fine-tuned the spatial transformer layers for 30k iterations, but the results are as blurry as what you've shown above....

The trainable parameters are set as follows:

```python
unet.requires_grad_(False)
parameters_list = []
# Customize the parameters that need to be trained; if necessary, you can uncomment them yourself.
for name, para...
```
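The snippet is cut off; the loop typically continues by re-enabling gradients on the layers being fine-tuned and collecting them for the optimizer. A minimal sketch of that pattern, assuming the goal is to train only the spatial transformer / attention blocks (the name filter and learning rate are assumptions, not the original settings):

```python
import torch

# `unet` comes from the surrounding training script.
unet.requires_grad_(False)
parameters_list = []
for name, param in unet.named_parameters():
    # Unfreeze only the spatial transformer / attention weights (assumed filter);
    # everything else stays frozen.
    if "transformer_blocks" in name or "attn" in name:
        param.requires_grad_(True)
        parameters_list.append(param)

optimizer = torch.optim.AdamW(parameters_list, lr=1e-5)  # learning rate is a placeholder
```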