
An end-to-end library for editing and rendering motion of 3D characters with deep learning [SIGGRAPH 2020]

64 deep-motion-editing issues

@HalfSummer11 @kfiraberman Hello, thanks for your excellent work. I have read your published paper, but I have some questions about foot skating. ![image](https://user-images.githubusercontent.com/22880160/138036528-97f74f5c-4538-47aa-8f44-52c7df9a1071.png) (1) It is said in the paper...
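A common way to quantify foot skating is the average horizontal foot speed over frames where the foot is in ground contact. The sketch below is illustrative only, assuming Y-up world-space foot positions and a hypothetical height threshold; it is not necessarily the exact metric used in the paper.

```python
import numpy as np

def foot_skating(foot_pos, height_thresh=0.05, fps=30):
    """Mean horizontal foot speed during ground-contact frames.

    foot_pos: (T, 3) world-space foot positions, Y-up (an assumption).
    height_thresh and fps are hypothetical defaults.
    """
    contact = foot_pos[:, 1] < height_thresh          # frames near the ground
    # Horizontal (XZ-plane) speed between consecutive frames.
    speed = np.linalg.norm(np.diff(foot_pos[:, [0, 2]], axis=0), axis=1) * fps
    mask = contact[:-1]                               # diff drops one frame
    return speed[mask].mean() if mask.any() else 0.0
```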

Thank you for your great work! In the style transfer model, I noticed that the training dataset is in BVH format, while JSON files can be used at test time. How...

How did you generate the stationary wave style transfer example seen at 2:30 of the YouTube video? https://youtu.be/m04zuBSdGrc?t=150 I am able to run the demos and apply style transfer between...

I hit an error when running `python datasets/preprocess.py`: my skeleton has one fewer spine bone than the original data. Will this matter? ![2021-09-14 10-24-51 screenshot](https://user-images.githubusercontent.com/83195382/133184342-2e01a123-babd-4e33-a9eb-06147a7cfa48.png) Could you give me some advice? I'm still new to this area.
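One quick way to see exactly which bones differ from the reference skeleton is to parse the joint names out of both BVH files and diff them. A minimal sketch, with hypothetical file names:

```python
import re

def bvh_joint_names(path):
    """Collect joint names from the HIERARCHY section of a BVH file."""
    with open(path) as f:
        return re.findall(r'(?:ROOT|JOINT)\s+(\S+)', f.read())

ref = bvh_joint_names('reference_character.bvh')  # a known-good skeleton
mine = bvh_joint_names('my_character.bvh')        # the skeleton in question
print('missing:', set(ref) - set(mine))
print('extra:  ', set(mine) - set(ref))
```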

Applying the style transfer examples seems to reduce the sampling rate. For example, trying to apply a 'childlike' walking style to a neutral running animation to get a childlike running animation...
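If the output really does come back at a lower frame rate, one pragmatic workaround is to resample its channels back up to the input rate. A rough sketch using linear interpolation over the raw BVH channel values (quaternion slerp would be more faithful for rotations):

```python
import numpy as np
from scipy.interpolate import interp1d

def resample_motion(motion, src_fps, dst_fps):
    """Resample per-frame channels from src_fps to dst_fps.

    motion: (T, C) array of BVH channel values (root positions and
    Euler angles). Linear interpolation is only a rough fix here.
    """
    n_src = motion.shape[0]
    n_dst = int(n_src * dst_fps / src_fps)
    src_t = np.arange(n_src) / src_fps
    dst_t = np.arange(n_dst) / dst_fps
    return interp1d(src_t, motion, axis=0, fill_value='extrapolate')(dst_t)
```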

Hi PeiZhuo, thanks for sharing such a great project. I ran into a problem when running motion retargeting on our own motion data and would like to ask for advice. Our processing pipeline is as follows:
1. Convert the detected athlete's 3D pose into a BVH file.
2. Use the generated BVH file as input and run preprocessing + retargeting with the pretrained model provided in your project. The result is far from what we expected; see the figure below (gt and output). ![image](https://user-images.githubusercontent.com/80259625/128639180-4a5d52e7-62eb-4c1d-af95-29611ba69e28.png) The figure shows the output and ground-truth BVH files visualized in Blender, where red is the ground truth and yellow is the output; the current pose is an athlete bending forward, ready to start running.

Details on the pose-to-BVH conversion and the handling of the T-pose and joints:
1) For pose2bvh we use this project: https://github.i.agoralab.co/dachr8/pose2bvh/tree/main/bvh_skeleton. The input is 3D positions and the output is the corresponding BVH file. Note that we made two changes in this step: a. we use a right-handed coordinate system instead of a left-handed one (Y-up, Z-forward, X-right); see the sketch after this excerpt; b. since we only detect 14 skeleton keypoints, we interpolate to obtain the same 23 keypoints as your method.
2) Our generated BVH file keeps the same topology as your input, with the T-pose shown below (displayed Z-up in Blender); the joint order stored in the BVH file is also identical.
3) In the experiments we first keep the input and output topologies identical, so as to get the whole pipeline running first...
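On point 1)a above: mirroring any single axis flips the handedness of a coordinate frame, which is usually all the conversion amounts to for raw keypoints. Which axis to mirror depends on the capture convention, so treat this sketch as an illustration rather than a fixed recipe:

```python
import numpy as np

def to_right_handed(points, axis=0):
    """Flip one axis to change the handedness of 3D keypoints.

    points: (..., 3) array of positions; axis 0 (X) is an arbitrary
    choice here and must match your source convention.
    """
    out = np.asarray(points, dtype=float).copy()
    out[..., axis] *= -1.0
    return out
```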

Dear @PeizhuoLi , I have trained on my custom action dataset. However, the retargeting result only has 16 frames. Could you please help me figure out how to improve this? ![image](https://user-images.githubusercontent.com/2667977/128455619-14f6aade-47f3-4bda-8baa-f6b8133543d9.png) Thanks...

Hello, and thanks for open-sourcing the code of this great project! I have run the training and testing code successfully, but I have a question about the comparison with the baseline methods. In Table 2 of the paper, the Intra-Structural error of copy rotations is 8.86. Is the global root position also copied directly to the target skeleton here? If I understand the code correctly, I replaced L303-L304 in `compute_test_result()` with `self.writer[dst][i].write_raw(gt_denorm[src][0, ...], 'quaternion', os.path.join(dst_path, '{}_{}.bvh'.format(self.id_test, src)))` for testing, but the result I got was 32.2, which is very different from the paper's number. Is my understanding of the code or the paper wrong? Looking forward to your reply. https://github.com/DeepMotionEditing/deep-motion-editing/blob/19604abdc0ead66f8c82d9211b8c5862c6a68089/retargeting/models/architecture.py#L282-L307
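For context, one common reading of a "copy rotations" baseline is sketched below: local joint rotations are copied verbatim, and the only open choice is whether the global root trajectory is copied as-is or rescaled (e.g. by a height ratio), which is exactly what this question is probing. Both variants, purely as illustration; which one the paper uses must be checked against its code:

```python
import numpy as np

def copy_rotations(src_quats, src_root_pos, height_ratio=1.0):
    """src_quats: (T, J, 4) local joint rotations; src_root_pos: (T, 3)."""
    tgt_quats = src_quats.copy()               # rotations copied verbatim
    root_copied = src_root_pos.copy()          # variant 1: copy directly
    root_scaled = src_root_pos * height_ratio  # variant 2: rescale by height
    return tgt_quats, root_copied, root_scaled
```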

I used part of the Human3.6M BVH dataset as group A, and the ***_m.bvh files from the Mixamo dataset you provided as group B of the training set. After training, testing on another part of Human3.6M gives correct results. But when I instead use the project [VideoTo3dPoseAndBvh](https://github.com/HW140701/VideoTo3dPoseAndBvh) to extract a BVH from video (I verified that its structure matches Human3.6M's) as input, the retargeted result is garbled motion. Could you suggest any ideas for solving this?

Hi there :) is there any way to avoid or correct the foot sliding of the retargeted characters seen in the video? - the red skeletons are unseen target characters -...
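A simple post-process that removes most visible foot slide is to detect contact phases and pin the foot's horizontal position for their duration (a full solution would then run IK on the leg to satisfy the pinned target). A naive sketch, assuming Y-up positions and a hypothetical contact threshold:

```python
import numpy as np

def pin_feet(foot_pos, height_thresh=0.05):
    """Hold the horizontal foot position fixed during each contact phase.

    foot_pos: (T, 3) world-space foot positions, Y-up (an assumption).
    Only the contact/anchor logic is shown; leg IK is left out.
    """
    out = foot_pos.copy()
    anchor = None
    for t in range(len(out)):
        if out[t, 1] < height_thresh:          # foot near the ground
            if anchor is None:
                anchor = out[t, [0, 2]].copy() # contact phase starts
            out[t, [0, 2]] = anchor            # freeze horizontal motion
        else:
            anchor = None                      # contact phase ends
    return out
```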