deep-motion-editing

Can't we do style transfer on two people with different skeletons?

Open sunbin1357 opened this issue 4 years ago • 7 comments

For Motion Style Transfer, I found that the content_src and style_src in your test demo are from the same person.
For example:

python style_transfer/test.py --content_src style_transfer/data/xia_test/sexy_01_000.bvh --style_src style_transfer/data/xia_test/depressed_18_000.bvh --output_dir style_transfer/demo_results/comp_3d_2

Can't we do style transfer on two people with different skeletons?

sunbin1357 avatar Nov 05 '20 02:11 sunbin1357

You are right. The two applications we implemented here (style transfer and motion retargeting) are currently independent. Motion style transfer requires that the skeletons of the source and target animations be similar. Combining the two applications into one system that can transfer style when the skeleton structures are different is a great direction for future work, but we do not plan to release such a version in the near future.

kfiraberman avatar Nov 05 '20 19:11 kfiraberman
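To make the skeleton-similarity requirement concrete, here is a minimal sketch (not part of this repo) that checks whether two BVH files share the same joint hierarchy before attempting style transfer. The function names are hypothetical; it simply reads ROOT/JOINT names from the HIERARCHY section.

```python
import re

def bvh_joint_names(bvh_text):
    """Return the ordered list of ROOT/JOINT names in a BVH hierarchy."""
    return re.findall(r"^\s*(?:ROOT|JOINT)\s+(\S+)", bvh_text, flags=re.M)

def skeletons_compatible(content_bvh, style_bvh):
    """Style transfer (as released) assumes matching joint lists."""
    return bvh_joint_names(content_bvh) == bvh_joint_names(style_bvh)
```

In practice you would read the two `.bvh` files from disk and compare; a stricter check would also compare joint offsets and channel orders.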

Is it possible to combine these two applications? What is the main difficulty in doing so? Are there any papers that combine the two? If so, I would appreciate some references. Thank you very much.

sunbin1357 avatar Nov 06 '20 02:11 sunbin1357

Is your paper "Learning character-agnostic motion for motion retargeting in 2D" a combination of style transfer and motion retargeting?

sunbin1357 avatar Nov 06 '20 11:11 sunbin1357

"Learning character-agnostic motion for motion retargeting in 2D" is retargeting in 2D (only the first application). Ideally, you would have a system that decomposes animation into 3 parts: motion, skeleton, and style. I'm not aware of works that tackle these two problems within one system. I think that the main challenge here is to collect labeled data that contains different skeletons that perform motions in diverse styles. Working with two different datasets (one for retargeting and one for style) in one framework may be possible, but it's not trivial at all.

kfiraberman avatar Nov 11 '20 04:11 kfiraberman
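The three-way decomposition described above can be sketched schematically. The following is an illustration only, with random linear maps standing in for trained encoders and made-up shapes; it is not the repo's architecture. The key idea is that the motion code varies over time while the skeleton and style codes are static, so a decoder could recombine codes taken from different clips.

```python
import numpy as np

rng = np.random.default_rng(0)
T, J = 32, 21                            # frames, joints (hypothetical)
x = rng.normal(size=(T, J * 3))          # a motion clip, flattened per frame

# Three "encoders" projecting to separate latent codes.
E_motion   = rng.normal(size=(J * 3, 16))
E_skeleton = rng.normal(size=(J * 3, 8))
E_style    = rng.normal(size=(J * 3, 8))

z_motion   = x @ E_motion                    # time-varying content code, (T, 16)
z_skeleton = (x @ E_skeleton).mean(axis=0)   # static skeleton code, (8,)
z_style    = (x @ E_style).mean(axis=0)      # static style code, (8,)

# A decoder would recombine codes from *different* clips, e.g. the motion
# of clip A, the skeleton of character B, and the style of clip C.
D = rng.normal(size=(16 + 8 + 8, J * 3))
z = np.concatenate([z_motion,
                    np.broadcast_to(z_skeleton, (T, 8)),
                    np.broadcast_to(z_style, (T, 8))], axis=1)
x_hat = z @ D                            # reconstructed clip, (T, J * 3)
```

Training such a model is exactly where the labeled-data challenge above bites: you need the same motion performed across different skeletons and different styles to supervise the disentanglement.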

I'm new to the field of motion retargeting and style transfer.

For your first application, motion retargeting, I think of the task as retargeting from 3D to 3D, i.e., intra-/inter-structure motion retargeting. For "Learning character-agnostic motion for motion retargeting in 2D", I think of the task as retargeting from 2D to 2D. For Motion Style Transfer, can I think of the task as retargeting from 2D to 3D, i.e., intra-structure motion retargeting?

Furthermore, I want to study motion retargeting from 2D to 3D, i.e., inter-structure motion retargeting. In other words, I want to implement motion imitation in which a 3D skeleton performs similar motion by referring to a 2D video. Is there any related work available? Thank you very much!

sunbin1357 avatar Nov 13 '20 09:11 sunbin1357

I'm not aware of works that can directly retarget 2D motion from video to a given character. Section 5.2 of this paper tries to suggest such an application.

Generally speaking, with existing methods you could reconstruct 2D to 3D (paper), then retarget 3D to 3D.

kfiraberman avatar Nov 20 '20 04:11 kfiraberman
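The two-stage pipeline suggested above (lift 2D to 3D, then retarget 3D to 3D) can be sketched as follows. Both functions are hypothetical placeholders: a real system would use a learned 2D-to-3D lifter and a retargeting network rather than the trivial operations shown here.

```python
import numpy as np

def lift_2d_to_3d(pose_2d):
    """Placeholder lifter: append a zero depth channel to (J, 2) keypoints.
    A real lifter would regress depth from the 2D pose."""
    depth = np.zeros((pose_2d.shape[0], 1))
    return np.concatenate([pose_2d, depth], axis=1)   # (J, 3)

def retarget_3d_to_3d(pose_3d, bone_scale):
    """Placeholder retargeting: uniformly rescale limbs about the root.
    A real retargeter adapts per-bone lengths to the target skeleton."""
    root = pose_3d[0]
    return root + (pose_3d - root) * bone_scale

pose_2d = np.random.default_rng(1).normal(size=(21, 2))  # e.g. from a video
pose_3d = lift_2d_to_3d(pose_2d)
target_pose = retarget_3d_to_3d(pose_3d, bone_scale=1.2)
```

The point of the sketch is the interface: the two stages only need to agree on a 3D pose representation, so existing 2D-to-3D and 3D-to-3D methods can be chained without joint training.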

> I'm not aware of works that can directly retarget 2D motion from video to a given character. Section 5.2 of this paper tries to suggest such an application.
>
> Generally speaking, with existing methods you could reconstruct 2D to 3D (paper), then retarget 3D to 3D.

The simplest method to retarget 3D to 3D may be inverse kinematics (IK). What are the pros and cons of IK versus your proposed motion retargeting?

sunbin1357 avatar Nov 22 '20 14:11 sunbin1357
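To make the IK comparison concrete, here is a toy CCD (cyclic coordinate descent) IK solver for a planar arm; it is an editorial illustration, not the repo's method. IK matches end-effector positions frame by frame, but unlike a learned retargeting model it has no notion of motion naturalness, style, or temporal coherence, which is a common argument for data-driven retargeting.

```python
import math

def ccd_ik(lengths, angles, target, iters=50):
    """Solve joint angles so the chain's end effector reaches `target` (2D)."""
    angles = list(angles)
    for _ in range(iters):
        for i in reversed(range(len(angles))):
            # Forward kinematics: joint positions from cumulative angles.
            pts, a, x, y = [(0.0, 0.0)], 0.0, 0.0, 0.0
            for L, th in zip(lengths, angles):
                a += th
                x += L * math.cos(a)
                y += L * math.sin(a)
                pts.append((x, y))
            jx, jy = pts[i]
            ex, ey = pts[-1]
            # Rotate joint i so the end effector swings toward the target.
            cur = math.atan2(ey - jy, ex - jx)
            des = math.atan2(target[1] - jy, target[0] - jx)
            angles[i] += des - cur
    return angles

# Example: a 2-link arm reaching for a point within its workspace.
solved = ccd_ik([1.0, 1.0], [0.3, 0.3], (1.5, 0.5))
```

Applied per frame to every end effector, this kind of solver can "retarget" a motion geometrically, but it produces one of many possible poses per frame and can jitter across frames, whereas a learned retargeter can encode skeleton-aware priors over whole motions.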