magic-animate
How do I generate a custom motion sequence?
Hi, thanks for your interest in our work. You can either estimate a DensePose semantic map sequence from the target video using detectron2, or render the DensePose semantic map from a parametric model such as SMPL or SMPL-X. We are still working on the second pipeline and will update once it's ready.
Because the detectron2 DensePose estimator contains a detection head, the head or legs may be cropped. My suggestion is to center-crop the frames and then resize them to 512×512; 25 fps is recommended.
Hope this helps.
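The suggested preprocessing above can be sketched in a few lines. This is a minimal numpy-only illustration (in practice you would use `cv2.resize` with proper interpolation rather than the nearest-neighbor index sampling shown here; the function name is my own):

```python
import numpy as np

def center_crop_resize(frame: np.ndarray, size: int = 512) -> np.ndarray:
    """Center-crop a frame to a square, then nearest-neighbor resize to size x size."""
    h, w = frame.shape[:2]
    side = min(h, w)
    top = (h - side) // 2
    left = (w - side) // 2
    cropped = frame[top:top + side, left:left + side]
    # Nearest-neighbor resize via index sampling (use cv2.resize in practice).
    idx = (np.arange(size) * side / size).astype(int)
    return cropped[idx][:, idx]

# Example: a 720x1280 frame becomes 512x512.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
out = center_crop_resize(frame)
print(out.shape)  # (512, 512, 3)
```

Applying the same crop to every frame (and to the reference image) keeps the person centered, which works around the detection-head cropping issue mentioned above.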
Is the model 100% limited to 512×512?
Or can it process, say, 768×768 for both the image and the DensePose?
We tried inference at higher resolutions, but the ability to preserve the reference image decreased slightly. You can try it; the results should still be reasonable.
The major problem is generating the DensePose video; it is really hard.
Could you help me? I have been struggling for about 6 hours. Here is my thread:
https://github.com/facebookresearch/detectron2/issues/5170
Hi, great work on the paper.
I am trying to generate DensePose maps with detectron2 as suggested, and I noticed that the colors I get do not match those of the sample inputs in this repo.
[Side-by-side images: what I get vs. what I would like to get]
Am I missing something, like a color scheme option for detectron2? I guess feeding my image to the ControlNet will not produce optimal results, as the domain shift is quite significant.
EDIT: passing `cmap=cv2.COLORMAP_VIRIDIS` to `DensePoseResultsFineSegmentationVisualizer`'s initializer solves this.
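To see why the `cmap` choice matters: the fine-segmentation visualizer conceptually scales the 24 part labels of the I-map into the 0–255 range and runs them through a colormap lookup table, so a different colormap produces entirely different colors (and hence a domain shift for the model). Below is a numpy-only sketch of that mapping; the anchor colors are a rough approximation of viridis, not the exact OpenCV colormap, and the function names are my own:

```python
import numpy as np

# Approximate viridis anchor colors (RGB), dark purple -> yellow.
_ANCHORS = np.array([
    [68, 1, 84],     # ~viridis at 0.00
    [59, 82, 139],   # ~0.25
    [33, 145, 140],  # ~0.50
    [94, 201, 98],   # ~0.75
    [253, 231, 37],  # ~1.00
], dtype=float)

def make_lut(anchors: np.ndarray, n: int = 256) -> np.ndarray:
    """Linearly interpolate anchor colors into an n-entry lookup table."""
    xs = np.linspace(0, 1, len(anchors))
    t = np.linspace(0, 1, n)
    return np.stack(
        [np.interp(t, xs, anchors[:, c]) for c in range(3)], axis=1
    ).astype(np.uint8)

def colorize_segmentation(labels: np.ndarray, num_parts: int = 24) -> np.ndarray:
    """Map fine-segmentation indices (0 = background, 1..24 = body parts) to colors."""
    lut = make_lut(_ANCHORS)
    scaled = (labels.astype(float) * 255.0 / num_parts).astype(np.uint8)
    return lut[scaled]

labels = np.arange(25).reshape(5, 5)  # toy I-map containing every part index
rgb = colorize_segmentation(labels)
print(rgb.shape)  # (5, 5, 3)
```

In actual detectron2 usage the fix is simply passing `cmap=cv2.COLORMAP_VIRIDIS` (and, per the comment below, `alpha=1.0`) when constructing the visualizer.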
I managed to generate it like this.
Do the colors need to match?
@FurkanGozukara I think your image can be improved by setting alpha=1.0 in the visualizer (it looks transparent and the violet background seems to leak through the pose)
The Japanese engineer peisuke created a Google Colab to generate a DensePose video.
https://colab.research.google.com/drive/1KjPpZun9EtlEMcFDEo93kFPqbL4ZmEOq?usp=sharing
The result is here.
https://x.com/peisuke/status/1732066240741671090?s=46&t=aBgVHjAMy0TFw0zYAE90WQ
Damn, I spent a huge amount of time on this :D
I am making a local installer and video generator right now.
> @FurkanGozukara I think your image can be improved by setting alpha=1.0 in the visualizer (it looks transparent and the violet background seems to leak through the pose)
Where do I edit this? In the pose_maker.py file?
Finally released the full scripts, including a DensePose maker: https://github.com/magic-research/magic-animate/issues/44
I generated one for everyone, if you want to try :)
https://github.com/magic-research/magic-animate/assets/15265895/24ce8f65-5dd8-4f67-accc-e64867252293
You can extract a motion path for free here: pose.rip
Thank you for the introduction. I have uploaded the Colab code here: https://github.com/peisuke/MagicAnimateHandson
Hello, I want to know: is this an IUV map or an I map?
Hello, I would like to ask: is this image saved directly, or is the pkl file first saved using the dump method and then plotted? Thanks!
@BJQ123456 I am using the DensePose `show` command to render these images, not `dump`.
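For reference, the two modes of DensePose's `apply_net.py` differ in what they write out. A rough command sketch (the config name, checkpoint path, and file names are illustrative; check the detectron2 DensePose project docs for the exact model zoo entries):

```shell
# "show" renders a visualization (e.g. the fine segmentation, dp_segm) directly to an image.
python apply_net.py show configs/densepose_rcnn_R_50_FPN_s1x.yaml \
    model_final.pkl frame_0001.png dp_segm --output frame_0001_segm.png

# "dump" instead serializes the raw DensePose results to a pickle file for later plotting.
python apply_net.py dump configs/densepose_rcnn_R_50_FPN_s1x.yaml \
    model_final.pkl frame_0001.png --output results.pkl
```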
Hi! Is there any follow-up on rendering the DensePose semantic map from SMPL-X?
I don't know if anyone is interested in this, but I modified the original DensePose code to make it compilable and provided the compiled models here. You only need torch, torchvision, and opencv to run the compiled model.
I wrote a script and auto-installer for this: https://www.patreon.com/posts/94098751
@dajes thank you for your nice work! :D