magic-animate
Custom input data error
Hey, thanks for the release, it's quite amazing having access to this for free! Quick question though: I'm getting this error when using my own input image (it works fine with the example data). Any idea?
torch/nn/modules/conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [128, 3, 3, 3], expected input[1, 4, 512, 512] to have 3 channels, but got 4 channels instead
Thanks !
how did you generate DensePose video?
I used one of the examples; I just customised the input image.
I have used this image and it works. It is 512x512.
Installed via my auto installer on Windows 10, Python 3.10.11, venv.
https://github.com/magic-research/magic-animate/assets/19240467/c85ea92a-ae97-4b01-bd08-e77d7420e7ec
Converting it to jpg solved my issue
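For reference, converting to JPG works because the RuntimeError above means the input PNG has an alpha channel (4 channels, RGBA) while the model's first conv layer expects 3-channel RGB. A minimal sketch of the same fix in code, using Pillow (the in-memory demo image stands in for your real file):

```python
from PIL import Image

def to_rgb(img: Image.Image) -> Image.Image:
    """Drop the alpha channel so the model sees 3-channel RGB input."""
    if img.mode != "RGB":
        img = img.convert("RGB")
    return img

# Demo with an in-memory RGBA image (PNGs are often RGBA: R, G, B, alpha);
# with a real file you would use Image.open("input.png") instead.
rgba = Image.new("RGBA", (512, 512), (255, 0, 0, 128))
rgb = to_rgb(rgba)

# Saving as JPEG also guarantees 3 channels, since JPEG has no alpha:
# rgb.save("input.jpg", quality=95)
```

Either converting the file to JPG beforehand or applying a conversion like this in the loading code avoids the 4-vs-3 channel mismatch.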
I want to take the densepose of a different video and transfer it. How do I get the densepose from the video? @FurkanGozukara
My script does it automatically. If you want to do it manually, you need to figure out how to install detectron, figure out how to use apply_net.py (it also needs modification for accurate output and batch processing), and then process every frame and combine them back into a video.
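To make the manual route above more concrete, here is a rough sketch of the per-frame loop: list the extracted frames in numeric order, then build one apply_net.py invocation per frame. The config name, weights file, and flags are illustrative placeholders, not the exact ones the script uses:

```python
import re
from pathlib import Path

def natural_key(name: str):
    """Sort frame_2.png before frame_10.png (numeric-aware ordering)."""
    return [int(t) if t.isdigit() else t for t in re.split(r"(\d+)", name)]

def build_applynet_cmd(frame: Path, out_dir: Path) -> list[str]:
    # Illustrative invocation of detectron2's DensePose apply_net.py;
    # the real config/weights names and flags depend on your install.
    return [
        "python", "apply_net.py", "show",
        "densepose_rcnn_R_101_FPN_s1x.yaml",  # placeholder config
        "model_final.pkl",                     # placeholder weights
        str(frame),
        "dp_segm",
        "--output", str(out_dir / frame.name),
    ]

# Frames extracted from the raw video must be processed in order,
# then recombined into a video at the original frame rate.
frames = sorted(["frame_10.png", "frame_2.png", "frame_1.png"], key=natural_key)
cmds = [build_applynet_cmd(Path(f), Path("dense_frames")) for f in frames]
```

Launching apply_net.py once per frame like this is slow; batching frames inside a modified apply_net.py (as the script below does) avoids reloading the model for every image.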
compose_dense_pose
Inside compose_densepose, first run install_detectron.bat
This file will install all the libraries etc. into a new venv so you can generate a DensePose video from any given raw video
Making this work took me around 15 hours. Make sure that you have followed all the Pre Requirements
Then, to generate a DensePose video from any given raw video, first edit the make_dense_video.py file
Edit video_path to point to your video
Edit final_output_path to your desired output location
You can also edit frames_folder and densepose_output_folder (optional)
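The path edits described above would look something like this inside make_dense_video.py (the values below are example placeholders, not the script's defaults; use your own paths):

```python
# Paths to edit in make_dense_video.py (example values)
video_path = r"C:\videos\my_raw_video.mp4"         # your input raw video
final_output_path = r"C:\videos\my_densepose.mp4"  # where the DensePose video is written

# Optional: intermediate folders for extracted and processed frames
frames_folder = r"C:\videos\frames"
densepose_output_folder = r"C:\videos\dense_frames"
```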
Then, to generate the DensePose video, run the generate_dense_video.bat file
final_output_path will be the DensePose video of the given raw video, generated fully automatically with the best possible DensePose model
I have written a very special script and edited apply_net.py for batch processing, so this process is blazing fast
Before generating another DensePose video, I suggest you manually delete dense_frames and
use this space on huggingface: https://huggingface.co/spaces/fffiloni/video2densepose
Converting it to jpg solved my issue
I have the same question, and my data is already a jpg file.