ControlNet
What is ControlNet-M2M in the XYZ scripts? We can upload a video there, but how does it work? Where can we find a description?
You can use it to create a series of pictures from a video based on the ControlNet settings. You need to enable ControlNet to make it work. Here's a test I made. I used to export the video to PNGs and then drag them into ControlNet one by one; now it does the work for me. https://preview.redd.it/i8vc2hmu3zka1.gif?format=mp4&v=enabled&s=9c3a54db3ff653f882262b2e6bb254315786d11f
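For reference, here is a minimal sketch of the manual step described above (exporting a video to PNG frames) using OpenCV; the file and folder names are just placeholders.

```python
# Export an mp4 to numbered PNG frames that can then be dropped into
# ControlNet (or a batch folder) one frame at a time. Paths are placeholders.
import os
import cv2

video_path = "input.mp4"
out_dir = "frames"
os.makedirs(out_dir, exist_ok=True)

cap = cv2.VideoCapture(video_path)
index = 0
while True:
    ok, frame = cap.read()  # grab the next frame, if any
    if not ok:
        break
    cv2.imwrite(os.path.join(out_dir, f"{index:05d}.png"), frame)
    index += 1
cap.release()
```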
Why did you drag them in one by one rather than using Batch, the sixth tab under img2img?
Because Batch only changes the files fed to img2img. I need to change the files fed to ControlNet, so that every frame uses a different PNG to generate an OpenPose pose.
So I put an mp4 file in ControlNet-M2M, then (for example) in ControlNet unit 1 I set Enable, Preprocessor: Depth, Model: Depth. And it should produce a bunch of PNGs with depth maps from this video for me?
Yes. It extracts one frame from the video at a time, processes the depth info, uses it to generate the picture, then moves on to the next frame, but it does not keep the depth maps.
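Roughly, the loop just described looks like the sketch below. This is a simplified illustration, not the extension's actual source; preprocess and generate are stand-ins for the chosen preprocessor and the diffusion call.

```python
# Simplified illustration of the ControlNet-M2M behaviour described above:
# pull one frame, build its control map, generate an image from it, move on.
# The control maps themselves are not kept.
import cv2

def m2m_loop(video_path, preprocess, generate):
    cap = cv2.VideoCapture(video_path)
    results = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        control_map = preprocess(frame)        # e.g. a depth or openpose map
        results.append(generate(control_map))  # the map is discarded after use
    cap.release()
    return results
```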
I did this, but it gives me an error because there is no image in ControlNet itself (no input image is given). It does not take any frames from the video. Where is my mistake?

You need to enable "Allow other script to control this extension" on the Settings page.
soulhighwing Thanks! Now it's working. Is there a way to save all the ControlNet frames (depth maps, poses) to some folder?
You can modify the Python script to keep them. I believe it's quite simple.
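A hedged sketch of what such a change could look like; the folder and function names are illustrative, not the extension's real identifiers. The idea is simply to write each per-frame detected map to disk instead of discarding it.

```python
# Illustrative only: wherever the script holds the current frame's detected
# map as a PIL image, also save it with the frame index in the filename.
import os
from PIL import Image

SAVE_DIR = "outputs/controlnet-m2m-maps"  # hypothetical folder

def save_detected_map(detected_map: Image.Image, frame_index: int) -> None:
    os.makedirs(SAVE_DIR, exist_ok=True)
    detected_map.save(os.path.join(SAVE_DIR, f"{frame_index:05d}.png"))
```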
How do I do this? In img2img with ControlNet M2M? I'm just getting inconsistent animation using txt2img.
Try my extension: mov2mov.
That setting ("Allow other script to control this extension") worked for me too, thank you!
How do you modify the script to keep them?
This can also be done in After Effects.
I wish m2m also allowed more complex layering of ControlNet, a bit like the Stop Motion one. The stop-motion one, however, takes only a limited number of frames, while m2m, in its simplicity, takes any video. What's missing is the ability to stay more coherent while at the same time smoothly blending the result.
I get "proc.images[1 failed" when I try to use this; I don't know why.
85% 11/13 [00:17<00:03, 1.54s/it]
92% 12/13 [00:18<00:01, 1.54s/it]
100% 13/13 [00:19<00:00, 1.49s/it]
proc.images[1 failed
Loading model from cache: control_openpose-fp16 [9ca67cc5]
Loading preprocessor: openpose
0% 0/13 [00:00<?, ?it/s]
8% 1/13 [00:01<00:18, 1.51s/it]
15% 2/13 [00:03<00:16, 1.51s/it]
23% 3/13 [00:04<00:15, 1.51s/it]
31% 4/13 [00:06<00:13, 1.51s/it]
38% 5/13 [00:07<00:12, 1.51s/it]
46% 6/13 [00:09<00:10, 1.51s/it]
54% 7/13 [00:10<00:09, 1.51s/it]
62% 8/13 [00:12<00:07, 1.51s/it]
69% 9/13 [00:13<00:06, 1.51s/it]
77% 10/13 [00:15<00:04, 1.51s/it]
85% 11/13 [00:16<00:03, 1.51s/it]
92% 12/13 [00:18<00:01, 1.51s/it]
100% 13/13 [00:18<00:00, 1.45s/it]
proc.images[1 failed
Loading model from cache: control_openpose-fp16 [9ca67cc5]
Loading preprocessor: openpose
0% 0/13 [00:00<?, ?it/s]
8% 1/13 [00:01<00:17, 1.49s/it]
This is very similar to a script I've been wishing for ever since ControlNet came out:
I want to be able to specify a folder of ControlNet template images, and then cycle through them all to generate variations. So, instead of a video for input, just a folder of images that can be very different from each other.
That way, I could make a character and make a whole bunch of variations of her (or him) without swapping the ControlNet images manually.
Would it be easy to modify this script to do that? If so, I wish someone could make a fork.
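For what it's worth, the requested behaviour boils down to something like the sketch below; generate is a hypothetical stand-in for a single txt2img call conditioned on one ControlNet image, not an existing webui function.

```python
# Cycle through every template image in a folder instead of through video
# frames, saving one generated variation per template. Paths are placeholders.
import glob
import os
from PIL import Image

def cycle_templates(template_dir, out_dir, generate):
    os.makedirs(out_dir, exist_ok=True)
    for path in sorted(glob.glob(os.path.join(template_dir, "*.png"))):
        control_image = Image.open(path)
        result = generate(control_image)  # hypothetical generation call
        result.save(os.path.join(out_dir, os.path.basename(path)))
```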
@kwikmn You should be able to do this with the following script: https://xanthius.itch.io/multi-frame-rendering-for-stablediffusion
Hello, I followed the steps above: ticked "Allow other script to control this extension" and enabled ControlNet, but when I press Generate it doesn't do anything (not even an error). Is there something more I'm missing? Thanks.
"ValueError: not enough values to unpack (expected 1, got 0)" is the error I get now when I use this.
Make a directory txt2img-images\controlnet-m2m; m2m will generate a GIF file there as its final output.
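If it helps, one way to create that folder (assuming the default outputs/ base directory; adjust the path if yours differs):

```python
import os
# Assumes the default webui outputs/ base folder; change the path if needed.
os.makedirs(os.path.join("outputs", "txt2img-images", "controlnet-m2m"), exist_ok=True)
```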