sd-webui-text2video
[Feature Request]: Add batch input video mode to vid2vid
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What would your feature do?
With the popularity of the Zeroscope XL upscaling model on the rise, it would be great to have a way to run vid2vid on an entire folder of video files, one after the other. The current workflow involves a lot of stopping and starting, which hinders longer projects.
Proposed workflow
- Select batch input mode on the vid2vid tab.
- Select path to folder.
- Press generate.
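For illustration, a minimal sketch of what such a batch loop could look like (a sketch only: `run_vid2vid` is a hypothetical stand-in for the extension's real per-video entry point, and the extension list is an assumption):

```python
from pathlib import Path

# Hypothetical sketch of the proposed batch mode: iterate over every video
# file in the selected folder and run vid2vid on each one in turn.
VIDEO_EXTS = {".mp4", ".mov", ".avi", ".webm"}  # assumed set of supported formats

def run_vid2vid_batch(folder, run_vid2vid, prompt=""):
    for video in sorted(Path(folder).iterdir()):
        if video.suffix.lower() in VIDEO_EXTS:
            # run_vid2vid stands in for whatever the extension actually calls per video
            run_vid2vid(input_video=str(video), prompt=prompt)
```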
Additional information
No response
I support this request. It would be so, so helpful!
I haven't implemented a batch folder yet, but with #195 you can drop multiple videos into the vid2vid queue and it will process them all in one run (and infer the prompt from the filename if you leave the prompt blank). Let me know if you want me to add folder support as well.
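For reference, the filename-to-prompt fallback could be as simple as the following sketch (an illustration, not necessarily the PR's actual code):

```python
from pathlib import Path

def prompt_from_filename(video_path, user_prompt=""):
    # If the user left the prompt blank, derive one from the file name,
    # e.g. "a_cat_on_the_moon.mp4" -> "a cat on the moon".
    if user_prompt.strip():
        return user_prompt
    return Path(video_path).stem.replace("_", " ").replace("-", " ")
```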
Hi, you are so kind. You can't imagine how grateful I am; this makes me incredibly happy. It was such a monstrous drag to do a hundred files manually. So yes, YES, folder support would be even better. Many, many thanks, Mischa
I was just trying this batch functionality, but I wasn't able to drag several video files as a group into the input video window. What am I doing wrong?
The pull request hasn't been merged into the main branch yet. If you want to test it out, you'll need to replace the text2video extension folder code with my code here: https://github.com/bfasenfest/sd-webui-text2video/tree/v2v-queue
Wow, thanks. I know so many people who are waiting eagerly...
I get this error frequently; I'm a bit stuck and would be glad for your advice:
STARTING VAE ON GPU. 21 CHUNKS TO PROCESS
VAE HALVED
DECODING FRAMES
VAE FINISHED
torch.Size([41, 3, 320, 576])
output/mp4s/20230711_204943286041.mp4
text2video finished, saving frames to C:\Users\vv\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\Text To Image\stable-diffusion-webui\outputs/img2img-images\text2video\20230711204751
Got a request to stitch frames to video using FFmpeg.
Frames: C:\Users\vv\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\Text To Image\stable-diffusion-webui\outputs/img2img-images\text2video\20230711204751%06d.png
To Video: C:\Users\vv\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\Text To Image\stable-diffusion-webui\outputs/img2img-images\text2video\20230711204751\Cut to a medium shot from a high angle of the tatter creating knots and loops to form a lace pattern the workspace is lit....mp4
Stitching video...
Stitching video...
Video stitching done in 0.10 seconds!
t2v complete, result saved at C:\Users\vv\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\Text To Image\stable-diffusion-webui\outputs/img2img-images\text2video\20230711204751
Traceback (most recent call last):
  File "C:\Users\vv\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\Text To Image\stable-diffusion-webui/extensions/sd-webui-text2video/scripts\t2v_helpers\render.py", line 30, in run
    vids_pack = process_modelscope(args_dict)
  File "C:\Users\vv\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\Text To Image\stable-diffusion-webui/extensions/sd-webui-text2video/scripts\modelscope\process_modelscope.py", line 274, in process_modelscope
    mp4 = open(outdir_current + os.path.sep + prompt_name + f".mp4", 'rb').read()
FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\vv\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\Text To Image\stable-diffusion-webui\outputs/img2img-images\text2video\20230711204751\Cut to a medium shot from a high angle of the tatter creating knots and loops to form a lace pattern the workspace is lit....mp4'
Exception occurred: [Errno 2] No such file or directory: 'C:\Users\vv\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\Text To Image\stable-diffusion-webui\outputs/img2img-images\text2video\20230711204751\Cut to a medium shot from a high angle of the tatter creating knots and loops to form a lace pattern the workspace is lit....mp4'
Sorry, that was my mistake. I used videos that I had interpolated with Topaz from 8 frames per second to 24, so they had 72 frames. I will interpolate after the upscaling instead, and then this should be OK. When I use the videos that Zeroscope V2 outputs, everything works fine; your lifesaving program just eats up my 140 clips without any problem. GREAT!
Good to hear that everything is working.
I just had to use somewhat shorter and simpler prompts without any special characters, and now everything works.
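For anyone who hits the same FileNotFoundError: the traceback above shows the stitched clip being reopened under a file name built from the prompt, so very long prompts or special characters can produce a Windows path that FFmpeg cannot write or that cannot be reopened afterwards. A defensive sketch of shortening and sanitizing the prompt before using it as a file name (a hypothetical helper, not the extension's actual code):

```python
import re

def safe_prompt_name(prompt, max_len=60):
    # Strip characters that are invalid in Windows file names and cap the
    # length, so the .mp4 named after the prompt can be created and reopened.
    name = re.sub(r'[<>:"/\\|?*]', "", prompt).strip()
    return name[:max_len].rstrip() or "untitled"
```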
Would it be possible for you to output every rendered video file to a single folder close to the root? At the moment I have to collect them from deep down in the filesystem, in odd places where they are not easy to find, and chase each one out of its own folder. This would be so helpful! Many thanks, Mischa
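Until an option like that exists, a small helper script could sweep the finished clips out of the nested output tree into one flat folder. A sketch only, assuming the output root visible in the logs above; both paths are examples and would need adjusting:

```python
import shutil
from pathlib import Path

# Sketch: copy every rendered .mp4 from the nested text2video output tree
# into a single, easy-to-find folder.
OUTPUT_ROOT = Path(r"outputs\img2img-images\text2video")  # as seen in the logs above
COLLECTED = Path(r"C:\t2v_results")                       # hypothetical target folder

COLLECTED.mkdir(parents=True, exist_ok=True)
for mp4 in OUTPUT_ROOT.rglob("*.mp4"):
    # Prefix with the run-folder name so clips from different runs don't collide.
    target = COLLECTED / f"{mp4.parent.name}_{mp4.name}"
    if not target.exists():
        shutil.copy2(mp4, target)
```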