Unable to use ControlNet extensions when TensorRT is in use
With Settings --> Show all pages --> SD Unet set to "Automatic", everything runs fine when generating images, and I get a nice boost to the iterations per second. However, when I try to use ControlNet extensions, for example OpenPose, I get the error
"RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)"
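As I understand it, this error means one tensor (the input) is still on the CPU while the model weights are on the GPU. A minimal hypothetical sketch (not the actual webui/ControlNet code) showing the same failure mode and the usual fix:

```python
import torch

# A linear layer stands in for the model; its weights live on one device.
layer = torch.nn.Linear(4, 2)   # weights on CPU by default
x = torch.randn(1, 4)           # input, also on CPU here

# If the layer had been moved to the GPU (layer.cuda()) while x stayed
# on the CPU, layer(x) would raise exactly this kind of error:
#   RuntimeError: Expected all tensors to be on the same device,
#   but found at least two devices, cpu and cuda:0! ... (addmm)
# The usual fix is moving the input to the weights' device first:
device = next(layer.parameters()).device
y = layer(x.to(device))         # both operands now on the same device
print(tuple(y.shape))
```

In this case the mismatch presumably happens inside the TensorRT/ControlNet interaction rather than anywhere a user can patch it, which is why the toggle workaround below is needed.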
Can this error be fixed by converting the .pth files found in "D:\sd.webuiTensor\webui\extensions\sd-webui-controlnet\models", i.e. .pth to .ONNX to .TRT?
The workaround is to set SD Unet to "None" each time I want to use the ControlNet extensions, then set SD Unet back to "Automatic" when I'm not using them.
Thank you so much developers and community for your hard work on these projects, I really appreciate it.
Full error below:
*** Error completing request
*** Arguments: ('task(aoav9synepbw0lt)', 'a man filming the moon landing,', '', [], 30, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 0, '', '', [], <gradio.routes.Request object at 0x000002B01B1B1C00>, 0, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002B01BC7F7F0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002B01BC7F9D0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002B01B1B05B0>, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, False, False, 'Denoised', 5.0, 0.0, 0.0, 'Standard operation', 'mp4', 'h264', 2.0, 0.0, 0.0, False, 0.0, True, True, False, False, False, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, '', '', 0, None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
File "D:\sd.webuiTensor\webui\modules\call_queue.py", line 58, in f
res = list(func(*args, **kwargs))
File "D:\sd.webuiTensor\webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "D:\sd.webuiTensor\webui\modules\txt2img.py", line 59, in txt2img
processed = processing.process_images(p)
File "D:\sd.webuiTensor\webui\extensions\sd-webui-prompt-history\lib_history\image_process_hijacker.py", line 21, in process_images
res = original_function(p)
File "D:\sd.webuiTensor\webui\modules\processing.py", line 624, in process_images
res = process_images_inner(p)
File "D:\sd.webuiTensor\webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "D:\sd.webuiTensor\webui\modules\processing.py", line 743, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "D:\sd.webuiTensor\webui\extensions\sd-webui-controlnet\scripts\hook.py", line 350, in process_sample
return process.sample_before_CN_hack(*args, **kwargs)
File "D:\sd.webuiTensor\webui\modules\processing.py", line 996, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "D:\sd.webuiTensor\webui\modules\sd_samplers_kdiffusion.py", line 439, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "D:\sd.webuiTensor\webui\modules\sd_samplers_kdiffusion.py", line 278, in launch_sampling
return func()
File "D:\sd.webuiTensor\webui\modules\sd_samplers_kdiffusion.py", line 439, in
Side note: I have successfully converted the models below to TensorRT and seen a significant boost in it/s, from around 12 to 15 it/s up to around 25 to 33 it/s. I think my settings for the conversion were min & max size of 512x512, maximum batch size of 6, and maximum prompt token count of 525:
chilloutmix_NiPrunedFp32Fix.safetensors [fc2511737a] @index:0, deliberate_v2.safetensors [9aba26abdf] @index:1, dreamlike-photoreal-2.0.ckpt [fc52756a74] @index:2, dreamshaper_331BakedVae.safetensors [9e9fa0d822] @index:3, f222.ckpt [9e2c6ceff3] @index:4, HassanBlend1.4_Safe.safetensors [b08fdba169] @index:5, lyriel_v16.safetensors [ec6f68ea63] @index:6, mdjrny-v4.ckpt [5d5ad06cc2] @index:7, realisticVisionV20_v20NoVAE.safetensors [c0d1994c73] @index:8, v1-5-pruned-emaonly.safetensors [6ce0161689] @index:9, v2-1_512-ema-pruned.ckpt [88ecb78256] @index:10
As stated in the README, TensorRT does not work with ControlNet. CKPT and safetensors models can be swapped on the fly without issue; TensorRT isn't as flexible. What you gain in speed, you lose in utility. Maybe it could be done, but you would have to speak with the creator of the ControlNet models.