multidiffusion-upscaler-for-automatic1111

It reports an error when I use it

Open · wolfyangfeng opened this issue 1 year ago · 1 comment

Sorry, but when I try it in t2i or i2i (default settings), it always reports "RuntimeError: Cannot set version_counter for inference tensor". Is there any mistake I have made?

I haven't enabled ControlNet when I use MultiDiffusion and Tiled VAE.

ERROR REPORT:

```
Traceback (most recent call last):
  File "H:\stable-diffusion-webui-directml\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "H:\stable-diffusion-webui-directml\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "H:\stable-diffusion-webui-directml\modules\img2img.py", line 171, in img2img
    processed = process_images(p)
  File "H:\stable-diffusion-webui-directml\modules\processing.py", line 486, in process_images
    res = process_images_inner(p)
  File "H:\stable-diffusion-webui-directml\modules\processing.py", line 577, in process_images_inner
    p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
  File "H:\stable-diffusion-webui-directml\modules\processing.py", line 1017, in init
    self.init_latent = self.sd_model.get_first_stage_encoding(self.sd_model.encode_first_stage(image))
  File "H:\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "H:\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "H:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "H:\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 830, in encode_first_stage
    return self.first_stage_model.encode(x)
  File "H:\stable-diffusion-webui-directml\modules\lowvram.py", line 48, in first_stage_model_encode_wrap
    return first_stage_model_encode(x)
  File "H:\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py", line 83, in encode
    h = self.encoder(x)
  File "H:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "H:\stable-diffusion-webui-directml\extensions\multidiffusion-upscaler-for-automatic1111\scripts\vae_optimize.py", line 481, in __call__
    return self.vae_tile_forward(x)
  File "H:\stable-diffusion-webui-directml\extensions\multidiffusion-upscaler-for-automatic1111\scripts\vae_optimize.py", line 369, in wrapper
    ret = fn(*args, **kwargs)
  File "H:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "H:\stable-diffusion-webui-directml\extensions\multidiffusion-upscaler-for-automatic1111\scripts\vae_optimize.py", line 627, in vae_tile_forward
    tile = z[:, :, input_bbox[2]:input_bbox[3],
RuntimeError: Cannot set version_counter for inference tensor
```
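For context, the failure happens while `vae_tile_forward` slices the latent `z`, and the message reflects a general PyTorch rule: tensors created under `torch.inference_mode()` are "inference tensors", and PyTorch refuses to set their version counter (the bookkeeping used to track in-place changes) once inference mode has been exited. Why the DirectML backend hits this on a plain slice is not clear from the traceback alone. The snippet below is only a minimal sketch of that restriction and of the usual clone-based workaround; the variable names and shapes are made up and this is not the extension's actual code path.

```python
import torch

# Tensors created under torch.inference_mode() are flagged as inference tensors.
with torch.inference_mode():
    z = torch.randn(1, 4, 96, 96)   # stand-in for the latent handed to vae_tile_forward

print(z.is_inference())             # True: version-counter updates on z are forbidden

# Usual workaround pattern (hypothetical here): clone the tensor outside
# inference mode so downstream code works on an ordinary no-grad tensor.
with torch.no_grad():
    z = z.clone()

print(z.is_inference())             # False
tile = z[:, :, 0:32, 0:32]          # slicing the clone raises no version_counter error
```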

wolfyangfeng · Mar 11 '23

I also encountered this problem. My graphics card is AMD.

lyp150213 · Mar 11 '23

Do not enable "VAE to GPU". I use an AMD card, and if I enable "VAE to GPU" I also get the same error message. Disabling "VAE to GPU" works, but the generated image has a small color block. By the way, I am currently using the dev branch. It would be even better if the color block issue could be resolved; currently, the generated images all have a color block in the lower left corner. However, the speed of generating images with this project is really fast (i2i).
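Roughly speaking, the "Move VAE to GPU" option controls whether the VAE encode/decode runs on the GPU or stays on the CPU. The sketch below only illustrates that device-placement pattern; it assumes the `torch_directml` package is available (as in this fork's venv), and the tiny Conv2d and random image are stand-ins, not the real VAE or pipeline.

```python
import torch
import torch_directml  # assumed available, as in stable-diffusion-webui-directml's venv

dml = torch_directml.device()       # the AMD GPU exposed through DirectML

# Stand-ins only: a 1x1 conv in place of the real VAE encoder, random pixels as the image.
vae_encoder = torch.nn.Conv2d(3, 4, kernel_size=1)
image = torch.rand(1, 3, 512, 512)

with torch.no_grad():
    latent = vae_encoder(image)     # VAE encode stays on the CPU ("VAE to GPU" off)
    latent = latent.to(dml)         # only the resulting latent moves to the DirectML device
```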

qwerkilo · Mar 14 '23

> VAE to GPU

I am using an AMD GPU too. Can you explain how to disable it in a PyTorch project? I am trying to use DirectML in another person's project. Thank you very much.

Milor123 · May 08 '23