
[Bug]: Ultimate SD upscale - Cannot set version_counter for inference tensor

Open · nicodem09 opened this issue 1 year ago · 30 comments

Checklist

  • [ ] The issue exists after disabling all extensions
  • [ ] The issue exists on a clean installation of webui
  • [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • [ ] The issue exists in the current version of the webui
  • [ ] The issue has not been reported before recently
  • [ ] The issue has been reported before but has not been fixed yet

What happened?

When trying to use the Ultimate SD upscale script with any of these upscalers:

  • 4x-UltraSharp
  • DAT x2
  • DAT x3
  • DAT x4
  • R-ESRGAN 4x+
  • R-ESRGAN 4x+ Anime6B

it fails with the error "Cannot set version_counter for inference tensor". This started yesterday, after the new update.
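
For context, this error comes from PyTorch's inference tensors: tensors created under torch.inference_mode() carry no version counter, so any later operation that needs to set or bump one is rejected. A minimal PyTorch-only sketch of that restriction (not the webui code itself, just the underlying behavior):

```python
import torch

# Tensors created under inference_mode() are "inference tensors":
# PyTorch skips version-counter bookkeeping for them entirely.
with torch.inference_mode():
    t = torch.ones(4)

print(t.is_inference())  # True

# Mutating such a tensor outside inference mode is rejected, because
# that would require updating a version counter the tensor never had.
try:
    t.mul_(2)
except RuntimeError as e:
    print(e)  # an inference-tensor error along the same lines as the one above
```

In this issue the same restriction is presumably being hit inside the DirectML autocast wrapper (modules/dml/amp/autocast_mode.py), which is the last frame of the traceback below.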

Steps to reproduce the problem

1. Go to img2img and enable a ControlNet tile unit.
2. Select the Ultimate SD upscale script.
3. Choose any of these upscalers: 4x-UltraSharp, DAT x2, DAT x3, DAT x4, R-ESRGAN 4x+, or R-ESRGAN 4x+ Anime6B.
4. Click Generate.
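
For anyone trying to narrow this down outside the UI, here is a rough standalone sketch of the call path the traceback ends in: loading an upscaler with spandrel and calling the model descriptor on a tile tensor. The model path, tile size, and DirectML lines are assumptions for illustration; this is not the exact webui code.

```python
import torch
from spandrel import ModelLoader

# Placeholder path, adjust to wherever your upscaler weights live.
MODEL_PATH = "models/ESRGAN/4x-UltraSharp.pth"

descriptor = ModelLoader().load_from_file(MODEL_PATH)
descriptor.eval()

# The webui feeds BCHW float tiles to the descriptor (see upscaler_utils.py
# in the traceback); a random tile is enough to exercise the forward pass.
tile = torch.rand(1, 3, 192, 192)

# To run on DirectML, where the reported error occurs, something like:
#   import torch_directml
#   device = torch_directml.device()
#   descriptor.to(device); tile = tile.to(device)

with torch.no_grad():
    out = descriptor(tile)

print(out.shape)  # the tile scaled up by the model's scale factor
```

If this runs cleanly on CPU but fails on the DirectML device, that would point at the DirectML-specific wrapping rather than the upscaler models themselves.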

What should have happened?

It should generate and work properly.

What browsers do you use to access the UI?

No response

Sysinfo

sysinfo-2024-07-28-06-28.json

Console logs

Canva size: 1024x1024
Image size: 512x512
Scale factor: 2
Upscaling iteration 1 with scale factor 2
tiled upscale:   0%|                                                                             | 0/9 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(3zt7ahxm5zvhsxg)', <gradio.routes.Request object at 0x000001697FF94E20>, 0, 'high quality, ', '', [], <PIL.Image.Image image mode=RGBA size=512x512 at 0x16981A57A00>, None, None, None, None, None, None, 4, 0, 1, 1, 1, 7, 1.5, 0.4, 0.0, 512, 512, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', 'upload', None, 10, False, 1, 0.5, 4, 0, 0.5, 2, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 
'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 'DemoFusion', False, 128, 64, 4, 2, False, 10, 1, 1, 64, False, True, 3, 1, 1, True, 0.85, 0.6, 4, False, False, 512, 64, True, True, True, False, ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=True, module='tile_resample', model='control_v11f1e_sd15_tile [a371b31b]', weight=1.0, image={'image': array([[[18, 17, 22],
***         [ 9,  8, 14],
***         [ 5,  4,  9],
***         ...,
***         [ 5,  7, 11],
***         [ 6,  7, 12],
***         [ 7,  9, 13]],
***
***        [[13, 15, 19],
***         [ 3,  5,  8],
***         [ 1,  2,  5],
***         ...,
***         [ 1,  5,  8],
***         [ 0,  4,  7],
***         [ 5,  7, 12]],
***
***        [[15, 15, 20],
***         [ 2,  1,  6],
***         [ 3,  3,  7],
***         ...,
***         [ 2,  4,  8],
***         [ 1,  5,  8],
***         [ 3,  6, 11]],
***
***        ...,
***
***        [[20, 21, 27],
***         [ 6,  8, 15],
***         [ 5,  9, 15],
***         ...,
***         [17, 22, 29],
***         [16, 22, 28],
***         [19, 22, 31]],
***
***        [[18, 19, 26],
***         [ 7, 10, 16],
***         [ 3,  9, 12],
***         ...,
***         [16, 24, 28],
***         [16, 23, 28],
***         [20, 23, 31]],
***
***        [[22, 22, 29],
***         [11, 11, 19],
***         [ 8,  9, 16],
***         ...,
***         [21, 23, 31],
***         [22, 23, 32],
***         [23, 24, 31]]], dtype=uint8), 'mask': array([[[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
***
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
***
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
***
***        ...,
***
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
***
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
***
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]]], dtype=uint8)}, resize_mode=<ResizeMode.RESIZE: 'Just Resize'>, low_vram=True, processor_res=512, threshold_a=1.0, threshold_b=0.5, guidance_start=0.0, guidance_end=1.0, pixel_perfect=True, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, union_control_type=<ControlNetUnionControlType.TILE: 'Tile'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[], batch_keyframe_idx=None), ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=-1, threshold_a=-1.0, threshold_b=-1.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, union_control_type=<ControlNetUnionControlType.UNKNOWN: 'Unknown'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[], batch_keyframe_idx=None), ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=-1, threshold_a=-1.0, threshold_b=-1.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, union_control_type=<ControlNetUnionControlType.UNKNOWN: 'Unknown'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[], batch_keyframe_idx=None), '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, 'start', '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50, '<p style="margin-bottom:0.75em">Will upscale the image depending on the selected target size type</p>', 512, 0, 8, 32, 64, 0.35, 32, 3, True, 0, False, 8, 0, 2, 2048, 2048, 2) {}
    Traceback (most recent call last):
      File "D:\AI\stable-diffusion-webui-amdgpu\modules\call_queue.py", line 74, in f
        res = list(func(*args, **kwargs))
      File "D:\AI\stable-diffusion-webui-amdgpu\modules\call_queue.py", line 53, in f
        res = func(*args, **kwargs)
      File "D:\AI\stable-diffusion-webui-amdgpu\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "D:\AI\stable-diffusion-webui-amdgpu\modules\img2img.py", line 240, in img2img
        processed = modules.scripts.scripts_img2img.run(p, *args)
      File "D:\AI\stable-diffusion-webui-amdgpu\modules\scripts.py", line 780, in run
        processed = script.run(p, *script_args)
      File "D:\AI\stable-diffusion-webui-amdgpu\extensions\ultimate-upscale-for-automatic1111\scripts\ultimate-upscale.py", line 558, in run
        upscaler.upscale()
      File "D:\AI\stable-diffusion-webui-amdgpu\extensions\ultimate-upscale-for-automatic1111\scripts\ultimate-upscale.py", line 83, in upscale
        self.image = self.upscaler.scaler.upscale(self.image, value, self.upscaler.data_path)
      File "D:\AI\stable-diffusion-webui-amdgpu\modules\upscaler.py", line 68, in upscale
        img = self.do_upscale(img, selected_model)
      File "D:\AI\stable-diffusion-webui-amdgpu\modules\esrgan_model.py", line 36, in do_upscale
        return esrgan_upscale(model, img)
      File "D:\AI\stable-diffusion-webui-amdgpu\modules\esrgan_model.py", line 57, in esrgan_upscale
        return upscale_with_model(
      File "D:\AI\stable-diffusion-webui-amdgpu\modules\upscaler_utils.py", line 74, in upscale_with_model
        output = upscale_pil_patch(model, tile)
      File "D:\AI\stable-diffusion-webui-amdgpu\modules\upscaler_utils.py", line 48, in upscale_pil_patch
        return torch_bgr_to_pil_image(model(tensor))
      File "D:\AI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "D:\AI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\spandrel\__helpers\model_descriptor.py", line 472, in __call__
        output = self._call_fn(self.model, image)
      File "D:\AI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\spandrel\__helpers\model_descriptor.py", line 439, in <lambda>
        self._call_fn = call_fn or (lambda model, image: model(image))
      File "D:\AI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "D:\AI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\AI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\spandrel\architectures\ESRGAN\arch\RRDB.py", line 142, in forward
        return self.model(x)
      File "D:\AI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "D:\AI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\AI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
        input = module(input)
      File "D:\AI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "D:\AI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\AI\stable-diffusion-webui-amdgpu\extensions-builtin\Lora\networks.py", line 599, in network_Conv2d_forward
        return originals.Conv2d_forward(self, input)
      File "D:\AI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
        return self._conv_forward(input, self.weight, self.bias)
      File "D:\AI\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
        return F.conv2d(input, weight, bias, self.stride,
      File "D:\AI\stable-diffusion-webui-amdgpu\modules\dml\amp\autocast_mode.py", line 43, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: forward(op, args, kwargs))
      File "D:\AI\stable-diffusion-webui-amdgpu\modules\dml\amp\autocast_mode.py", line 15, in forward
        return op(*args, **kwargs)
    RuntimeError: Cannot set version_counter for inference tensor

Additional information

The error started after I updated stable-diffusion-webui-amdgpu with yesterday's update.
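
Not a fix, but possibly useful for whoever debugs this: the traceback ends in the webui's DirectML autocast wrapper around F.conv2d, and the message is the standard PyTorch complaint about inference tensors. In plain PyTorch the usual way out is to clone the tensor outside inference mode, which gives back a normal tensor with a version counter; a hypothetical experiment along those lines (untested here) would be to apply the same clone to the tile in modules/upscaler_utils.py before the model call.

```python
import torch

# A tensor created under inference_mode() has no version counter ...
with torch.inference_mode():
    tile = torch.rand(1, 3, 192, 192)
print(tile.is_inference())  # True

# ... but cloning it outside inference mode yields a regular tensor,
# which downstream code that touches the version counter can handle.
tile = tile.clone()
print(tile.is_inference())  # False
```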

nicodem09 · Jul 28 '24