stable-diffusion-webui-forge
[Bug]: `TypeError` when upscaling
Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
What happened?
An error message is displayed when the Hires. fix (high-resolution) pass runs.
Steps to reproduce the problem
- Generate an image with Hires. fix enabled.
- Wait for the first pass to finish.
- Confirm that the Hires. fix pass is in progress.
- Observe the error message.
What should have happened?
The first-pass (pre-upscale) image should be saved, and then the Hires. fix pass should run.
What browsers do you use to access the UI ?
Microsoft Edge
Sysinfo
Console logs
*** Error completing request
*** Arguments: ('task(muozxcbc2hle9j7)', <gradio.routes.Request object at 0x00000240D0809030>, '__positive__,\nBREAK\n\n__001/hair_style_women__ hair, __001/color__ color hair, __001/color__ color eyes, __001/expression__, __001/makeup__, __001/makeup_eyes__, __001/breast_size__, __001/body_type__ body, __001/gaze__, __001/frame__,\nBREAK\n\n{hanbok|__001/fashion_all__|__001/clothes-swimsuit__|__001/clothes-brezier__|__001/clothes-panty__|__001/female_futuristic_clothing__|__001/clothes-lingerie__|__001/female_top__, __001/female_bottom__|__001/female_see_through_clothes__|__001/female_undies__|__001/clothes-preppy_look__|__001/clothes-women_costume__|__001/clothes-women_suit__|__001/clothes-dress__|__001/shorts__|__001/skirt__|__001/wedding_dress__|__001/fashion_all__|__001/fashion_spring__|__001/fashion_summer__|__001/fashion_fall__|__001/fashion_winter__}, {__001/clothes-stockings__|__001/clothes-shoes__|bare foot},\nBREAK\n\n{__001/19_places__|__001/background__|__001/best_cities__|__001/landmark__|__001/landscapes__|__001/landscape_composition__|__001/flower_garden__|__001/place__|__001/ocean__|__001/seaside_scenery__|__001/place_indoor__|__001/place_outdoor__|__001/travel_list_100__|__001/spring__|__001/summer__|__001/autumn__|__001/winter__|__001/global_destinations_500__|__001/travel_list_100__|__001/world_walks__|__001/world_small_towns__|__001/world_hikes__|__001/wonders_list__|__001/weirdest_places__}, __001/angle__,\nBREAK\n\n{__001/female-poses__|__001/pose__|__001/pose_extra__}, {daytime|evening|night|dawn|sunset|sunrise}, __001/weather__, ', '__negative__, ac_neg1,', [], 20, 'DPM++ SDE Karras', 1, 1, 7, 768, 512, True, 0.37, 2, 'R-ESRGAN 4x+', 20, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, 0.03, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='depth_midas', model='diffusers_xl_depth_full [2f51180b]', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='depth_midas', model='diffusers_xl_depth_full [2f51180b]', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='depth_midas', model='diffusers_xl_depth_full [2f51180b]', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 0.99, 'Half Cosine Up', 0, 'Power Up', 3, 13.5, 'enable', 'MEAN', 'AD', 0.97, True, False, {'ad_model': 'deepfashion2_yolov8s-seg.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 
'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 0.5, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'hand_yolov8s.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 0.7, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', <scripts.animatediff_ui.AnimateDiffProcess object at 0x00000240D080A170>, False, 'Use same checkpoint', 'Use same vae', 1, 0, 'None', 'None', False, 0.15, 3, 0.4, 4, 'bicubic', 0.5, 2, True, False, True, False, False, False, 'Use same checkpoint', 'Use same vae', 'txt2img-1pass', 'None', '', '', 'Use same sampler', 'BMAB fast', 20, 7, 0.75, 0.5, 0, 1, False, False, 'Select Model', '', '', 'Use same 
sampler', 20, 7, 0.75, 4, 0.35, False, 50, 200, 0.5, False, True, 'stretching', 'bottom', 'None', 0.85, 0.75, False, 'Use same checkpoint', True, '', '', 'Use same sampler', 'BMAB fast', 20, 7, 0.75, 1, 0, 0, 0.95, 1, 1.3, 1, 0, 0, 0, None, False, 1, False, '', True, False, False, True, True, 4, 2, 0.1, 1, 1, 0, 0.4, 7, True, False, True, 'Score', 1, '', '', '', '', '', '', '', '', '', '', False, 512, 512, 7, 20, 4, 'Use same sampler', 'Only masked', 32, 'Ultralytics(face_yolov8m.pt)', 0.4, 4, 0.3, False, 0.26, True, True, False, 'subframe', '', '', 0.4, 7, True, 4, 0.3, 0.1, 'Whole picture', 32, '', False, False, False, 0.4, 0, 1, False, 'Inpaint', 0.85, 0.4, 10, False, True, 'None', 1.5, 'None', 'nomal', 'None', False, False, 'positive', 'comma', 0, False, False, 'start', '', 0, '', [], 0, '', [], 0, '', [], False, False, False, True, False, False, False, 0, False) {}
Traceback (most recent call last):
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
TypeError: 'NoneType' object is not iterable
---
To load target model SDXL
Begin to load 1 model
loading in lowvram mode 726.4464387893677
Moving model(s) has taken 0.46 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [02:23<00:00, 7.16s/it]
To load target model AutoencoderKL1s/it]
Begin to load 1 model
Moving model(s) has taken 0.46 seconds
Traceback (most recent call last):
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 37, in loop
task.work()
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 26, in work
self.result = self.func(*self.args, **self.kwargs)
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\txt2img.py", line 111, in txt2img_function
processed = processing.process_images(p)
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\processing.py", line 750, in process_images
res = process_images_inner(p)
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\processing.py", line 921, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\extensions\sd-webui-bmab\sd_bmab\sd_override\txt2img.py", line 50, in sample
sd_models.reload_model_weights(info=self.hr_checkpoint_info)
TypeError: reload_model_weights() got an unexpected keyword argument 'info'
reload_model_weights() got an unexpected keyword argument 'info'
*** Error completing request
*** Arguments: ('task(ku1303dy4g62xpr)', <gradio.routes.Request object at 0x00000240D0809780>, '__positive__,\nBREAK\n\n__001/hair_style_women__ hair, __001/color__ color hair, __001/color__ color eyes, __001/expression__, __001/makeup__, __001/makeup_eyes__, __001/breast_size__, __001/body_type__ body, __001/gaze__, __001/frame__,\nBREAK\n\n{hanbok|__001/fashion_all__|__001/clothes-swimsuit__|__001/clothes-brezier__|__001/clothes-panty__|__001/female_futuristic_clothing__|__001/clothes-lingerie__|__001/female_top__, __001/female_bottom__|__001/female_see_through_clothes__|__001/female_undies__|__001/clothes-preppy_look__|__001/clothes-women_costume__|__001/clothes-women_suit__|__001/clothes-dress__|__001/shorts__|__001/skirt__|__001/wedding_dress__|__001/fashion_all__|__001/fashion_spring__|__001/fashion_summer__|__001/fashion_fall__|__001/fashion_winter__}, {__001/clothes-stockings__|__001/clothes-shoes__|bare foot},\nBREAK\n\n{__001/19_places__|__001/background__|__001/best_cities__|__001/landmark__|__001/landscapes__|__001/landscape_composition__|__001/flower_garden__|__001/place__|__001/ocean__|__001/seaside_scenery__|__001/place_indoor__|__001/place_outdoor__|__001/travel_list_100__|__001/spring__|__001/summer__|__001/autumn__|__001/winter__|__001/global_destinations_500__|__001/travel_list_100__|__001/world_walks__|__001/world_small_towns__|__001/world_hikes__|__001/wonders_list__|__001/weirdest_places__}, __001/angle__,\nBREAK\n\n{__001/female-poses__|__001/pose__|__001/pose_extra__}, {daytime|evening|night|dawn|sunset|sunrise}, __001/weather__, ', '__negative__, ac_neg1,', [], 20, 'DPM++ SDE Karras', 1, 1, 7, 768, 512, True, 0.37, 2, 'R-ESRGAN 4x+', 20, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, 0.03, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='depth_midas', model='diffusers_xl_depth_full [2f51180b]', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='depth_midas', model='diffusers_xl_depth_full [2f51180b]', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='depth_midas', model='diffusers_xl_depth_full [2f51180b]', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 0.99, 'Half Cosine Up', 0, 'Power Up', 3, 13.5, 'enable', 'MEAN', 'AD', 0.97, True, False, {'ad_model': 'deepfashion2_yolov8s-seg.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 
'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 0.5, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'hand_yolov8s.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 0.7, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', <scripts.animatediff_ui.AnimateDiffProcess object at 0x00000240D080B7F0>, False, 'Use same checkpoint', 'Use same vae', 1, 0, 'None', 'None', False, 0.15, 3, 0.4, 4, 'bicubic', 0.5, 2, True, False, True, False, False, False, 'Use same checkpoint', 'Use same vae', 'txt2img-1pass', 'None', '', '', 'Use same sampler', 'BMAB fast', 20, 7, 0.75, 0.5, 0, 1, False, False, 'Select Model', '', '', 'Use same 
sampler', 20, 7, 0.75, 4, 0.35, False, 50, 200, 0.5, False, True, 'stretching', 'bottom', 'None', 0.85, 0.75, False, 'Use same checkpoint', True, '', '', 'Use same sampler', 'BMAB fast', 20, 7, 0.75, 1, 0, 0, 0.95, 1, 1.3, 1, 0, 0, 0, None, False, 1, False, '', True, False, False, True, True, 4, 2, 0.1, 1, 1, 0, 0.4, 7, True, False, True, 'Score', 1, '', '', '', '', '', '', '', '', '', '', False, 512, 512, 7, 20, 4, 'Use same sampler', 'Only masked', 32, 'Ultralytics(face_yolov8m.pt)', 0.4, 4, 0.3, False, 0.26, True, True, False, 'subframe', '', '', 0.4, 7, True, 4, 0.3, 0.1, 'Whole picture', 32, '', False, False, False, 0.4, 0, 1, False, 'Inpaint', 0.85, 0.4, 10, False, True, 'None', 1.5, 'None', 'nomal', 'None', False, False, 'positive', 'comma', 0, False, False, 'start', '', 0, '', [], 0, '', [], 0, '', [], False, False, False, True, False, False, False, 0, False) {}
Traceback (most recent call last):
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
TypeError: 'NoneType' object is not iterable
---
activating extra network lora with arguments [<modules.extra_networks.ExtraNetworkParams object at 0x00000240D0CB2DA0>]: AttributeError
Traceback (most recent call last):
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\extra_networks.py", line 135, in activate
extra_network.activate(p, extra_network_args)
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\extensions-builtin\Lora\extra_networks_lora.py", line 43, in activate
networks.load_networks(names, te_multipliers, unet_multipliers, dyn_dims)
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\extensions-builtin\Lora\networks.py", line 51, in load_networks
compiled_lora_targets.append([a.filename, b, c])
AttributeError: 'NoneType' object has no attribute 'filename'
To load target model SDXL
Begin to load 1 model
loading in lowvram mode 722.5157747268677
Moving model(s) has taken 0.46 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [01:13<00:00, 3.69s/it]
To load target model AutoencoderKL1s/it]
Begin to load 1 model
Moving model(s) has taken 0.39 seconds
Traceback (most recent call last):
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 37, in loop
task.work()
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 26, in work
self.result = self.func(*self.args, **self.kwargs)
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\txt2img.py", line 111, in txt2img_function
processed = processing.process_images(p)
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\processing.py", line 750, in process_images
res = process_images_inner(p)
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\processing.py", line 921, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\extensions\sd-webui-bmab\sd_bmab\sd_override\txt2img.py", line 50, in sample
sd_models.reload_model_weights(info=self.hr_checkpoint_info)
TypeError: reload_model_weights() got an unexpected keyword argument 'info'
reload_model_weights() got an unexpected keyword argument 'info'
*** Error completing request
*** Arguments: ('task(1hpej9ze7mc1r0u)', <gradio.routes.Request object at 0x0000023CAE602050>, '<lora:adapted_model_converted:0.7> (RAW photo:1.4, best quality:1.4, photo realistic:1.4, realistic:1.4), (cute korean girl), (1girl,solo), detailed background, pale skin, (intricate details:1.3), perfect eyes, navel, cameltoe, covered nipple:0.1 sharp_pointed_nose:1.4, (detailed skin:1.3), sharp focus, delicate,\nBREAK\n\nVintage waves hair, aqua_blue color hair, Sepia color eyes, Grateful, Makeup remover, cream eyeliner, huge breasts, elegant body, looking at another, upper body,\nBREAK\n\nDenim_button-up_shirtcorduroy_skirtwhite_ankle_bootsblack_crossbody_baglayered_necklace_set, Peep-toe booties,\nBREAK\n\nHawaii Volcanoes National Park, vanishing point,\nBREAK\n\nPosing with hands behind the back, looking serious, sunset, Typhoon,', 'EasyNegativeV2, nsfw, (worst quality, low quality, normal quality:1.3), (deformed, distorted, disfigured:1.2), (blurry:1.2), (bad anatomy, extra_anatomy:1.3, wrong anatomy), poorly drawn, ugly face, glans, fat, missing fingers, extra fingers, extra arms, extra legs, ((watermark, text, logo,symbol)), extra limb, missing limb, floating limbs, error, jpeg artifacts, cropped, bad anatomy, double navel, muscle, cleavage, bad detailed background, (stomach muscles), (nipple over clothes:1.2), (nipples sticking out of clothes:1.2), ((abs:1.2)), ((stomach muscles:1.2)), (mutated hands and fingers:1.2), disconnected limbs, mutation, mutated,', [], 20, 'DPM++ 2M SDE Karras', 1, 1, 7, 768, 512, True, 0.39, 2, '4x-UltraMix_Balanced', 21, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, 0.03, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='depth_midas', model='diffusers_xl_depth_full [2f51180b]', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='depth_midas', model='diffusers_xl_depth_full [2f51180b]', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='depth_midas', model='diffusers_xl_depth_full [2f51180b]', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 0.99, 'Half Cosine Up', 0, 'Power Up', 3, 13.5, 'enable', 'MEAN', 'AD', 0.97, True, False, {'ad_model': 'deepfashion2_yolov8s-seg.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 
'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 0.5, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'hand_yolov8s.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 0.7, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', <scripts.animatediff_ui.AnimateDiffProcess object at 0x00000240D082B4C0>, True, 'Use same checkpoint', 'Use same vae', 1, 0, 'None', 'None', False, 0.15, 3, 0.4, 4, 'bicubic', 0.5, 2, True, False, True, False, False, False, 'Use same checkpoint', 'Use same vae', 'txt2img-1pass', 'None', '', '', 'Use same sampler', 'BMAB fast', 20, 7, 0.75, 0.5, 0, 1, False, False, 'Select Model', '', '', 'Use same sampler', 20, 7, 0.75, 4, 0.35, False, 50, 200, 0.5, False, True, 'stretching', 'bottom', 'None', 0.85, 0.75, 
False, 'Use same checkpoint', True, '', '', 'Use same sampler', 'BMAB fast', 20, 7, 0.75, 1, 0, 0, 0.95, 1, 1.3, 1, 0, 0, 0, None, False, 1, False, '', True, False, False, True, True, 4, 2, 0.1, 1, 1, 0, 0.4, 7, True, False, True, 'Score', 1, '', '', '', '', '', '', '', '', '', '', False, 512, 512, 7, 20, 4, 'Use same sampler', 'Only masked', 32, 'Ultralytics(face_yolov8m.pt)', 0.4, 4, 0.3, False, 0.26, True, True, False, 'subframe', '', '', 0.4, 7, True, 4, 0.3, 0.1, 'Whole picture', 32, '', False, False, False, 0.4, 0, 1, False, 'Inpaint', 0.85, 0.4, 10, False, True, 'None', 1.5, 'None', 'nomal', 'None', False, False, 'positive', 'comma', 0, False, False, 'start', '', 0, '', [], 0, '', [], 0, '', [], False, False, False, True, False, False, False, 0, False) {}
Traceback (most recent call last):
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
TypeError: 'NoneType' object is not iterable
Additional information
No response
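For context on the log above: the final `TypeError: 'NoneType' object is not iterable` from `call_queue.py` is a secondary symptom. The generation task runs on Forge's main-thread loop; when it raises (here, `reload_model_weights() got an unexpected keyword argument 'info'`), the wrapped function returns `None`, and the outer queue wrapper then fails on `list(None)`. A minimal, self-contained sketch of that cascade (hypothetical names, not Forge's actual code):

```python
# Sketch of the error cascade seen in the console log (hypothetical names).

def run_task(func, *args, **kwargs):
    """Stand-in for the main-thread task runner: report the exception
    and return None instead of a result."""
    try:
        return func(*args, **kwargs)
    except Exception as e:
        print(f"task failed: {e}")   # corresponds to the first traceback in the log
        return None

def txt2img():
    # Stand-in for the hires-fix checkpoint switch that raises in the log.
    raise TypeError("reload_model_weights() got an unexpected keyword argument 'info'")

def wrap_call(func):
    # Stand-in for modules.call_queue.f: it assumes func returns an iterable.
    return list(func())              # list(None) -> 'NoneType' object is not iterable

try:
    wrap_call(lambda: run_task(txt2img))
except TypeError as e:
    print(f"Error completing request: {e}")   # corresponds to the second traceback
```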
I also get that error when using Hires. fix with SwinIR_4x as the upscaler. The other methods don't cause this issue, or at least not 100% of the time; still testing. Please fix SwinIR_4x, as it's my go-to upscaler at the moment.
I don't use SwinIR_4x. I mainly use R-ESRGAN 4x+.
I tried R-ESRGAN 4x+, 10 steps, 0.4 denoise, 1.5 upscale from 1024x1024 to 1536x1536, and it worked on my machine.
Can you test whether SwinIR_4x works for you?
I don't use SwinIR_4x at all.
The issue reproduces intermittently, so I'll just keep using it as is for now.
I expect it will be fixed eventually.
I get that error as well; I'm not able to generate anything. I also noticed that, for some reason, it isn't applying ANY of my SDXL LoRAs.
I don't even use LoRA, and it's the same for me.
The traceback indicates an issue with https://github.com/portu-sim/sd-webui-bmab, not Forge or the webui. Open an issue there instead. https://github.com/portu-sim/sd-webui-bmab/issues/new
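For reference, the frame this traceback points at (`sd_bmab/sd_override/txt2img.py`, line 50) calls `sd_models.reload_model_weights(info=self.hr_checkpoint_info)`, and the `TypeError` suggests Forge's `reload_model_weights` no longer exposes an `info` keyword. A hedged sketch of a compatibility guard an extension could use (hypothetical helper; the exact Forge signature and fallback behaviour are assumptions, not a verified fix):

```python
import inspect

def reload_weights_compat(reload_fn, checkpoint_info):
    """Call reload_model_weights whether or not it accepts an `info` kwarg.

    reload_fn is expected to be modules.sd_models.reload_model_weights.
    Upstream webui accepts `info` for the target checkpoint; the Forge build
    in this log apparently does not, so fall back to a bare call there
    (placeholder behaviour that skips the hires checkpoint switch).
    """
    params = inspect.signature(reload_fn).parameters
    if "info" in params:
        return reload_fn(info=checkpoint_info)   # upstream-style call
    print("reload_model_weights() has no 'info' parameter; skipping checkpoint switch")
    return reload_fn()                            # Forge-style bare call (assumption)
```

A call site would then use `reload_weights_compat(sd_models.reload_model_weights, self.hr_checkpoint_info)` instead of passing `info=` directly.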
BMAB works normally when generation succeeds. This problem has nothing to do with BMAB: I tried removing it and the error is the same, and other users report the same issue even without BMAB installed.
It seems BMAB is simply affected by the error as well.
Can you post a traceback for an instance when BMAB is disabled completely?
To load target model SDXL
Begin to load 1 model
loading in lowvram mode 1078.425479888916
Moving model(s) has taken 0.66 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:38<00:00, 1.93s/it]
To load target model AutoencoderKL███████████████████████████████████████████████████▍ | 40/41 [02:11<00:01, 1.92s/it]
Begin to load 1 model
Moving model(s) has taken 0.60 seconds
Traceback (most recent call last):
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 37, in loop
task.work()
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 26, in work
self.result = self.func(*self.args, **self.kwargs)
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\txt2img.py", line 111, in txt2img_function
processed = processing.process_images(p)
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\processing.py", line 750, in process_images
res = process_images_inner(p)
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\processing.py", line 921, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\processing.py", line 1290, in sample
sd_models.reload_model_weights(info=self.hr_checkpoint_info)
TypeError: reload_model_weights() got an unexpected keyword argument 'info'
reload_model_weights() got an unexpected keyword argument 'info'
*** Error completing request
*** Arguments: ('task(d5oe7iluiyh5scd)', <gradio.routes.Request object at 0x0000023210C26CB0>, '__positive__,\nBREAK\n\n__001/hair_style_women__ hair, __001/color__ color hair, __001/color__ color eyes, __001/expression__, __001/makeup__, __001/makeup_eyes__, __001/breast_size__, __001/body_type__ body, __001/gaze__, __001/frame__,\nBREAK\n\n{hanbok|__001/fashion_all__|__001/clothes-swimsuit__|__001/clothes-brezier__|__001/clothes-panty__|__001/female_futuristic_clothing__|__001/clothes-lingerie__|__001/female_top__, __001/female_bottom__|__001/female_see_through_clothes__|__001/female_undies__|__001/clothes-preppy_look__|__001/clothes-women_costume__|__001/clothes-women_suit__|__001/clothes-dress__|__001/shorts__|__001/skirt__|__001/wedding_dress__|__001/fashion_all__|__001/fashion_spring__|__001/fashion_summer__|__001/fashion_fall__|__001/fashion_winter__}, {__001/clothes-stockings__|__001/clothes-shoes__|bare foot},\nBREAK\n\n{__001/19_places__|__001/background__|__001/best_cities__|__001/landmark__|__001/landscapes__|__001/landscape_composition__|__001/flower_garden__|__001/place__|__001/ocean__|__001/seaside_scenery__|__001/place_indoor__|__001/place_outdoor__|__001/travel_list_100__|__001/spring__|__001/summer__|__001/autumn__|__001/winter__|__001/global_destinations_500__|__001/travel_list_100__|__001/world_walks__|__001/world_small_towns__|__001/world_hikes__|__001/wonders_list__|__001/weirdest_places__}, __001/angle__,\nBREAK\n\n{__001/female-poses__|__001/pose__|__001/pose_extra__}, {daytime|evening|night|dawn|sunset|sunrise}, __001/weather__, ', '__negative__, ac_neg1,', [], 20, 'DPM++ 2M SDE Karras', 1, 1, 7, 512, 768, True, 0.39, 2, '4x-UltraMix_Balanced', 21, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, 0.03, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='depth_midas', model='diffusers_xl_depth_full [2f51180b]', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='depth_midas', model='diffusers_xl_depth_full [2f51180b]', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='depth_midas', model='diffusers_xl_depth_full [2f51180b]', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 0.99, 'Half Cosine Up', 0, 'Power Up', 3, 13.5, 'enable', 'MEAN', 'AD', 0.97, False, 1.01, 1.02, 0.99, 0.95, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 
'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, 0.5, 2, False, True, False, {'ad_model': 'deepfashion2_yolov8s-seg.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 0.5, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'hand_yolov8s.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 0.7, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', <scripts.animatediff_ui.AnimateDiffProcess object at 0x0000023210909960>, False, False, 
'positive', 'comma', 0, False, False, 'start', '', 0, '', [], 0, '', [], 0, '', [], False, False, False, True, False, False, False, 0, False) {}
Traceback (most recent call last):
File "O:\AI\SynologyDrive\stable-diffusion-webui-forge\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
TypeError: 'NoneType' object is not iterable
Thank you, that's definitely more useful info. I'll re-open this now.
The same error occurred when trying to use AnimateDiff.
*** Error running before_process: D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\extensions\sd-webui-animatediff\scripts\animatediff.py
Traceback (most recent call last):
File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules\scripts.py", line 795, in before_process
script.before_process(p, *script_args)
File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 63, in before_process
motion_module.inject(p.sd_model, params.model)
File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\extensions\sd-webui-animatediff\scripts\animatediff_mm.py", line 112, in inject
self._set_ddim_alpha(sd_model)
File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\extensions\sd-webui-animatediff\scripts\animatediff_mm.py", line 178, in _set_ddim_alpha
self.prev_alpha_cumprod_original = sd_model.alphas_cumprod_original
File "D:\maishouai-webui\miaoshouai-sd-webui-forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1695, in __getattr__
raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'LatentDiffusion' object has no attribute 'alphas_cumprod_original'
0%| | 0/20 [00:00<?, ?it/s]
*** Error executing callback cfg_denoiser_callback for D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\extensions\sd-webui-animatediff\scripts\animatediff.py
Traceback (most recent call last):
File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules\script_callbacks.py", line 233, in cfg_denoiser_callback
c.callback(params)
File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\extensions\sd-webui-animatediff\scripts\animatediff_infv2v.py", line 90, in animatediff_on_cfg_denoiser
ad_params.text_cond = ad_params.prompt_scheduler.multi_cond(cfg_params.text_cond, prompt_closed_loop)
AttributeError: 'NoneType' object has no attribute 'multi_cond'
0%| | 0/20 [00:00<?, ?it/s]
Traceback (most recent call last):
File "D:\maishouai-webui\miaoshouai-sd-webui-forge\system\python\lib\site-packages\einops\einops.py", line 410, in reduce
return _apply_recipe(recipe, tensor, reduction_type=reduction)
File "D:\maishouai-webui\miaoshouai-sd-webui-forge\system\python\lib\site-packages\einops\einops.py", line 233, in _apply_recipe
_reconstruct_from_shape(recipe, backend.shape(tensor))
File "D:\maishouai-webui\miaoshouai-sd-webui-forge\system\python\lib\site-packages\einops\einops.py", line 198, in _reconstruct_from_shape_uncached
raise EinopsError("Shape mismatch, can't divide axis of length {} in chunks of {}".format(
einops.EinopsError: Shape mismatch, can't divide axis of length 2 in chunks of 16
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules_forge\main_thread.py", line 37, in loop
task.work()
File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules_forge\main_thread.py", line 26, in work
self.result = self.func(*self.args, **self.kwargs)
File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules\txt2img.py", line 111, in txt2img_function
processed = processing.process_images(p)
File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules\processing.py", line 752, in process_images
res = process_images_inner(p)
File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules\processing.py", line 921, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules\processing.py", line 1273, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules\sd_samplers_kdiffusion.py", line 251, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules\sd_samplers_common.py", line 263, in launch_sampling
return func()
File "D:\maishouai-webui\miaoshouai-sd-webui-forge\webui\modules\sd_samplers_kdiffusion.py", line 251, in