
Getting the AttributeError: 'StableDiffusionGeneratorPipeline' object has no attribute 'decode_first_stage'

Lielhercogs opened this issue 2 years ago • 6 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues

OS

macOS

GPU

AMD

VRAM

32 GB

What happened?

After installing the latest InvokeAI (via the automatic installer script) and attempting to generate a first image, I get this error in the Terminal: AttributeError: 'StableDiffusionGeneratorPipeline' object has no attribute 'decode_first_stage'

Here is the full log:

Image Generation Parameters:

{'prompt': 'Flowers on the moon', 'iterations': 1, 'steps': 50, 'cfg_scale': 7.5, 'threshold': 0, 'perlin': 0, 'height': 512, 'width': 512, 'sampler_name': 'k_lms', 'seed': 1033551626, 'progress_images': True, 'progress_latents': False, 'save_intermediates': 5, 'generation_mode': 'txt2img', 'init_mask': '...', 'seamless': False, 'hires_fix': False, 'variation_amount': 0}

ESRGAN Parameters: False
Facetool Parameters: False
Generating:   0%| | 0/1 [00:00<?, ?it/s]
/Users/einars/invokeai/.venv/lib/python3.10/site-packages/diffusers/schedulers/scheduling_lms_discrete.py:268: UserWarning: The operator 'aten::nonzero' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:11.)
  step_indices = [(schedule_timesteps == t).nonzero().item() for t in timesteps]
  0%| | 0/50 [00:01<?, ?it/s]
Generating:   0%| | 0/1 [00:01<?, ?it/s]
'StableDiffusionGeneratorPipeline' object has no attribute 'decode_first_stage'

Traceback (most recent call last):
  File "/Users/einars/invokeai/.venv/lib/python3.10/site-packages/invokeai/backend/invoke_ai_web_server.py", line 1204, in generate_images
    self.generate.prompt2image(
  File "/Users/einars/invokeai/.venv/lib/python3.10/site-packages/ldm/generate.py", line 516, in prompt2image
    results = generator.generate(
  File "/Users/einars/invokeai/.venv/lib/python3.10/site-packages/ldm/invoke/generator/base.py", line 112, in generate
    image = make_image(x_T)
  File "/Users/einars/invokeai/.venv/lib/python3.10/site-packages/ldm/invoke/generator/txt2img.py", line 40, in make_image
    pipeline_output = pipeline.image_from_embeddings(
  File "/Users/einars/invokeai/.venv/lib/python3.10/site-packages/ldm/invoke/generator/diffusers_pipeline.py", line 340, in image_from_embeddings
    result_latents, result_attention_map_saver = self.latents_from_embeddings(
  File "/Users/einars/invokeai/.venv/lib/python3.10/site-packages/ldm/invoke/generator/diffusers_pipeline.py", line 366, in latents_from_embeddings
    result: PipelineIntermediateState = infer_latents_from_embeddings(
  File "/Users/einars/invokeai/.venv/lib/python3.10/site-packages/ldm/invoke/generator/diffusers_pipeline.py", line 183, in __call__
    callback(result)
  File "/Users/einars/invokeai/.venv/lib/python3.10/site-packages/invokeai/backend/invoke_ai_web_server.py", line 1200, in diffusers_step_callback_adapter
    return image_progress(progress_state.latents, progress_state.step)
  File "/Users/einars/invokeai/.venv/lib/python3.10/site-packages/invokeai/backend/invoke_ai_web_server.py", line 954, in image_progress
    image = self.generate.sample_to_image(sample)
  File "/Users/einars/invokeai/.venv/lib/python3.10/site-packages/ldm/generate.py", line 1017, in sample_to_image
    return self._make_base().sample_to_image(samples)
  File "/Users/einars/invokeai/.venv/lib/python3.10/site-packages/ldm/invoke/generator/base.py", line 137, in sample_to_image
    x_samples = self.model.decode_first_stage(samples)
AttributeError: 'StableDiffusionGeneratorPipeline' object has no attribute 'decode_first_stage'

Model change requested: stable-diffusion-2.1-768
Current VRAM usage: 0.00G
Offloading stable-diffusion-1.5 to CPU
Loading diffusers model from stabilityai/stable-diffusion-2-1
  | Using more accurate float32 precision
  | Calculating sha256 hash of model files
  | sha256 = dbc9b6cb75d5b5c463c242c5d14247ee6b6f4036284a8ae10f73b8ce2bcfe05a (27 files hashed in 6.08s)
  | Default image dimensions = 768 x 768
Model loaded in 7.26s
Textual inversions available:
Setting Sampler to k_lms (LMSDiscreteScheduler)

Image Generation Parameters:

{'prompt': 'Flowers on the moon', 'iterations': 1, 'steps': 50, 'cfg_scale': 7.5, 'threshold': 0, 'perlin': 0, 'height': 512, 'width': 512, 'sampler_name': 'k_lms', 'seed': 2939139260, 'progress_images': True, 'progress_latents': False, 'save_intermediates': 5, 'generation_mode': 'txt2img', 'init_mask': '...', 'seamless': False, 'hires_fix': False, 'variation_amount': 0}

ESRGAN Parameters: False
Facetool Parameters: False
  0%| | 0/50 [00:00<?, ?it/s]
Generating:   0%| | 0/1 [00:00<?, ?it/s]
'StableDiffusionGeneratorPipeline' object has no attribute 'decode_first_stage'

Traceback (most recent call last):
  (identical to the traceback above)
AttributeError: 'StableDiffusionGeneratorPipeline' object has no attribute 'decode_first_stage'

Screenshots

No response

Additional context

No response

Contact Details

[email protected]

Lielhercogs · Feb 13 '23 20:02

I also just got the same issue.

OS

Windows 10 64-bit

GPU

Nvidia RTX 3060

VRAM

12GB

PYTHON

Python 3.10.4

My case is a bit different: I was able to generate a lot of images, but it suddenly stopped working. Switching to different models doesn't seem to help.

>> Image Generation Parameters:

{'prompt': 'mountains near a lake', 'iterations': 1, 'steps': 30, 'cfg_scale': 7.5, 'threshold': 0, 'perlin': 0, 'height': 768, 'width': 512, 'sampler_name': 'k_lms', 'seed': 3515420761, 'progress_images': True, 'progress_latents': False, 'save_intermediates': 5, 'generation_mode': 'txt2img', 'init_mask': '...', 'seamless': False, 'hires_fix': False, 'variation_amount': 0}

>> ESRGAN Parameters: False
>> Facetool Parameters: False
  0%|                                                                                           | 0/30 [00:00<?, ?it/s]
Generating:   0%|                                                                                | 0/1 [00:00<?, ?it/s]
'StableDiffusionGeneratorPipeline' object has no attribute 'decode_first_stage'


Traceback (most recent call last):
  File "C:\invokeai\.venv\lib\site-packages\invokeai\backend\invoke_ai_web_server.py", line 1204, in generate_images
    self.generate.prompt2image(
  File "C:\invokeai\.venv\lib\site-packages\ldm\generate.py", line 516, in prompt2image
    results = generator.generate(
  File "C:\invokeai\.venv\lib\site-packages\ldm\invoke\generator\base.py", line 112, in generate
    image = make_image(x_T)
  File "C:\invokeai\.venv\lib\site-packages\ldm\invoke\generator\txt2img.py", line 40, in make_image
    pipeline_output = pipeline.image_from_embeddings(
  File "C:\invokeai\.venv\lib\site-packages\ldm\invoke\generator\diffusers_pipeline.py", line 340, in image_from_embeddings
    result_latents, result_attention_map_saver = self.latents_from_embeddings(
  File "C:\invokeai\.venv\lib\site-packages\ldm\invoke\generator\diffusers_pipeline.py", line 366, in latents_from_embeddings
    result: PipelineIntermediateState = infer_latents_from_embeddings(
  File "C:\invokeai\.venv\lib\site-packages\ldm\invoke\generator\diffusers_pipeline.py", line 183, in __call__
    callback(result)
  File "C:\invokeai\.venv\lib\site-packages\invokeai\backend\invoke_ai_web_server.py", line 1200, in diffusers_step_callback_adapter
    return image_progress(progress_state.latents, progress_state.step)
  File "C:\invokeai\.venv\lib\site-packages\invokeai\backend\invoke_ai_web_server.py", line 954, in image_progress
    image = self.generate.sample_to_image(sample)
  File "C:\invokeai\.venv\lib\site-packages\ldm\generate.py", line 1017, in sample_to_image
    return self._make_base().sample_to_image(samples)
  File "C:\invokeai\.venv\lib\site-packages\ldm\invoke\generator\base.py", line 137, in sample_to_image
    x_samples = self.model.decode_first_stage(samples)
AttributeError: 'StableDiffusionGeneratorPipeline' object has no attribute 'decode_first_stage'

mazsinger · Feb 13 '23 23:02

I had this error message after I tried converting my checkpoints to diffusers models. It turned out the error went away when I turned off "Display in-progress images - Accurate"; the "None" and "Fast" settings work correctly. Have you converted any models to diffusers?
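For context on why that setting matters: an "Accurate" preview has to decode the intermediate latents into a full image, and in a diffusers-based pipeline that goes through the pipeline's VAE, whereas the failing call in the tracebacks above is the legacy CompVis method decode_first_stage(), which a diffusers pipeline does not have. Below is a minimal, hypothetical sketch of a VAE-based preview decode; the function name and structure are illustrative only, not InvokeAI's actual code.

import torch
from PIL import Image

def latents_to_preview(pipe, latents: torch.Tensor) -> Image.Image:
    # Decode Stable Diffusion latents to a PIL image via the pipeline's VAE.
    # Illustrative only; 0.18215 is the standard SD VAE scaling factor.
    with torch.no_grad():
        decoded = pipe.vae.decode(latents / 0.18215).sample
    decoded = (decoded / 2 + 0.5).clamp(0, 1)  # map [-1, 1] -> [0, 1]
    array = (decoded[0].permute(1, 2, 0).cpu().numpy() * 255).round().astype("uint8")
    return Image.fromarray(array)

The "Fast" and "None" settings presumably skip this full VAE decode, which would explain why they avoid the error.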

TigerFox57 · Feb 14 '23 00:02

I always had it on Fast, but I changed it to Accurate and it worked for a bit. If I change it back to Fast, it works again. Thanks for the help, @TigerFox57. It's still a bug that should be fixed, but at least it works. I just hope @Lielhercogs also gets it working.

Have you converted any models to diffusers? No, I only used the models from the automated script.

mazsinger · Feb 14 '23 00:02

Same issue here with Stable Diffusion 2.1 (automatic installation).

etziok · Feb 14 '23 04:02

Hello!

Yes, switching "Display in-progress images" to "Off" or "Fast" does help somewhat, but then Python regularly crashes, especially during image-to-image use. The terminal says:

ESRGAN Parameters: False
Facetool Parameters: False
using provided input image of size 2560x2816
image will be resized to fit inside a box 832x832 in size.
after adjusting image dimensions to be multiples of 64, init image is 704x832
Generating:   0%| | 0/1 [00:00<?, ?it/s]
/AppleInternal/Library/BuildRoots/c651a45f-806e-11ed-a221-7ef33c48bc85/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Types/MPSNDArray.mm:724: failed assertion `[MPSNDArray initWithDevice:descriptor:] Error: total bytes of NDArray > 2**32'
zsh: abort      /Users/einars/Invoke23/invoke.sh
einars@Einars-MacBook-Pro InvokeAI-Installer % /opt/homebrew/Cellar/python@3.10/3.10.10/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '

Another session:

Image Generation Parameters:

{'prompt': 'Abstract palette knife painting of beautiful grandmother [poorly drawn, ugly, tiling, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, extra limbs, disfigured, deformed, body out of frame, blurry, bad anatomy, blurred, watermark, grainy, signature, cut off, draft, unwanted, distorted, grotesque, chaotic, misaligned, smudged, mutilated, asymmetrical, pixelated, low-resolution, unnatural, off-balance, poorly rendered, over-exposed, grainy, dark, sketchy, distorted features, mismatched, out of proportion, scribbled, botched.]', 'iterations': 1, 'steps': 50, 'cfg_scale': 7.5, 'threshold': 0, 'perlin': 0, 'height': 832, 'width': 832, 'sampler_name': 'k_lms', 'seed': 731397798, 'progress_images': False, 'progress_latents': False, 'save_intermediates': 5, 'generation_mode': 'img2img', 'init_mask': '...', 'seamless': False, 'hires_fix': False, 'init_img': 'outputs/init-images/000007.e712c329.526317674.postprocessed.1903...', 'strength': 0.75, 'fit': True, 'variation_amount': 0}

ESRGAN Parameters: False
Facetool Parameters: False
using provided input image of size 2560x2816
This input is larger than your defaults. If you run out of memory, please use a smaller image.
image will be resized to fit inside a box 832x832 in size.
after adjusting image dimensions to be multiples of 64, init image is 704x832
Generating:   0%| | 0/1 [00:00<?, ?it/s]
/Users/einars/Invoke23/.venv/lib/python3.10/site-packages/diffusers/schedulers/scheduling_lms_discrete.py:268: UserWarning: The operator 'aten::nonzero' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:11.)
  step_indices = [(schedule_timesteps == t).nonzero().item() for t in timesteps]
/AppleInternal/Library/BuildRoots/c651a45f-806e-11ed-a221-7ef33c48bc85/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Types/MPSNDArray.mm:724: failed assertion `[MPSNDArray initWithDevice:descriptor:] Error: total bytes of NDArray > 2**32'
zsh: abort      /Users/einars/Invoke23/invoke.sh
einars@Einars-MacBook-Pro InvokeAI-Installer % /opt/homebrew/Cellar/python@3.10/3.10.10/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '

Very unstable performance.
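A side note on the resize messages in the logs above: reducing 2560x2816 to 704x832 is consistent with fitting the image inside the requested 832x832 box and then rounding each side down to a multiple of 64. The small sketch of that arithmetic below is an illustration of the calculation, not InvokeAI's actual code.

def fit_within_box(width: int, height: int, box: int = 832, multiple: int = 64) -> tuple[int, int]:
    # Scale to fit inside a box x box square, then snap each side down to a multiple of 64.
    scale = min(box / width, box / height, 1.0)
    new_w = int(round(width * scale)) // multiple * multiple
    new_h = int(round(height * scale)) // multiple * multiple
    return new_w, new_h

print(fit_within_box(2560, 2816))  # (704, 832), matching "init image is 704x832" in the log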

Lielhercogs · Feb 14 '23 07:02

If you're getting the NDArray error, try generating an image directly at that resolution (704x832 if I'm reading the above correctly). 2.3.0 fails on MPS at anything above 704x704 with SD 1.5 or 768x768 with SD 2.1. I submitted this as bug #2624.
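One plausible reading of the failed assertion (total bytes of NDArray > 2**32) is that a single intermediate tensor, most likely the dense self-attention map at the highest-resolution UNet block, exceeds 4 GiB at larger image sizes. The rough estimate below uses assumed values (fp32 attention scores, 8 heads, a classifier-free-guidance batch of 2, 8x latent downsampling); the actual tensor layout inside PyTorch's MPS backend may differ, so treat it as a back-of-the-envelope check only.

LIMIT = 2 ** 32  # the 4 GiB NDArray limit quoted in the assertion message

def attention_bytes(width: int, height: int, heads: int = 8, batch: int = 2, dtype_bytes: int = 4) -> int:
    # Size of one dense self-attention score tensor over the flattened latent grid.
    tokens = (width // 8) * (height // 8)
    return batch * heads * tokens * tokens * dtype_bytes

for w, h in [(704, 704), (704, 832), (768, 768)]:
    size = attention_bytes(w, h)
    print(f"{w}x{h}: {size / 2**30:.1f} GiB ({'over' if size > LIMIT else 'under'} the 4 GiB limit)")

Under these assumptions 704x704 lands just under the limit (about 3.6 GiB) while 704x832 and 768x768 land over it (about 5.0 and 5.1 GiB), which lines up with the resolutions reported as failing above; SD 2.1's attention layout differs, which may be why 768x768 is reported as working there.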

Adreitz · Feb 14 '23 17:02

A fix for the error, written up in Portuguese: https://atnhost.com.br/erro-decode-first-stage-no-invokeai/

airton-git · Feb 23 '23 00:02

The decode_first_stage() issue should be fixed in v2.3.1.post2. The problem with crashing on Macs when generating 768x768 images is a diffusers bug and we're waiting for that team to find a fix.

lstein · Feb 28 '23 21:02

There has been no activity in this issue for 14 days. If you are still experiencing this issue, please reply to confirm that it still occurs with the latest release.

github-actions[bot] · Mar 15 '23 06:03