
resource_tracker.py:224: 1 leaked semaphore objects to clean up at shutdown

Open • michaelezra opened this issue 2 years ago • 3 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues

OS

macOS

GPU

mps

VRAM

64GB

What happened?

Generating a 512x1024 image. The issue happens specifically at these image dimensions and leads to a crash.

```
Image generation requested: {'prompt': 'elephant', 'iterations': 1, 'steps': 20, 'cfg_scale': 18, 'threshold': 0, 'perlin': 0, 'height': 1024, 'width': 512, 'sampler_name': 'ddim', 'seed': 1667739259, 'progress_images': False, 'progress_latents': True, 'save_intermediates': 5, 'generation_mode': 'txt2img', 'init_mask': '...', 'seamless': False, 'hires_fix': True, 'variation_amount': 0.4}
ESRGAN parameters: False
Facetool parameters: False
{'prompt': 'elephant', 'iterations': 1, 'steps': 20, 'cfg_scale': 18, 'threshold': 0, 'perlin': 0, 'height': 1024, 'width': 512, 'sampler_name': 'ddim', 'seed': 1667739259, 'progress_images': False, 'progress_latents': True, 'save_intermediates': 5, 'generation_mode': 'txt2img', 'init_mask': '', 'seamless': False, 'hires_fix': True, 'variation_amount': 0.4}
Setting Sampler to ddim
/Users/michaelezra/ai/stable-diffusion/ldm/modules/embedding_manager.py:166: UserWarning: The operator 'aten::nonzero' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1659484612588/work/aten/src/ATen/mps/MPSFallback.mm:11.)
  placeholder_rows, placeholder_cols = torch.where(
DDIMSampler: 100%|██████████| 20/20 [00:12<00:00, 1.59it/s]
DDIMSampler: 100%|██████████| 20/20 [00:12<00:00, 1.69it/s]
Interpolating from 512x1024 to 512x1024 using DDIM sampling
Running DDIMSampler sampling starting at step 5 of 20 (15 new sampling steps)
/AppleInternal/Library/BuildRoots/f0468ab4-4115-11ed-8edc-7ef33c48bc85/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Types/MPSNDArray.mm:724: failed assertion `[MPSNDArray initWithDevice:descriptor:] Error: total bytes of NDArray > 2**32'
zsh: abort      python scripts/invoke.py --web --model stable-diffusion-1.5
(invokeai) michaelezra@Michaels-MBP stable-diffusion % /Users/michaelezra/miniforge3/envs/invokeai/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
```
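A back-of-the-envelope calculation (my assumption about where the oversized allocation comes from, not something the traceback confirms) suggests why these dimensions in particular can trip the `total bytes of NDArray > 2**32` assertion: with the float32 precision shown in the log and a classifier-free-guidance batch of 2, the self-attention score tensor at full latent resolution in the SD 1.5 UNet reaches roughly 4 GiB at 512x1024, right at the per-buffer limit the assertion names. A minimal sketch of the arithmetic:

```python
# Hypothetical back-of-the-envelope check: size of the largest self-attention
# score buffer the SD 1.5 UNet would allocate for a given image size.
# This is an assumption about the source of the >2**32-byte allocation,
# not a confirmed diagnosis from the traceback.

def attention_score_bytes(width, height, heads=8, batch=2, dtype_bytes=4):
    """batch=2 models classifier-free guidance (cond + uncond);
    dtype_bytes=4 models the float32 precision reported in the log."""
    tokens = (width // 8) * (height // 8)   # latent is 1/8 of the image size
    return batch * heads * tokens * tokens * dtype_bytes

limit = 2 ** 32                             # limit named in the MPSNDArray assertion
for w, h in [(512, 512), (512, 1024), (1024, 1024)]:
    size = attention_score_bytes(w, h)
    print(f"{w}x{h}: {size / 2**30:.2f} GiB "
          f"({'at or over' if size >= limit else 'under'} the 4 GiB limit)")
```

On these numbers, 512x512 stays around 1 GiB, while 512x1024 lands at 4 GiB and 1024x1024 at 16 GiB, which lines up with only the larger dimensions crashing.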

Screenshots

No response

Additional context

No response

Contact Details

No response

michaelezra · Dec 03 '22 21:12

This was with the latest released version from the main branch.

michaelezra · Dec 04 '22 17:12

I can confirm I see the same issue on my M1 MBP with 32GB RAM. Tested with 1024x512 and 1024x1024 settings.

hankhan425 · Dec 13 '22 00:12
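For anyone who wants to isolate this from InvokeAI: the same MPSNDArray assertion should be reproducible by asking PyTorch for any single MPS tensor larger than 4 GiB. The snippet below is a hypothetical minimal repro, not something run as part of this thread; on affected macOS/PyTorch builds it aborts the whole interpreter with the same failed assertion rather than raising a Python exception.

```python
# Hypothetical standalone repro (assumption: the affected macOS / PyTorch
# builds enforce a 2**32-byte limit per MPSNDArray).
# WARNING: on affected builds this aborts the process with the same
# "total bytes of NDArray > 2**32" assertion instead of raising an error.
import torch

assert torch.backends.mps.is_available(), "needs an Apple Silicon / MPS build of PyTorch"

n = 2**30 + 2**20   # slightly over 2**30 float32 elements -> just over 4 GiB
x = torch.empty(n, dtype=torch.float32, device="mps")
# Only reached if the per-buffer limit does not apply on this build:
print(x.element_size() * x.nelement(), "bytes allocated")
```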

Having the same problem

sarperdag · Dec 20 '22 10:12

```
/Users/bamboozle/Desktop/Invoke.sh; exit
Last login: Wed Jan 4 21:53:39 on ttys001
(base) ➜ ~ /Users/bamboozle/Desktop/Invoke.sh; exit
Do you want to generate images using the
1. command-line
2. browser-based UI
3. open the developer console
Please enter 1, 2, or 3: 2
Starting the InvokeAI browser-based UI..
Initializing, be patient...
Initialization file /Users/bamboozle/invokeai/invokeai.init found. Loading...
InvokeAI 2.2.5
InvokeAI runtime directory is "/Users/bamboozle/invokeai"
GFPGAN Initialized
CodeFormer Initialized
ESRGAN Initialized
Using device_type mps
Current VRAM usage: 0.00G
Scanning Model: stable-diffusion-1.5
Model Scanned. OK!!
Loading stable-diffusion-1.5 from /Users/bamboozle/invokeai/models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
| Forcing garbage collection prior to loading new model
| LatentDiffusion: Running in eps-prediction mode
| DiffusionWrapper has 859.52 M params.
| Making attention of type 'vanilla' with 512 in_channels
| Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
| Making attention of type 'vanilla' with 512 in_channels
| Using more accurate float32 precision
| Loading VAE weights from: /Users/bamboozle/invokeai/models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
Model loaded in 10.29s
Current embedding manager terms: *
Setting Sampler to k_lms
--web was specified, starting web server...
Initializing, be patient...
Initialization file /Users/bamboozle/invokeai/invokeai.init found. Loading...
Started Invoke AI Web Server!
Default host address now 127.0.0.1 (localhost). Use --host 0.0.0.0 to bind any address.
Point your browser at http://127.0.0.1:9090
System config requested
patchmatch.patch_match: INFO - Compiling and loading c extensions from "/Users/bamboozle/invokeai/.venv/lib/python3.10/site-packages/patchmatch".
patchmatch.patch_match: ERROR - patchmatch failed to load or compile (Command 'make clean && make' returned non-zero exit status 2.).
patchmatch.patch_match: INFO - Refer to https://github.com/invoke-ai/InvokeAI/blob/main/docs/installation/INSTALL_PATCHMATCH.md for installation instructions.
Patchmatch not loaded (nonfatal)
System config requested
Model change requested: inpainting-1.5
Current VRAM usage: 0.00G
Offloading stable-diffusion-1.5 to CPU
Scanning Model: inpainting-1.5
Model Scanned. OK!!
Loading inpainting-1.5 from /Users/bamboozle/invokeai/models/ldm/stable-diffusion-v1/sd-v1-5-inpainting.ckpt
| Forcing garbage collection prior to loading new model
| LatentInpaintDiffusion: Running in eps-prediction mode
| DiffusionWrapper has 859.54 M params.
| Keeping EMAs of 688.
| Making attention of type 'vanilla' with 512 in_channels
| Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
| Making attention of type 'vanilla' with 512 in_channels
| Using more accurate float32 precision
| Loading VAE weights from: /Users/bamboozle/invokeai/models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
Model loaded in 36.63s
Current embedding manager terms: *
Setting Sampler to k_lms
Image generation requested: {'prompt': 'dreamlikeart, photorealistic biblical wide angle image of sun king wearing dark high quality textile robes with intricate textile patterns and golden full face mask shaped like a middle age sun illustration sitting on a wooden throne facing the camera and holding a single flower in his hand, surrounded by nature and mountains, digital art, highly detailed, digital painting, hyper realistic photography', 'iterations': 1, 'steps': 25, 'cfg_scale': 10, 'threshold': 0, 'perlin': 0, 'height': 1024, 'width': 768, 'sampler_name': 'k_euler_a', 'seed': 881599536, 'progress_images': False, 'progress_latents': True, 'save_intermediates': 5, 'generation_mode': 'txt2img', 'init_mask': '...', 'seamless': False, 'hires_fix': False, 'variation_amount': 0}
ESRGAN parameters: False
Facetool parameters: {'type': 'gfpgan', 'strength': 0.8}
{'prompt': 'dreamlikeart, photorealistic biblical wide angle image of sun king wearing dark high quality textile robes with intricate textile patterns and golden full face mask shaped like a middle age sun illustration sitting on a wooden throne facing the camera and holding a single flower in his hand, surrounded by nature and mountains, digital art, highly detailed, digital painting, hyper realistic photography', 'iterations': 1, 'steps': 25, 'cfg_scale': 10, 'threshold': 0, 'perlin': 0, 'height': 1024, 'width': 768, 'sampler_name': 'k_euler_a', 'seed': 881599536, 'progress_images': False, 'progress_latents': True, 'save_intermediates': 5, 'generation_mode': 'txt2img', 'init_mask': '', 'seamless': False, 'hires_fix': False, 'variation_amount': 0}
Setting Sampler to k_euler_a
/Users/bamboozle/invokeai/.venv/lib/python3.10/site-packages/ldm/modules/embedding_manager.py:166: UserWarning: The operator 'aten::nonzero' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:11.)
  placeholder_rows, placeholder_cols = torch.where(
Ksampler using karras noise schedule (steps < 30)
Generating:   0%|          | 0/1 [00:00<?, ?it/s]
>> Sampling with k_euler_ancestral starting at step 0 of 25 (25 new sampling steps)
>> Cancel processing requested
| 2/25 [00:41<07:59, 20.86s/it]
 12%|█████▎    | 3/25 [01:23<10:13, 27.91s/it]
Generating:   0%|          | 0/1 [01:40<?, ?it/s]
Cancel processing requested  (repeated 11 times)
```

Gitterman69 · Jan 04 '23 21:01