InvokeAI
Invoke 2.3 installs and starts normally, but crashes as soon as I start using it
Is there an existing issue for this?
- [X] I have searched the existing issues
OS
macOS
GPU
amd
VRAM
64 GB
What happened?
Invoke 2.3 installs and starts normally, but crashes as soon as I start using it: home@Einarss-MacBook-Pro ~ % /Users/home/invokeai/MyAi.sh
Starting the InvokeAI browser-based UI..
- Initializing, be patient...
Initialization file /Users/home/invokeai/invokeai.init found. Loading...
Internet connectivity is True
InvokeAI, version 2.3.0
InvokeAI runtime directory is "/Users/home/invokeai"
GFPGAN Initialized
CodeFormer Initialized
ESRGAN Initialized
Using device_type mps
xformers not installed
Current VRAM usage: 0.00G
Loading diffusers model from runwayml/stable-diffusion-v1-5
| Using more accurate float32 precision
| Loading diffusers VAE from stabilityai/sd-vae-ft-mse
| Using more accurate float32 precision
Fetching 15 files: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 15/15 [00:00<00:00, 43842.90it/s]
| Default image dimensions = 512 x 512
Model loaded in 4.50s
Textual inversions available:
Setting Sampler to k_lms (LMSDiscreteScheduler)
- --web was specified, starting web server...
- Initializing, be patient...
Initialization file /Users/home/invokeai/invokeai.init found. Loading...
Started Invoke AI Web Server!
Default host address now 127.0.0.1 (localhost). Use --host 0.0.0.0 to bind any address.
Point your browser at http://127.0.0.1:9090
System config requested
patchmatch.patch_match: INFO - Compiling and loading c extensions from "/Users/home/invokeai/.venv/lib/python3.9/site-packages/patchmatch".
patchmatch.patch_match: ERROR - patchmatch failed to load or compile (Command 'make clean && make' returned non-zero exit status 2.).
patchmatch.patch_match: INFO - Refer to https://invoke-ai.github.io/InvokeAI/installation/060_INSTALL_PATCHMATCH/ for installation instructions.
Patchmatch not loaded (nonfatal)
Image Generation Parameters:
{'prompt': 'Oil painting of beautiful flowers, palette knife style [poorly drawn, ugly, tiling, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, extra limbs, disfigured, deformed, body out of frame, blurry, bad anatomy, blurred, watermark, grainy, signature, cut off, draft, unwanted, distorted, grotesque, chaotic, misaligned, smudged, mutilated, asymmetrical, pixelated, low-resolution, unnatural, off-balance, poorly rendered, over-exposed, grainy, dark, sketchy, distorted features, mismatched, out of proportion, scribbled, botched.\n]', 'iterations': 1, 'steps': 50, 'cfg_scale': 7.5, 'threshold': 0, 'perlin': 0, 'height': 1024, 'width': 1024, 'sampler_name': 'k_lms', 'seed': 1218858731, 'progress_images': False, 'progress_latents': True, 'save_intermediates': 5, 'generation_mode': 'unifiedCanvas', 'init_mask': 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAA8AAAAOACAYAAAD1jh...', 'fit': False, 'strength': 0.75, 'invert_mask': False, 'bounding_box': {'x': -192, 'y': -192, 'width': 960, 'height': 896}, 'init_img': 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAA8AAAAOACAYAAAD1jh...', 'inpaint_width': 960, 'inpaint_height': 896, 'seam_size': 96, 'seam_blur': 16, 'seam_strength': 0.7, 'seam_steps': 30, 'tile_size': 32, 'infill_method': 'tile', 'force_outpaint': False, 'variation_amount': 0}
ESRGAN Parameters: False
Facetool Parameters: False
using provided input image of size 960x896
This input is larger than your defaults. If you run out of memory, please use a smaller image.
using provided input image of size 960x896
Generating:   0%| | 0/1 [00:00<?, ?it/s]
/Users/home/invokeai/.venv/lib/python3.9/site-packages/diffusers/schedulers/scheduling_lms_discrete.py:268: UserWarning: The operator 'aten::nonzero' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:11.)
  step_indices = [(schedule_timesteps == t).nonzero().item() for t in timesteps]
/AppleInternal/Library/BuildRoots/c651a45f-806e-11ed-a221-7ef33c48bc85/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Types/MPSNDArray.mm:705: failed assertion `[MPSNDArray initWithDevice:descriptor:] Error: product of dimension sizes > 2**31'
/Users/home/invokeai/MyAi.sh: line 37: 17572 Abort trap: 6  .venv/bin/python .venv/bin/invoke.py --web $*
home@Einarss-MacBook-Pro ~ % /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
Screenshots
No response
Additional context
I have even done a complete macOS system reinstall with a new Python install, as per the instructions. It does not help; it crashes all the time. The previous version, 1.5, was not this fragile.
Contact Details
/AppleInternal/Library/BuildRoots/c651a45f-806e-11ed-a221-7ef33c48bc85/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Types/MPSNDArray.mm:705: failed assertion `[MPSNDArray initWithDevice:descriptor:] Error: product of dimension sizes > 2**31
This issue is probably related to the use of the diffusers model, because you are requesting a size greater than 768x768, as I reported in #2444.
As a workaround, don't use diffusers models for now; use a .ckpt or .safetensors model instead.
Can you please confirm that the example works when using a .ckpt or .safetensors model?
@psychedelicious
/Users/ivano/Code/Ai/@Stuffs/invokeai.models/.venv/lib/python3.10/site-packages/ldm/modules/embedding_manager.py:146: UserWarning: The operator 'aten::nonzero' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:11.)
This warning is not related to the setting of the PYTORCH_ENABLE_MPS_FALLBACK=1
environment variable.
I have this variable set in my environment and I still get it. Also note that it is just an annoying message; performance is not affected, and it is not a new message, this has always been the case on the Mac...
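(For anyone unsure how to set that variable: a minimal sketch, assuming the zsh/bash shell shown in the logs above; export applies it to every command started from that terminal session.)
export PYTORCH_ENABLE_MPS_FALLBACK=1
invokeai --web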
@Lielhercogs
I noticed that you also have patchmatch issues. To solve this, you have to install the opencv
package on the Mac:
brew install opencv
If brew is not installed on your machine, you must install the brew package manager first. See https://github.com/invoke-ai/InvokeAI/blob/main/docs/installation/060_INSTALL_PATCHMATCH.md
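For reference, a rough sequence based only on the steps mentioned in this thread and in the linked 060_INSTALL_PATCHMATCH guide (the guide remains the authoritative source; exact steps may differ per setup):
xcode-select --install    # Xcode command line tools, i.e. the compiler that 'make' needs
brew install opencv       # native library the patchmatch extension compiles against
python -c "from patchmatch import patch_match"    # importing the module triggers the same compile step seen in the startup log
If the last command prints the same 'make clean && make' error as the startup log, the compile is still failing.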
Yes, I had misread the error message.
Thank you all very much!
I will try everything. I need to get Invoke running. I have even performed a factory reset and clean install of my Mac.
Now I would first install OpenCV as suggested, and then attempt to run: ./.venv/bin/activate PYTORCH_ENABLE_MPS_FALLBACK=1 invokeai --web
Is that correct?
Einars
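(A note on the command above, as a sketch rather than an official answer: the activate script has to be sourced rather than run directly, and the environment variable is set before the command. The paths below are examples; substitute your own runtime directory.)
cd /Users/home/invokeai            # example path from the logs above
source ./.venv/bin/activate        # 'source', not './'
export PYTORCH_ENABLE_MPS_FALLBACK=1
invokeai --web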
By the way, how do you exit InvokeAI normally, without just brutally closing and terminating the Terminal?
The only way to quit while leaving the terminal open is to use CTRL-C, AFAIK.
You can try to use the .ckpt version of the openjourney
model instead of the diffusers one that is installed with the official installer.
You can download .ckpt and .safetensors models on CIVITAI:
https://civitai.com/models/86/openjourney
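(A downloaded checkpoint can also be registered by hand in models.yaml with a stanza along the lines below; this is only a sketch, the entry name and file path are placeholders, and the models.yaml format shown later in this thread is the reference.)
openjourney:
  description: Openjourney .ckpt downloaded from CIVITAI (placeholder entry)
  config: configs/stable-diffusion/v1-inference.yaml
  weights: /path/to/downloaded/openjourney.ckpt
  width: 512
  height: 512
  format: ckpt
  default: false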
Dear Ivan,
I installed OpenCV, but I still have patchmatch issues:
@.*** ~ % cd /Users/home/invoke23/
@.*** invoke23 % source ./.venv/bin/activate
(.venv) @.*** invoke23 % invokeai --web
- Initializing, be patient...
Initialization file /Users/home/invoke23/invokeai.init found. Loading...
Internet connectivity is True
InvokeAI, version 2.3.0
InvokeAI runtime directory is "/Users/home/invoke23"
GFPGAN Initialized
CodeFormer Initialized
ESRGAN Initialized
Using device_type mps
xformers not installed
Current VRAM usage: 0.00G
Loading diffusers model from runwayml/stable-diffusion-v1-5
| Using more accurate float32 precision
| Loading diffusers VAE from stabilityai/sd-vae-ft-mse
| Using more accurate float32 precision
Fetching 15 files: 100%|██████████████████████████████████████████████████████████████████████████████████████████| 15/15 [00:00<00:00, 187245.71it/s]
| Default image dimensions = 512 x 512
Model loaded in 4.06s
Textual inversions available:
Setting Sampler to k_lms (LMSDiscreteScheduler)
- --web was specified, starting web server...
- Initializing, be patient...
Initialization file /Users/home/invoke23/invokeai.init found. Loading...
Started Invoke AI Web Server!
Default host address now 127.0.0.1 (localhost). Use --host 0.0.0.0 to bind any address.
Point your browser at http://127.0.0.1:9090
System config requested
patchmatch.patch_match: INFO - Compiling and loading c extensions from "/Users/home/invoke23/.venv/lib/python3.10/site-packages/patchmatch".
patchmatch.patch_match: ERROR - patchmatch failed to load or compile (Command 'make clean && make' returned non-zero exit status 2.).
patchmatch.patch_match: INFO - Refer to https://invoke-ai.github.io/InvokeAI/installation/060_INSTALL_PATCHMATCH/ for installation instructions.
Patchmatch not loaded (nonfatal)
Dear All!
I have installed the recommended models. I currently have only two diffusers models available; I do not immediately know how to install .ckpt ones in Invoke 2.3.
I ran Invoke. It did render some pictures at the default size all right, but Python crashed when I attempted to slightly increase the size. I see this situation quite often:
Image Generation Parameters:
{'prompt': 'Palette knife oil painting of flowers [poorly drawn, ugly, tiling, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, extra limbs, disfigured, deformed, body out of frame, blurry, bad anatomy, blurred, watermark, grainy, signature, cut off, draft, unwanted, distorted, grotesque, chaotic, misaligned, smudged, mutilated, asymmetrical, pixelated, low-resolution, unnatural, off-balance, poorly rendered, over-exposed, grainy, dark, sketchy, distorted features, mismatched, out of proportion, scribbled, botched.]', 'iterations': 1, 'steps': 50, 'cfg_scale': 7.5, 'threshold': 0, 'perlin': 0, 'height': 832, 'width': 1024, 'sampler_name': 'k_lms', 'seed': 1291180436, 'progress_images': False, 'progress_latents': True, 'save_intermediates': 5, 'generation_mode': 'txt2img', 'init_mask': '...', 'seamless': False, 'hires_fix': False, 'variation_amount': 0}
ESRGAN Parameters: False
Facetool Parameters: False
Generating:   0%| | 0/1 [00:00<?, ?it/s]
/AppleInternal/Library/BuildRoots/c651a45f-806e-11ed-a221-7ef33c48bc85/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Types/MPSNDArray.mm:724: failed assertion `[MPSNDArray initWithDevice:descriptor:] Error: total bytes of NDArray > 2**32'
zsh: abort      invokeai --web
(.venv) @.* invoke23 % @.***/3.10.10/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
@Lielhercogs You have everything installed and working correctly. The problem you are running into is a work in progress; we do not have a solution at this time. As @i3oc9i suggested, using ckpt models may help.
The particular problem only manifests with images of certain dimensions. You can try different sizes as a workaround.
Here is the guide for installing models, including ckpt models: https://invoke-ai.github.io/InvokeAI/installation/050_INSTALLING_MODELS/
PatchMatch is used in the canvas when outpainting. Installation on macOS has been difficult for many users; just installing opencv does not always work. We need to do more investigation on this, but the good news is you do not need it to use the canvas. It will use a backup method which still produces good results.
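(If I recall the 2.3 command-line client correctly, the guide linked above also covers importing a checkpoint from the invoke> prompt with something like the following; treat the exact command name as an assumption and defer to the guide if it differs.)
invoke> !import_model /path/to/some-model.ckpt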
For patchmatch, try to do this; you are probably missing the Xcode compiler:
xcode-select --install
Thank you very much! I will be waiting for updates on your good work. In the meantime I will try everything suggested and hope some of it works.
Best regards, Einars
I already have the Xcode command line tools installed. Do I still need to run this line: xcode-select --install? Is it somehow different from the usual install?
Einars
It will not do anything if everything is already properly installed; just give it a try.
Hello!
Invoke is indeed more stable when using only manually installed .ckpt models, but not 100% stable. Python still occasionally crashes when I increase the size of the image to be generated.
I was able to successfully add some v1 models, such as v1-4-full-ema with the config file v1-inference.yaml and the v1-5-inpainting model with v1-inpainting-inference.yaml, but I was unsuccessful loading 768-v-ema, or anything else from the 768 series, with the config file v2-inference-v.yaml. What needs to be done differently when manually adding the 768 series or any other v2 model?
768-v-ema:
  description: 768-v-ema
  config: configs/stable-diffusion/v2-inference-v.yaml
  weights: sd1/768-v-ema.ckpt
  vae: None
  width: 768
  height: 768
  format: ckpt
By the way, since I have thrown out all the initial diffusers models, Invoke complains that: ** "stable-diffusion-1.5" is not a known model name; falling back to stable-diffusion-1.5. Indeed, I do not have a model with that name. Where do I specify which of my existing models should be loaded after startup?
Best regards, Einars
Could you please drag and drop your whole models.yaml file here? BTW, I don't think the full EMA models are advisable.
Here you are! What models would you suggest for generating big, painting-like artistic pictures?
# This file describes the alternative machine learning models
# available to InvokeAI script.
#
# To add a new model, follow the examples below. Each
# model requires a model config file, a weights file,
# and the width and height of the images it
# was trained on.
trinart-characters-2_0:
  description: An SD model finetuned with 19.2M anime/manga style images (ckpt version) (4.27 GB)
  repo_id: naclbit/trinart_derrida_characters_v2_stable_diffusion
  format: ckpt
  width: 512
  height: 512
  weights: models/ldm/stable-diffusion-v1/derrida_final.ckpt
  config: /Users/home/invokeai/configs/stable-diffusion/v1-inference.yaml
  vae: models/ldm/stable-diffusion-v1/autoencoder_fix_kl-f8-trinart_characters.ckpt
ft-mse-improved-autoencoder-840000:
  description: StabilityAI improved autoencoder fine-tuned for human faces. Improves legacy .ckpt models (335 MB)
  repo_id: stabilityai/sd-vae-ft-mse-original
  format: ckpt
  width: 512
  height: 512
  weights: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  config: /Users/home/invokeai/configs/stable-diffusion/VAE/default
trinart_vae:
  description: Custom autoencoder for trinart_characters for legacy .ckpt models only (335 MB)
  repo_id: naclbit/trinart_characters_19.2m_stable_diffusion_v1
  format: ckpt
  width: 512
  height: 512
  weights: models/ldm/stable-diffusion-v1/autoencoder_fix_kl-f8-trinart_characters.ckpt
  config: /Users/home/invokeai/configs/stable-diffusion/VAE/trinart
sd-v1-4-full-ema:
  description: sd-v1-4-full-ema
  config: configs/stable-diffusion/v1-inference.yaml
  weights: sd1/sd-v1-4-full-ema.ckpt
  width: 512
  height: 512
  format: ckpt
  default: false
v1-5-pruned:
  description: v1-5-pruned
  config: configs/stable-diffusion/v1-inference.yaml
  weights: sd1/v1-5-pruned.ckpt
  width: 512
  height: 512
  format: ckpt
  default: false
v1-5-pruned-emaonly:
  description: v1-5-pruned-emaonly
  config: configs/stable-diffusion/v1-inference.yaml
  weights: sd1/v1-5-pruned-emaonly.safetensors
  width: 512
  height: 512
  format: ckpt
  default: false
sd-v1-4:
  description: sd-v1-4
  config: configs/stable-diffusion/v1-inference.yaml
  weights: sd1/sd-v1-4.ckpt
  width: 512
  height: 512
  format: ckpt
  default: false
sd-v1-5-inpainting:
  description: sd-v1-5-inpainting
  config: configs/stable-diffusion/v1-inpainting-inference.yaml
  weights: sd1/sd-v1-5-inpainting.ckpt
  vae: None
  width: 512
  height: 512
  format: ckpt
768-v-ema:
  description: 768-v-ema
  config: configs/myv2-diffusion-inference.yaml
  weights: sd2/768-v-ema.ckpt
  vae: None
  width: 768
  height: 768
  format: ckpt
stable-diffusion-1.5:
  description: stable-diffusion-1.5
  config: configs/stable-diffusion/v1-inference.yaml
  weights: sd1/v1-5-pruned.ckpt
  width: 512
  height: 512
  format: ckpt
  default: false
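(Two observations on the file above, offered as a sketch rather than a fix: the 768-v-ema entry points at configs/myv2-diffusion-inference.yaml rather than the v2-inference-v.yaml mentioned earlier, and no entry is marked default: true; setting default: true on whichever model should load at startup addresses the question about what gets opened after start. For the v2 checkpoint, and assuming the v2 config file actually exists under configs/stable-diffusion/ in the runtime directory, and that 2.3 can load v2 checkpoints at all, which this thread does not confirm, the entry might look like this:)
768-v-ema:
  description: 768-v-ema
  config: configs/stable-diffusion/v2-inference-v.yaml
  weights: sd2/768-v-ema.ckpt
  vae: None
  width: 768
  height: 768
  format: ckpt
  default: true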
Sorry, but can you please upload the file itself?
I tried, but it says it does not support that kind of file... How?! I attempted to upload it with .jpeg added.
Please tell me how I can get my models.yaml file to you. This chat does not allow attaching or uploading .yaml files.
Hello! Invoke is indeed more stable when using only manually installed .ckpt models, but not 100% stable. Python still occasionally crashes when I increase the size of the image to be generated...
I noticed that you have an AMD graphics card. The size of the image is limited by the memory available on your video card, and by other factors on M1 architectures.
I have a Mac Studio Ultra (M1) and I can generate images up to 960x960 pixels, because of how PyTorch is implemented on top of Metal (the CUDA equivalent on macOS), and probably because of how InvokeAI uses PyTorch and other libraries, since with A1111 I can go a bit further.
Anyway, IMHO, it's not very useful to try to generate larger images, especially if you are interested in portraits and body gestures and/or poses, because the available models were trained on small images, and on such subjects you often end up with double faces and/or malformed anatomy... It is better to upscale the image after it is generated if you need to.
I have an Apple MacBook Pro M2 Max, obtained specifically to play with large-scale AI applications, including image, music and video generation. Currently I am just learning how to play with the different software and the various available applications. It is not easy to learn new tricks at a fast pace :) InvokeAI looks very promising.
I have Apple MacBook Pro M2 max,
You should report M2 instead of AMD as GPU information in the issue, then :)
The selection dialog on GitHub does not provide that option, but you have to tick something to proceed.. :)
The selection dialog on GitHub does not provide that option, but you have to tick something to proceed.. :)
Yes, sorry, you are right. In the case of the M1/M2 architecture you should indicate mps.
MPS is a library of Metal-based, high-performance, GPU-accelerated shaders for the M1/M2 chips.
FYI, AMD is used by Apple for Intel-based machines.
@Lielhercogs Your models.yml looks a bit odd. Please try to run invokeai-configure and "re-download" the models. The script will realize that the models already exist and won't re-download them, but afterwards you will get a fresh models.yml; maybe this will solve your issue.
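(For completeness, a minimal sketch of that suggestion, assuming the install layout shown earlier in the thread; adjust the path to your own runtime directory.)
cd /Users/home/invoke23
source ./.venv/bin/activate
invokeai-configure    # detects the existing model files, skips re-downloading them, and writes a fresh models.yaml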