[bug]: When RAM cache is too low, LoRAs are not applied correctly
Is there an existing issue for this problem?
- [X] I have searched the existing issues
Operating system
Linux
GPU vendor
Nvidia (CUDA)
GPU model
No response
GPU VRAM
24 GB
Version number
4.2.1
Browser
Firefox
Python dependencies
No response
What happened
When the RAM cache is set too low (presumably to a value too small to hold a LoRA), the LoRA may not be applied correctly.
The following examples use the same prompt and seed; only the RAM cache setting and the LoRA settings change (see the config excerpt below).
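For reference, a minimal excerpt of the invokeai.yaml settings used to switch between the two cases, assuming the standard v4 config layout (the ram key sets the RAM model cache size in GB; the rest of the file is omitted):

# invokeai.yaml (excerpt)
# RAM model cache size in GB.
ram: 7.5     # working case: LoRA applied
# ram: 0.25  # failing case: LoRA not applied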
Base test case metadata:
{
  "generation_mode": "sdxl_txt2img",
  "positive_prompt": "super cute tiger cub alienzkin",
  "negative_prompt": "",
  "width": 1024,
  "height": 1024,
  "seed": 3944440447,
  "rand_device": "cpu",
  "cfg_scale": 5.5,
  "cfg_rescale_multiplier": 0,
  "steps": 50,
  "scheduler": "dpmpp_2m_sde_k",
  "model": {
    "key": "e790edfe-1614-48fa-9802-58b83c0159b7",
    "hash": "random:6cab136d48faf77462cab64f1f810971f1a6b925c94f1c6890bf8ae748936177",
    "name": "Juggernaut-XL-v9",
    "base": "sdxl",
    "type": "main"
  },
  "loras": [
    {
      "model": {
        "key": "3c106f7a-cdbc-4445-b11c-3915ac5886ee",
        "hash": "random:bbebed7dd0694eeb822c9b7fac85d01f49738cf59f0a1c2b7116a38f8409257f",
        "name": "alienzkin-sdxl",
        "base": "sdxl",
        "type": "lora"
      },
      "weight": 0.75
    }
  ],
  "positive_style_prompt": "super cute tiger cub alienzkin",
  "negative_style_prompt": "",
  "control_layers": {"layers": [], "version": 2},
  "app_version": "4.2.1"
}
Result images (omitted here), in order:
- LoRA disabled (any ram setting)
- ram: 7.5, LoRA enabled
- ram: 0.25, LoRA enabled
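The failing case is consistent with the LoRA silently never making it into the model cache. Purely as an illustration of that suspected failure mode (this is not InvokeAI's actual cache code; every name below is made up), a size-capped cache can refuse an entry larger than its capacity without the caller ever noticing:

# Illustrative sketch only -- NOT InvokeAI's cache implementation.
from collections import OrderedDict

GB = 2**30

class SizeCappedCache:
    def __init__(self, max_bytes: int):
        self.max_bytes = max_bytes
        self._entries: OrderedDict[str, tuple[object, int]] = OrderedDict()
        self._used = 0

    def put(self, key: str, value: object, size: int) -> bool:
        # Evict least-recently-used entries until the new one fits.
        while self._used + size > self.max_bytes and self._entries:
            _, (_, evicted_size) = self._entries.popitem(last=False)
            self._used -= evicted_size
        if size > self.max_bytes:
            # The entry can never fit; the failure is easy to ignore.
            return False
        self._entries[key] = (value, size)
        self._used += size
        return True

cache = SizeCappedCache(max_bytes=int(0.25 * GB))   # ram: 0.25
ok = cache.put("alienzkin-sdxl", "<lora weights>", size=int(0.9 * GB))
print(ok)  # False -- the LoRA never enters the cache

If the patching step only runs for models that are resident in the cache, a put that fails this way would turn the LoRA into a silent no-op, which matches the observed output.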
What you expected to happen
LoRAs are applied correctly regardless of how low the RAM cache setting is.
How to reproduce the problem
No response
Additional context
No response
Discord username
No response