Zero


> @ZeroCool22 is this from the catalog on the home page or from search results?

Search.

![Screenshot_4](https://github.com/lmstudio-ai/lms/assets/13344308/2f0d4238-5378-4d51-b65c-f992067504a8)

> Facing exactly the same issue. Any news on that yet? Changing the onnx version didn't help, tried all kinds of combinations. Version: v1.10.1 Python: 3.10.6 CUDA: 12.1
> > Current...

![Screenshot_4](https://github.com/user-attachments/assets/e473f309-1d04-4685-b60f-697f2a1805c8)

> I answered this on reddit. Would close but I can't. But that is true, it could cause degradation

I mean, is the GPU really at risk?

> maybe try choosing "Diffusion in Low Bits" at the top: not Automatic but 16-bit!

It's already selected, check the image again.

![Screenshot_3](https://github.com/user-attachments/assets/10f048da-3c65-48a2-8920-089c393a9ff3)

@lllyasviel Why is the patching so damn slow? If we use **Automatic (fp16 LoRA)**, isn't the patching supposed to be **almost instant**?
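For context on what "patching" does here: a LoRA adds a low-rank delta to each affected weight. Below is a minimal sketch of that idea, with illustrative names (`patch_lora_fp16`, `dequantize`, `quantize` are not Forge's actual API). It shows why fp16 patching should be near-instant (one matmul and one in-place add per tensor), while patching quantized GGUF weights plausibly costs a dequantize/requantize round trip per tensor:

```python
import torch

def patch_lora_fp16(weight: torch.Tensor, lora_up: torch.Tensor,
                    lora_down: torch.Tensor, alpha: float) -> None:
    """Apply a LoRA delta in place: W += alpha * (up @ down).

    With fp16 base weights this is a single matmul plus add per tensor,
    which is why patching a whole checkpoint should take well under a second.
    """
    delta = (lora_up.float() @ lora_down.float()) * alpha
    weight.add_(delta.to(weight.dtype))

def patch_lora_quantized(qweight, dequantize, quantize,
                         lora_up, lora_down, alpha):
    """Hypothetical quantized path: the same delta now forces a
    dequantize -> add -> requantize round trip for every tensor,
    which is far slower than the fp16 case above."""
    w = dequantize(qweight)                                   # expensive
    w += (lora_up.float() @ lora_down.float()) * alpha
    return quantize(w)                                        # expensive again
```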

> 2 weeks ago it worked for me if I reduced the "GPU Weight" in the top line to 7000 MB. GGUF and LoRA aren't very well programmed yet... BE PATIENT...

> reduced the "GPU Weight" in the top line to 7000 MB

But doing that, you are not taking advantage of all your VRAM (I don't know what GPU you have).

> A 16GB RTX, and if you read the CMD lines it tries to free/clean the VRAM... for whatever reason... and if that fails, the LoRA patching was slow...
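For anyone tuning that slider: the point of lowering "GPU Weight" is to leave free VRAM as headroom for the LoRA patching step. A rough sketch of the arithmetic (this only illustrates the budget; Forge manages the actual allocation internally, and 7000 MB is just the value quoted above, not a recommendation):

```python
import torch

def report_vram_budget(gpu_weight_mb: int = 7000, device: int = 0) -> None:
    """Show total/free VRAM and the headroom left if gpu_weight_mb
    is reserved for model weights (the "GPU Weight" slider value)."""
    free_b, total_b = torch.cuda.mem_get_info(device)
    print(f"total VRAM:  {total_b / 2**20:.0f} MB")
    print(f"free now:    {free_b / 2**20:.0f} MB")
    print(f"headroom after reserving weights: "
          f"{total_b / 2**20 - gpu_weight_mb:.0f} MB")

report_vram_budget()  # on a 16 GB card: roughly 9 GB left for patching, VAE, etc.
```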