LIwii1

2 comments by LIwii1

> Hi [@LIwii1](https://github.com/LIwii1), I think this error is caused by running out of GPU memory. The Deepseek-r1:8b-0528-qwen3-fp16 model itself requires 16 GB. You could try another precision instead. deepseek-r1:8b-llama-distill-fp16 can run on earlier...

Deepseek-r1:8b-0528-qwen3-q4_K_M can be run, but model performance drops significantly, so we would still like to try the FP16 version. In addition, the Gemma 3 12B QAT version also cannot be run.
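
For anyone hitting the same wall, here is a minimal back-of-the-envelope sketch (my own illustration, not from the thread or from Ollama) of why the 8B FP16 weights alone land around 16 GB while a Q4_K_M build needs far less VRAM. The bytes-per-parameter figures and the `estimate_vram_gb` helper are rough assumptions, not measured values.

```python
# Rough VRAM estimate for model weights at different quantizations.
# Assumptions: FP16 = 2 bytes/weight, Q4_K_M ~ 4.5 bits/weight on average,
# plus a flat allowance for KV cache and runtime buffers.

BYTES_PER_PARAM = {
    "fp16": 2.0,
    "q8_0": 1.0,
    "q4_K_M": 0.56,  # ~4.5 bits per weight
}

def estimate_vram_gb(n_params_billions: float, quant: str,
                     overhead_gb: float = 1.5) -> float:
    """Weights-only estimate plus a fixed overhead; hypothetical helper."""
    weights_gb = n_params_billions * 1e9 * BYTES_PER_PARAM[quant] / 1024**3
    return weights_gb + overhead_gb

for quant in ("fp16", "q4_K_M"):
    print(f"8B @ {quant}: ~{estimate_vram_gb(8, quant):.1f} GB")
# FP16 comes out around 16 GB, matching the quoted advice;
# Q4_K_M fits in a few GB but trades away some output quality.
```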