Results 9 comments of Evan

Hi, thank you for your help! However, after following your suggestions, I encountered a runtime error when executing `model.save_pretrained_gguf()`:

```
/bin/sh: 1: python: not found
Unsloth GGUF:hf-to-gguf:Loading model: gemma-3
Unsloth...
```
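For what it's worth, `/bin/sh: 1: python: not found` usually just means the GGUF export shells out to `python`, but only `python3` is on the PATH (common on Debian/Ubuntu and WSL). A quick sketch to check which interpreter names actually resolve:

```python
import shutil

# See which interpreter names the shell can resolve; the GGUF export
# fails with "python: not found" when only `python3` exists on PATH.
for name in ("python", "python3"):
    path = shutil.which(name)
    print(f"{name}: {path or 'NOT FOUND'}")

# If `python` is missing, `sudo apt install python-is-python3` (Debian/
# Ubuntu) adds the alias, or symlink it yourself:
#   ln -s "$(command -v python3)" ~/.local/bin/python
```

The `apt` package name and symlink path above are standard Debian/Ubuntu conventions, not something specific to Unsloth.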

Thank you all for your help, but I'm still running into an error.

```
model.save_pretrained_merged("gemma-3_merged", tokenizer)
!git clone https://github.com/ggml-org/llama.cpp.git
!python ./llama.cpp/convert_hf_to_gguf.py ./gemma-3_merged --outfile ./gemma3-4b-it_q8_0.gguf --outtype q8_0
```

```
INFO:hf-to-gguf:Loading model:...
```
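One thing that helped me debug this: sanity-check the merged export directory before handing it to the converter, so a half-written export fails fast instead of deep inside `convert_hf_to_gguf.py`. A sketch (the required file names are an assumption based on the standard Hugging Face layout; adjust for your model):

```python
import os

def check_merged_export(merged_dir: str):
    """Return (missing_files, has_weights) for a merged HF export directory."""
    # Files a complete merged export normally contains (assumption:
    # standard Hugging Face layout).
    required = ["config.json", "tokenizer_config.json"]
    present = set(os.listdir(merged_dir)) if os.path.isdir(merged_dir) else set()
    missing = [f for f in required if f not in present]
    # save_pretrained_merged writes safetensors shards by default;
    # older exports may use .bin.
    has_weights = any(f.endswith((".safetensors", ".bin")) for f in present)
    return missing, has_weights
```

E.g. run `check_merged_export("gemma-3_merged")` right after the save call and abort the conversion if anything is missing.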

I am currently fine-tuning Gemma-3 using the Unsloth library and encountered an error when saving the model. The error occurs during the model.save_pretrained_merged() function call, and the error traceback indicates...

Thank you for your suggestions! I followed the advice to downgrade from torch 2.6.0 to torch 2.5.1, but I’m still encountering the same out-of-memory issue when attempting to save the...
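In case it helps anyone else hitting the same OOM: before calling `save_pretrained_merged()`, I try to drop every reference to the trainer/optimizer and flush PyTorch's CUDA cache. Not a guaranteed fix, just a sketch of freeing whatever VRAM can be reclaimed first:

```python
import gc
import importlib.util

def free_accelerator_memory() -> bool:
    """Release cached GPU memory before a heavy merge/save step (a sketch).

    Returns True if a CUDA cache flush was attempted, False otherwise.
    """
    # Drop dangling Python references first (e.g. after `del trainer`).
    gc.collect()
    if importlib.util.find_spec("torch") is None:
        return False
    import torch
    if not torch.cuda.is_available():
        return False
    torch.cuda.empty_cache()   # return cached allocator blocks to the driver
    torch.cuda.ipc_collect()   # clean up CUDA IPC handles
    return True
```

Calling this right before the save at least rules out cached-but-unused allocations as the cause of the OOM.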

> Out of curiosity, I tested this on the latest version using torch==2.5.1 and had no issues saving (if you load nvtop or similar, you can typically see the VRAM...

Thanks! I'll try adjusting that and see if it helps. As for the code, I followed the Colab example from the README: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma3_(4B).ipynb — I only modified it to use my own...

> Also getting this error, same story. Llama 8B works fine though.

I have a temporary solution for now. I ran the same code on Google Colab, but the model.save_pretrained_merged...

I'm starting to suspect that the OOM issue might be caused by running in WSL. Has anyone successfully run it in a similar environment?
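To make bug reports about this easier to compare, here's a best-effort check for whether the code is running under WSL (WSL kernels report "microsoft" in `/proc/version`):

```python
import platform

def running_in_wsl() -> bool:
    """Best-effort WSL detection: WSL kernels report 'microsoft' in /proc/version."""
    if platform.system() != "Linux":
        return False
    try:
        with open("/proc/version") as f:
            return "microsoft" in f.read().lower()
    except OSError:
        return False
```

If others hitting the OOM could run this and report the result, we could confirm or rule out WSL as the common factor.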