unsloth
Kaggle tweaks
I was getting this on Kaggle:

```
make: *** No rule to make target 'make'. Stop.
make: *** Waiting for unfinished jobs....
```
I'm not sure you can even use !cd on Kaggle (try running !pwd afterwards to check), or chain commands like !cd && make.
However, compiling this way worked correctly:

```
!git clone https://github.com/ggerganov/llama.cpp
!make clean -C llama.cpp
!make all -j -C llama.cpp
```
Hopefully this is the right way to do it from Python as well; give it a test.
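For what it's worth, the reason !cd doesn't stick in a notebook is that each ! command runs in its own subshell (IPython's %cd magic does persist, by contrast). A sketch of the same clone-and-build steps from Python, using subprocess with the cwd= argument to sidestep the directory problem entirely; the run and build_llama_cpp helper names here are just for illustration:

```python
import subprocess

def run(cmd, cwd=None):
    # Run one command, raising if it fails (mirrors a notebook `!cmd` line).
    # `cwd=` scopes the working directory per command, so no `cd` is needed.
    return subprocess.run(cmd, cwd=cwd, check=True)

def build_llama_cpp(repo_dir="llama.cpp"):
    # Same three steps as the shell version above.
    run(["git", "clone", "https://github.com/ggerganov/llama.cpp"])
    run(["make", "clean"], cwd=repo_dir)
    run(["make", "all", "-j"], cwd=repo_dir)
```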
@h-a-s-k Oh my I did not notice Kaggle had this issue whoops - thanks let me check first and get back to you! Thanks again!
@h-a-s-k Thanks for the PR again! Apologies on the delay!
I had to edit the Kaggle notebook drastically - due to its 20GB disk limit, it seems one can only save to 16-bit via model.save_pretrained_merged; conversion to GGUF is probably not possible, even with q8. 16-bit 7B models generally take around 16GB, so an 8GB q8 file might just fit, but it's unlikely.
I added your suggestions, but it seems Kaggle will require a two-step procedure - i.e. merge to 16-bit, then run llama.cpp manually via https://github.com/unslothai/unsloth/wiki#manually-saving-to-gguf
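A rough sketch of that two-step procedure, assuming a loaded unsloth model and tokenizer; note the converter script name is an assumption that varies across llama.cpp checkouts (older ones ship convert.py, newer ones convert_hf_to_gguf.py), so check your clone:

```python
import subprocess

def merge_and_convert(model, tokenizer, out_dir="merged_16bit"):
    # Step 1: merge LoRA weights into a 16-bit checkpoint on disk
    # (the unsloth call mentioned above; this is the part that fits
    # within Kaggle's 20GB limit for a 7B model).
    model.save_pretrained_merged(out_dir, tokenizer, save_method="merged_16bit")
    # Step 2: run llama.cpp's converter manually. The q8_0 output may
    # still exceed the remaining disk space, per the discussion above.
    subprocess.run(
        ["python", "llama.cpp/convert_hf_to_gguf.py", out_dir,
         "--outfile", "model-q8_0.gguf", "--outtype", "q8_0"],
        check=True,
    )
```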
Again thanks for the PR - I'll credit you in the next release!