jan
feat: add a simple way to convert Hugging Face model to GGUF
Note: For more frequent updates, join Jan's Discord server and see the roadmap thread "Jan can convert Hugging Face models to GGUF".
Describe Your Changes
- Add a way to convert Hugging Face models to GGUF in Jan
- (Hopefully) generate decent GGUF metadata from the Hugging Face model card
Fixes Issues
- None
Self Checklist
- [ ] Added relevant comments, esp in complex areas
- TODO
- [ ] Updated docs (for bug fixes / features)
- TODO
- [ ] Created issues for follow-up changes or refactoring needed
TODO (a section I added)
- All of the above
- Improve the UI (we really need it; the current UI I made is rough)
- Alert the user when Python is not installed (see "Things to know" below)
- Build and bundle llama.cpp's quantize binary (as we do for nitro)
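The "alert when the user doesn't have Python" item could be sketched roughly like this. This is a minimal sketch, not Jan's actual code; `findInterpreter` and the candidate list are hypothetical names.

```typescript
import { spawnSync } from "child_process";

// Return the first command in `candidates` that runs successfully,
// or null if none of them are available on PATH.
// (Hypothetical helper; not Jan's actual implementation.)
export function findInterpreter(candidates: string[]): string | null {
  for (const cmd of candidates) {
    const result = spawnSync(cmd, ["--version"], { encoding: "utf8" });
    if (!result.error && result.status === 0) {
      return cmd;
    }
  }
  return null;
}

// Example: prefer python3, fall back to python, and alert if neither exists.
// const python = findInterpreter(["python3", "python"]);
// if (!python) { /* show "Python is required" alert to the user */ }
```

Trying each candidate with `--version` via `spawnSync` avoids shelling out to `which`, which behaves differently across Windows, macOS, and Linux.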
Things to know (another section I added)
- The user must have Python installed locally
- Long explanation: This change uses llama.cpp's convert script to convert models, and since that script is written in Python, a local Python interpreter is required to run it.
- For easier debugging, the model directory is not removed when conversion fails; the cleanup code is commented out and should be re-enabled before publishing.
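The flow described above, converting with llama.cpp's Python convert script and then quantizing with the quantize binary, could be wired up roughly as below. The script name, flags, and quantization type are assumptions based on llama.cpp's tooling, not a description of Jan's actual implementation.

```typescript
import { spawn } from "child_process";

// Build the argv for llama.cpp's convert script.
// (Script name and flags are assumptions based on llama.cpp's convert.py.)
export function convertArgs(modelDir: string, outFile: string): string[] {
  return ["convert.py", modelDir, "--outfile", outFile, "--outtype", "f16"];
}

// Build the argv for llama.cpp's quantize binary.
// (The default quantization type here is an assumption.)
export function quantizeArgs(inFile: string, outFile: string, type = "q4_K_M"): string[] {
  return [inFile, outFile, type];
}

// Hypothetical wiring of the two steps:
// spawn("python3", convertArgs("./my-hf-model", "./model-f16.gguf"), { stdio: "inherit" })
//   .on("exit", (code) => {
//     if (code === 0) {
//       spawn("./quantize", quantizeArgs("./model-f16.gguf", "./model-q4.gguf"),
//             { stdio: "inherit" });
//     }
//     // On failure the model directory is left in place for debugging (see above).
//   });
```

Keeping the argv construction in small pure functions makes the command lines easy to log and test, independent of actually spawning the processes.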