Community contribution: Adding GGUF support for more architectures
Feature request
Recently, we have added the ability to load gguf files within transformers. The goal is to give users the possibility to further train/fine-tune their gguf models.
Workflow
1) Load the gguf file in transformers: we dequantize the weights to fp32, then we load the weights to be used with PyTorch.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF"
filename = "tinyllama-1.1b-chat-v1.0.Q6_K.gguf"

tokenizer = AutoTokenizer.from_pretrained(model_id, gguf_file=filename)
model = AutoModelForCausalLM.from_pretrained(model_id, gguf_file=filename)
```
2) Train/fine-tune the model (a minimal sketch is given after this workflow).
3) Convert the model back to gguf to use it in the ggml ecosystem, using the convert_hf_to_gguf script, or the gguf-my-repo space if you pushed your model to the Hub:

```python
tokenizer.save_pretrained('directory')
model.save_pretrained('directory')

!python ${path_to_llama_cpp}/convert-hf-to-gguf.py ${directory}
```
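As a rough illustration of step 2, here is a minimal fine-tuning sketch using the `Trainer` API. It assumes `model` and `tokenizer` were loaded from the gguf file as in step 1; the dataset, hyperparameters, and pad-token workaround are placeholder choices, not part of the original workflow:

```python
# Minimal fine-tuning sketch (illustrative only). Assumes `model` and
# `tokenizer` come from step 1; dataset and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import DataCollatorForLanguageModeling, Trainer, TrainingArguments

# Llama-style tokenizers often ship without a pad token; reuse EOS for padding.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Placeholder dataset: any causal-LM text corpus works here.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="checkpoints",
        num_train_epochs=1,
        per_device_train_batch_size=1,
    ),
    train_dataset=tokenized,
    # mlm=False gives the standard causal-LM objective (labels = shifted inputs).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```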
Let's try to add GGUF support for more architectures! Currently supported architectures are
- [x] Llama
- [x] Mistral
- [x] Qwen2
It would be great to add support for more architectures, such as:
- [x] Phi3 https://github.com/huggingface/transformers/pull/31844
- [x] Qwen2Moe https://github.com/huggingface/transformers/pull/33264
- [x] Gemma2
- [x] T5 https://github.com/huggingface/transformers/pull/33389
- [x] Falcon https://github.com/huggingface/transformers/pull/33437
- [x] Bloom https://github.com/huggingface/transformers/pull/33473
- [x] StableLM https://github.com/huggingface/transformers/pull/33793
- [x] gpt2 https://github.com/huggingface/transformers/pull/34044
- [x] starcoder2 https://github.com/huggingface/transformers/pull/34094
- [ ] llama4
- [ ] Deepseekv3
- [ ] c4ai-command-a
... and many more (feel free to suggest more architectures! The model needs to be integrated in transformers).
Adding this feature requires following the same protocol as in this PR:
- Update `GGUF_TENSOR_MAPPING` and `GGUF_CONFIG_MAPPING` in order to map the tensors/config of the gguf file to their transformers counterparts (a sketch is given after this list).
- Create a `GGUFXXXConverter(XXXConverter)` class to convert the gguf tokenizer to a transformers one.
- Write tests.
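To make the protocol concrete, here is a rough sketch of the shape these additions take, for a hypothetical `xxx` architecture. The gguf key names below are placeholders, not read from a real gguf file, and `XXXConverter` stands for the architecture's existing slow-to-fast tokenizer converter; the exact file layout may have shifted across versions:

```python
# Illustrative sketch for a hypothetical "xxx" architecture. The gguf keys
# below are placeholders: read the real ones from the gguf files you target.
# Shown as assignments for brevity; in the source these live in dict literals
# (src/transformers/integrations/ggml.py at the time of the original PR).

GGUF_TENSOR_MAPPING["xxx"] = {
    "token_embd": "model.embed_tokens",  # gguf tensor prefix -> transformers weight name
    "blk": "model.layers",
    "output_norm": "model.norm",
    "output": "lm_head",
}

GGUF_CONFIG_MAPPING["xxx"] = {
    "context_length": "max_position_embeddings",  # gguf config key -> transformers config key
    "block_count": "num_hidden_layers",
    "embedding_length": "hidden_size",
    "attention.head_count": "num_attention_heads",
}

# Tokenizer conversion: subclass the architecture's existing slow->fast
# converter and feed it the tokenizer fields parsed from the gguf file,
# mirroring what GGUFLlamaConverter does in the original PR.
class GGUFXXXConverter(XXXConverter):
    def __init__(self, tokenizer_dict):
        # GGUFTokenizerSkeleton wraps the raw gguf tokenizer dict so the parent
        # converter can treat it like a slow-tokenizer proto.
        self.proto = GGUFTokenizerSkeleton(tokenizer_dict)
        self.original_tokenizer = self.proto
```

For the tests, the existing gguf tests can serve as a template: roughly, they load a small quantized checkpoint of the architecture and compare the generated text against a reference.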
If you are interested in taking up the challenge, comment below with the architecture name you want to integrate and open a PR!
Once you open a PR, feel free to ping @SunMarc @LysandreJik @ArthurZucker for a review!
Motivation
Support for more gguf models
Your contribution
Reviewing PRs and possibly adding support for more models
@SunMarc I am going to take Qwen2Moe
@SunMarc I want to take Gemma2
@SunMarc May I suggest & take T5? It seems the GGUF version of the T5 encoder is widely used alongside FLUX.
@SunMarc Hello! Unless someone else is working on this model already, may I take MiniCPM-V?
> @SunMarc May I suggest & take T5? It seems the GGUF version of the T5 encoder is widely used alongside FLUX.

Added @junejae!
> @SunMarc Hello! Unless someone else is working on this model already, may I take MiniCPM-V?

Hi @010kim, thanks for the interest! MiniCPM-V relies on `trust_remote_code=True`, so I don't think we can add gguf support for this model for now. We don't want to have code in transformers that relies on modeling files that are on the Hub. I will think about extending `trust_remote_code=True` to gguf support, so that the author of the model can add it themselves!
> Hi @010kim, thanks for the interest! MiniCPM-V relies on `trust_remote_code=True`, so I don't think we can add gguf support for this model for now. We don't want to have code in transformers that relies on modeling files that are on the Hub. I will think about extending `trust_remote_code=True` to gguf support, so that the author of the model can add it themselves!
@SunMarc Thank you so much for your response. It also makes sense that the author should work on it. What about Cohere? Can I take it?
Hi @SunMarc 👋🏻 May I work on the CLIP model if nobody is working on it?
Hey @jungnerd! The model you choose needs to be in the conversion script from hf to gguf. See the script here.
Hey @SunMarc 🙋♂️ I'd like to try my hand at contributing to this issue; can I take Falcon? 🦅
Hi @SunMarc, I'll take Bloom if nobody is working on it.
Hi @SunMarc, I'd like to handle the work related to Codestral :)
> Hey @jungnerd! The model you choose needs to be in the conversion script from hf to gguf. See the script here.

There is a conversion script for the CLIP model (clip.cpp). Can I use this to contribute?
Hi @SunMarc, I'm interested in this issue. Would it be okay if I worked on the BLIP model?
> Hi @SunMarc, I'm interested in this issue. Would it be okay if I worked on the BLIP model?
Hi @SunMarc, I'd like to work on the BLIP model, but after researching, I found that it might be challenging due to the Vision model structure. Would it be alright if I switched to working on the Smol model instead?
Hey @SunMarc 🤗 Gonna continue with granite 🪨
@SunMarc I checked the Smol model and confirmed that it's already functioning well without needing any further work. The issue mentions that supporting the Smol model would be beneficial, but is there any specific work required?
If not, I'll proceed with switching to the dbrx model.
> @SunMarc I checked the Smol model and confirmed that it's already functioning well without needing any further work. The issue mentions that supporting the Smol model would be beneficial, but is there any specific work required? If not, I'll proceed with switching to the dbrx model.
Oh indeed, this is because it uses the Llama architecture.
Hi @SunMarc! I am going to start working on StableLM model
Is any work being done on Gemma2? If not, I would like to proceed with it! @SunMarc @KingNish24
Hi @SunMarc! I suppose GPT2 gguf is not supported yet; if that's the case, I'll take it.
> Hi @SunMarc, I'd like to handle the work related to Codestral :)

Codestral's tokenizer is just the Llama tokenizer, so it looks like there is no Codestral-specific code to handle.
> @SunMarc Thank you so much for your response. It also makes sense that the author should work on it. What about Cohere? Can I take it?

I went through the code and was able to load the Cohere gguf model, but could not load the tokenizer. This is because the Cohere slow tokenizer is not implemented in Hugging Face (only the fast tokenizer is available for Cohere). Is there a workaround for this? @SunMarc
Hey @SunMarc! I'll take Starcoder2 as my next model.
Hi @SunMarc! I am going to start working on Mamba
Are you still working on Gemma2, @yijun-lee @KingNish24? If not, is it possible for me to try working on it? Thank you!
> Are you still working on Gemma2, @yijun-lee @KingNish24? If not, is it possible for me to try working on it? Thank you!
I’m running behind schedule, but I’m making progress! I’ll handle it.
> I’m running behind schedule, but I’m making progress! I’ll handle it.
Glad to know! Then is it possible for me to try working on Nemotron? @SunMarc
Could you please check my PR, @SunMarc? Thank you! Add Nemotron GGUF Loading Support
Anyone working on supporting DeepSeek V3?
DeepSeek V3 is not supported yet in transformers, but it will be soon with this PR.