
I tried to convert the model to GGUF via llama.cpp

Dovegs opened this issue 7 months ago • 0 comments

I converted the model to the Hugging Face format. Is it possible to convert it to GGUF and quantize it?

input:

python convert_hf_to_gguf.py ../nllb-200-tf --outfile nllb-200.gguf --outtype f16

output:

ERROR:hf-to-gguf:Model M2M100ForConditionalGeneration is not supported
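For context: the conversion script dispatches on the "architectures" field of the checkpoint's config.json, and this error indicates that no converter is registered for M2M100ForConditionalGeneration (the architecture NLLB-200 uses). A minimal sketch of how that declared architecture can be read from a config, using illustrative config contents (the file name and values below are assumptions, not taken from the issue):

```python
import json

def declared_architectures(config_text: str) -> list[str]:
    """Return the architectures listed in a Hugging Face config.json string."""
    config = json.loads(config_text)
    return config.get("architectures", [])

# Illustrative config.json contents for an NLLB-200 checkpoint (assumption).
sample = '{"architectures": ["M2M100ForConditionalGeneration"], "model_type": "m2m_100"}'
print(declared_architectures(sample))  # ['M2M100ForConditionalGeneration']
```

If the printed architecture is not among those the conversion script supports, the script fails with an error like the one above.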

Dovegs · Apr 24 '25