
Misc. bug: Parameter `--model-name` is being ignored in `convert_hf_to_gguf.py` code

Open · hdnh2006 opened this issue 11 months ago · 1 comment

Name and Version

./llama-cli --version
version: 4707 (bd6e55bf)
built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu

Operating systems

Linux

Which llama.cpp modules do you know to be affected?

Python/Bash scripts

Command line

python convert_hf_to_gguf.py models/Llama3-OpenBioLLM-8B/ --model-name Llama3-OpenBioLLM-8B-F16.gguf

Problem description & steps to reproduce

Hello! It looks like convert_hf_to_gguf.py ignores the --model-name parameter when converting Llama-based models. I am trying to convert this model by running the following:

python convert_hf_to_gguf.py models/Llama3-OpenBioLLM-8B/ --model-name Llama3-OpenBioLLM-8B-F16.gguf

And no matter what I do, the filename is always Meta-Llama-3-8B-F16.gguf.

Maybe your code detects that I am converting a Llama-3-8B-based model and assigns the model name automatically.
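One way to double-check whether the name is only being dropped from the filename or from the file metadata (general.name) as well would be to dump the converted file, for example with the gguf_dump.py script that ships in gguf-py/scripts (the output path below is just illustrative):

python gguf-py/scripts/gguf_dump.py models/Llama3-OpenBioLLM-8B/Meta-Llama-3-8B-F16.gguf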

Could you please check this?

Thanks in advance.

First Bad Commit

No response

Relevant log output

Everything runs fine, but the model name should be set.

hdnh2006 · Feb 13 '25

> And no matter what I do, the filename is always Meta-Llama-3-8B-F16.gguf.

You can set the name of the output file using the --outfile option:

...
options:
  -h, --help            show this help message and exit
  --vocab-only          extract only the vocab
  --outfile OUTFILE     path to write to; default: based on input. {ftype} will be replaced by the outtype.
...
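For example, something like this should produce the filename you want (illustrative, reusing the model directory from your command):

python convert_hf_to_gguf.py models/Llama3-OpenBioLLM-8B/ --outfile Llama3-OpenBioLLM-8B-F16.gguf

You can also include {ftype} in the name (e.g. Llama3-OpenBioLLM-8B-{ftype}.gguf) and it will be replaced by the output type.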

danbev · Feb 13 '25