
Bug: exception raised while raising another exception in convert_llama_ggml_to_gguf script

Open farbodbj opened this issue 1 year ago • 0 comments

What happened?

When trying to convert this GGML model from Hugging Face to GGUF, the script hit an error in this function, but while raising the ValueError it encountered a second exception. This is how I called the Python script:

python convert_llama_ggml_to_gguf.py --input models/bigtrans-13b.ggmlv3.q6_K --output q6_K

As the traceback shows, an argument of the wrong type (an int instead of a GGMLQuantizationType) was passed to this function. I fixed this issue in #8928.
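The failure mode can be reproduced in isolation. The sketch below uses a stand-in enum (not the real gguf.GGMLQuantizationType, whose members and values are assumptions here) to show why a raw int breaks the f-string in the error path, and how coercing the value to the enum restores the .name attribute:

```python
from enum import IntEnum

# Stand-in for gguf.GGMLQuantizationType (hypothetical subset; the real
# enum defines many more quantization formats).
class QuantType(IntEnum):
    Q6_K = 14

def describe(quant_type):
    # Mirrors the failing f-string: a plain int has no .name attribute,
    # so formatting the error message itself raises AttributeError.
    return f"{quant_type.name} type size"

try:
    describe(14)  # raw int, as the conversion script passed it
except AttributeError as e:
    print(e)  # 'int' object has no attribute 'name'

# Coercing the int to the enum before formatting avoids the crash.
print(describe(QuantType(14)))  # Q6_K type size
```

Because GGMLQuantizationType is an IntEnum, wrapping the incoming int in the enum constructor is enough to recover the member and its .name.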

Name and Version

version: 3535 (1e6f6554)

What operating system are you seeing the problem on?

Linux

Relevant log output

line 22, in quant_shape_from_byte_shape
    raise ValueError(f"Quantized tensor bytes per row ({shape[-1]}) is not a multiple of {quant_type.name} type size ({type_size})")
                                                                                          ^^^^^^^^^^^^^^^
AttributeError: 'int' object has no attribute 'name'

farbodbj commented on Aug 08 '24 10:08