
gguf_convert_endian.py: implement byteswapping for q4_k and q6_k

Open — AlekseiNikiforovIBM opened this issue 1 month ago • 1 comment

With these changes, the Llama 3.2 model can be converted to big endian.
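As a rough sketch of what such byteswapping involves: in ggml, Q4_K and Q6_K data is stored as fixed-size blocks of 256 quantized values, where the packed quants and per-group scales are single bytes (endian-neutral) and only the float16 super-block scale fields are multi-byte and need swapping. The block layouts and sizes below (210 bytes for Q6_K, 144 bytes for Q4_K) are taken from ggml's `block_q6_K`/`block_q4_K` struct definitions; the function names are illustrative, not the actual names used in `gguf_convert_endian.py`.

```python
import numpy as np

QK_K = 256
# Assumed layouts, following ggml's C structs:
#   block_q6_K: ql[128] u8, qh[64] u8, scales[16] i8, d f16  -> 210 bytes
#   block_q4_K: d f16, dmin f16, scales[12] u8, qs[128] u8   -> 144 bytes
Q6_K_BLOCK_SIZE = QK_K // 2 + QK_K // 4 + QK_K // 16 + 2  # 210
Q4_K_BLOCK_SIZE = 2 + 2 + 12 + QK_K // 2                  # 144


def byteswap_q6_k(data: np.ndarray) -> np.ndarray:
    """Byteswap every Q6_K block in a flat uint8 buffer in place.

    Only the trailing float16 scale `d` spans multiple bytes; the
    quantized values and int8 group scales are left untouched.
    """
    blocks = data.reshape(-1, Q6_K_BLOCK_SIZE)
    blocks[:, -2:] = blocks[:, -2:][:, ::-1].copy()  # swap the 2 bytes of d
    return data


def byteswap_q4_k(data: np.ndarray) -> np.ndarray:
    """Byteswap every Q4_K block in place: the two leading float16
    fields (d and dmin) each get their 2 bytes reversed."""
    blocks = data.reshape(-1, Q4_K_BLOCK_SIZE)
    blocks[:, 0:2] = blocks[:, 0:2][:, ::-1].copy()  # d
    blocks[:, 2:4] = blocks[:, 2:4][:, ::-1].copy()  # dmin
    return data
```

Since the same swap converts little-endian to big-endian and back, running either function twice returns the original bytes.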

AlekseiNikiforovIBM — Jan 22 '25 13:01