fast-llama
Failed to load model
./main -f gguf -c ../text-generation-webui/models/beagle14-7b.Q5_K_M.gguf
ERROR: [./src/model_loaders/gguf_loader.cpp:263] [load_gguf()] Unsupported file type:17
Failed to load model
I think you need to use Q8, because it does not support Q5 yet.
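For context, the "Unsupported file type:17" in the error maps to a quantization scheme: GGUF stores a `general.file_type` code following llama.cpp's `LLAMA_FTYPE` enum, where 17 is `Q5_K_M` — matching the `.Q5_K_M.gguf` filename. A minimal sketch for decoding such codes (values copied from llama.cpp's `llama.h`; the subset and helper name are my own, not part of fast-llama):

```python
# Subset of llama.cpp's LLAMA_FTYPE enum, which GGUF uses for
# general.file_type. Not exhaustive; values from llama.h.
GGUF_FILE_TYPES = {
    0: "ALL_F32",
    1: "MOSTLY_F16",
    2: "MOSTLY_Q4_0",
    3: "MOSTLY_Q4_1",
    7: "MOSTLY_Q8_0",
    8: "MOSTLY_Q5_0",
    9: "MOSTLY_Q5_1",
    15: "MOSTLY_Q4_K_M",
    16: "MOSTLY_Q5_K_S",
    17: "MOSTLY_Q5_K_M",
    18: "MOSTLY_Q6_K",
}

def describe_file_type(code: int) -> str:
    """Translate a GGUF file-type code into a readable quantization name."""
    return GGUF_FILE_TYPES.get(code, f"unknown ({code})")

print(describe_file_type(17))  # the code from the loader error above
```

So the loader is rejecting the Q5_K_M quantization itself, which is consistent with re-quantizing to Q8_0 as a workaround.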
But I have a problem with the tokenizer setting for Mistral 7B; I'm not sure how to do that either.