Badis

14 issues opened by Badis

Hello,

https://huggingface.co/Qwen/Qwen1.5-14B-Chat-GGUF doesn't seem to work with llamacpp_hf (it works fine with vanilla llamacpp, though).

### Reproduction

Try Qwen1.5 GGUF on llamacpp_hf and use the ChatML prompt format...
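For reference, this is a minimal sketch of the ChatML template that Qwen1.5 expects; the system message and user text below are placeholders, not taken from the issue.

```python
# Minimal sketch of the ChatML format used by Qwen1.5; the system and user
# messages are placeholder text, not content from the issue above.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Hello, who are you?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```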

bug

Hello, I noticed something interesting: when you start the very first generation of a prompt (meaning the model is doing the prompt processing calculation), you'll get different logits compared...
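To illustrate the kind of comparison this describes, the sketch below evaluates a prompt once in a single batch (prompt processing) and once token by token with a KV cache, then compares the logits at the last position. It is written against Hugging Face transformers with a placeholder model purely as an illustration of the check, not against the webui's llamacpp_hf code path.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model; any causal LM illustrates the check.
name = "gpt2"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

ids = tok("The quick brown fox jumps over the lazy dog", return_tensors="pt").input_ids

with torch.no_grad():
    # Pass 1: the whole prompt in one batch (prompt processing).
    batch_logits = model(ids).logits[0, -1]

    # Pass 2: the same tokens fed one at a time, reusing the KV cache.
    past = None
    for i in range(ids.shape[1]):
        out = model(ids[:, i : i + 1], past_key_values=past, use_cache=True)
        past = out.past_key_values
    step_logits = out.logits[0, -1]

# In exact arithmetic the two passes would match; in practice batched and
# incremental kernels can differ slightly in floating point.
print((batch_logits - step_logits).abs().max())
```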

bug

Hello,

The grammar feature has a problem with this particular model: https://huggingface.co/Envoid/SensualNousInstructDARETIES-CATA-LimaRP-ZlossDT-SLERP-8x7B

I used this quant: https://huggingface.co/Artefact2/SensualNousInstructDARETIES-CATA-LimaRP-ZlossDT-SLERP-8x7B-GGUF/blob/main/SensualNousInstructDARETIES-CATA-LimaRP-ZlossDT-SLERP-8x7B-Q5_K_S.gguf

### Logs

```shell
Traceback (most recent call last):
  File "D:\text-generation-webui\modules\callbacks.py", line 61, in...
```
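For context, the grammar feature consumes a GBNF grammar string. The toy grammar below is an assumed placeholder, not the grammar from the issue; llama-cpp-python's LlamaGrammar is used here only as a standalone way to check that a GBNF string parses, outside the webui.

```python
from llama_cpp import LlamaGrammar

# Toy GBNF grammar (placeholder), not the one used in the issue above.
grammar_text = r'''
root   ::= answer
answer ::= "yes" | "no"
'''

# Parse the grammar outside the webui to confirm the GBNF itself is valid.
grammar = LlamaGrammar.from_string(grammar_text)
```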

bug

Hey, I tried using this fork and realized that the speed was really slow for some of the models I was using, such as https://huggingface.co/reeducator/vicuna-13b-cocktail/tree/main. For vicuna-cocktail, for example, I get something...