malv-c

Results: 5 issues by malv-c

Both llama.cpp, built with "% cmake .. -DLLAMA_CUBLAS=ON -DLLAMA_CUDA_DMMV_F16=ON -DLLAMA_CUDA_DMMV_Y=16", and koboldcpp, built with "% cmake .. -DLLAMA_CUBLAS=1", give: ggml.h(218): error: identifier "__fp16" is undefined

On Jetson Orin AGX your framework is not buildable.

Context changed to "assembly on linux" --[ Trial 0 ]----------------- assembly on linux-amd64.asm.d. This assembler uses the amd64 compiler to build the assembly. On the command line: mvn clean install In...

llama-cpp/ggml.h(218): error: identifier "__fp16" is undefined. I request exllama support anyway (the best loader now).

issue:bug
os:linux

File "/home/void/.local/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 261, in hf_raise_for_status
    response.raise_for_status()
File "/home/void/.local/lib/python3.11/site-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/api/models/models/llama-2-7b-chat.ggmlv3.q8_0.bin/revision/main
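The URL in the 401 response ("api/models/models/llama-2-7b-chat.ggmlv3.q8_0.bin") suggests a local file path was passed where the Hub client expected a "user/repo" id, so the API tried to resolve a nonexistent repository. A hypothetical stdlib-only check that distinguishes the two cases before calling the Hub; the function name and heuristics are assumptions for illustration:

```python
import os

def resolve_model_source(arg):
    """Classify a model argument as a local file path or a Hub repo id.

    Hypothetical helper: passing a filesystem path such as
    'models/llama-2-7b-chat.ggmlv3.q8_0.bin' straight to the Hugging
    Face Hub API makes it look up a repo literally named
    'models/llama-2-...', which fails with 401/404.
    """
    if os.path.isfile(arg):
        # The file exists locally; no Hub call is needed at all.
        return ("local", arg)
    if "/" in arg and arg.endswith((".bin", ".gguf")):
        # Looks like a model file path that does not exist on disk:
        # likely the cause of the 401 in the traceback above.
        return ("missing-file", arg)
    # Otherwise treat it as a 'user/repo' id to download from the Hub.
    return ("hub-repo", arg)
```

With this split, the loader could download via the Hub only for the "hub-repo" case and print a clear "file not found" message for the "missing-file" case instead of surfacing a confusing 401.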