malv-c
Both llama.cpp with `cmake .. -DLLAMA_CUBLAS=ON -DLLAMA_CUDA_DMMV_F16=ON -DLLAMA_CUDA_DMMV_Y=16` and koboldcpp with `cmake .. -DLLAMA_CUBLAS=1` fail with: `ggml.h(218): error: identifier "__fp16" is undefined`
On a Jetson Orin AGX your framework is not buildable.
`llama-cpp/ggml.h(218): error: identifier "__fp16" is undefined` — I request exllama support anyway (the best loader right now).
      File "/home/void/.local/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 261, in hf_raise_for_status
        response.raise_for_status()
      File "/home/void/.local/lib/python3.11/site-packages/requests/models.py", line 1021, in raise_for_status
        raise HTTPError(http_error_msg, response=self)
    requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/api/models/models/llama-2-7b-chat.ggmlv3.q8_0.bin/revision/main
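The doubled `models/models/` in that URL suggests a local GGML filename was passed where the Hub expects a `namespace/name` repo id; the client then asks the API about a repo that does not exist, which surfaces as a 401/404. A sketch of the URL shape involved (the helper and the example repo name are illustrative, not part of the original report — verify the repo exists before using it):

```python
def hf_model_api_url(repo_id: str) -> str:
    # The Hub model API lives under /api/models/<repo_id>/revision/<rev>,
    # so whatever string is passed as repo_id is spliced in verbatim.
    return f"https://huggingface.co/api/models/{repo_id}/revision/main"

# Wrong: a local file path used as repo_id reproduces the doubled path from the 401 above.
bad = hf_model_api_url("models/llama-2-7b-chat.ggmlv3.q8_0.bin")
# Right shape: a namespace/name repo id (example repo, shown only for the URL shape).
good = hf_model_api_url("TheBloke/Llama-2-7B-Chat-GGML")
print(bad)
print(good)
```

If the repo is gated or private, a valid token is also required; but fixing the repo id is the first step here.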
### What is the issue?

    % curl -fsSL https://ollama.com/install.sh | sh
    >>> Installing ollama to /usr/local
    Password:
    >>> Downloading Linux amd64 bundle
    ######################################################################## 100.0%
    >>> Downloading Linux ROCm amd64 bundle...
Should robotics include auto-calculated mechanics for accurate planned motion of robots/vehicles? For example, I plan open-source projects in all those fields, with mechanical parts designed in OpenSCAD and...
What is the ollama api_key? That would enable many models/producers, e.g. the Nexa SDK.
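As far as I know, Ollama's native REST API (default `http://localhost:11434`) has no API key at all; only OpenAI-compatible client libraries ask for one, and any placeholder string works there. A minimal stdlib sketch that builds (without sending) a request against the native endpoint, assuming a default local install:

```python
import json
import urllib.request

# Sketch: Ollama's native /api/generate endpoint takes plain JSON over HTTP.
# Note there is no Authorization header and no api_key field in the body.
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({"model": "llama3", "prompt": "hi"}).encode(),
    headers={"Content-Type": "application/json"},
)
print(req.full_url)
# Actually sending it requires a running server:
#   urllib.request.urlopen(req)
```

OpenAI-compatible clients pointed at `http://localhost:11434/v1` just need a dummy `api_key` such as `"ollama"`, since the server ignores it.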