Eval bug: Does llama.cpp support the Intel AMX instructions, and how can they be enabled?
Name and Version
llama-cli
Operating systems
Linux
GGML backends
AMX
Hardware
XEON 8452Y + NV A40
Models
No response
Problem description & steps to reproduce
Does llama.cpp support the Intel AMX instruction set, and if so, what is required to enable it on this machine? (Same question as the title.)
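Not part of the original report, but a first step before looking at llama.cpp's AMX backend is confirming that the CPU and kernel actually expose AMX. The Xeon Platinum 8452Y is a 4th Gen Xeon Scalable (Sapphire Rapids) part, which should advertise the `amx_tile`, `amx_int8`, and `amx_bf16` flags in `/proc/cpuinfo` on Linux. Below is a minimal, hypothetical check script (not from the original issue) that reads those flags:

```python
#!/usr/bin/env python3
# Hypothetical helper (not part of the original report): check whether the
# Linux kernel reports the AMX feature flags for this CPU. On a Sapphire
# Rapids part such as the Xeon Platinum 8452Y, /proc/cpuinfo is expected to
# list amx_tile, amx_int8 and amx_bf16 once kernel and microcode expose AMX.
AMX_FLAGS = {"amx_tile", "amx_int8", "amx_bf16"}

with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            cpu_flags = set(line.split(":", 1)[1].split())
            print("present:", sorted(AMX_FLAGS & cpu_flags) or "none")
            print("missing:", sorted(AMX_FLAGS - cpu_flags) or "none")
            break
```

Run it with `python3 check_amx.py` (any filename works). If the flags are missing, AMX is not visible to user space, and any AMX code path in the GGML backend selected above cannot be used regardless of how llama.cpp was built.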
First Bad Commit
No response
Relevant log output
None — this is a question about AMX support rather than a crash report (see the problem description above).