llama.cpp
Eval bug: GGML_SCHED_MAX_BACKENDS assert error
Name and Version
latest version
Operating systems
Linux
GGML backends
CUDA
Hardware
A800-40G
Models
R1 Q4km
Problem description & steps to reproduce
GGML_SCHED_MAX_BACKENDS assert error, because I offload to 16 GPUs plus 1 CPU backend, so 17 > 16. Can GGML_SCHED_MAX_BACKENDS be increased to 32? I tried that, but it then raised an assert(status) error instead.
First Bad Commit
No response
Relevant log output
assert