
Eval bug: GGML_SCHED_MAX_BACKENDS assert error

Open wuyaoxuehun opened this issue 3 weeks ago • 2 comments

Name and Version

latest version

Operating systems

Linux

GGML backends

CUDA

Hardware

A800-40G

Models

R1 Q4km

Problem description & steps to reproduce

GGML_SCHED_MAX_BACKENDS assert error, because I offload across 16 GPUs plus the CPU backend, so there are 17 backends and 17 > 16. Can I increase GGML_SCHED_MAX_BACKENDS to 32? I tried, but it then raised an assert(status) error.
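
For reference, a minimal sketch of where the limit appears to come from, assuming the current layout of ggml-backend (file path, names, and exact wording may differ between versions): the scheduler defines a compile-time cap on the number of backends and asserts against it when it is created, and the CPU backend counts toward that total alongside the offloaded GPU backends.

```c
// Sketch based on ggml/src/ggml-backend.cpp (approximate; not verified
// against the exact version used here).

// Compile-time cap on how many backends the scheduler can manage:
#ifndef GGML_SCHED_MAX_BACKENDS
#define GGML_SCHED_MAX_BACKENDS 16
#endif

// Inside ggml_backend_sched_new(), the total backend count is checked
// against that cap, so 16 GPU backends + 1 CPU backend = 17 trips it:
GGML_ASSERT(n_backends <= GGML_SCHED_MAX_BACKENDS);
```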

First Bad Commit

No response

Relevant log output

assert

wuyaoxuehun Jan 26 '25 19:01