
Eval bug: GGML_SCHED_MAX_BACKENDS assert error

Open wuyaoxuehun opened this issue 11 months ago • 2 comments

Name and Version

latest version

Operating systems

Linux

GGML backends

CUDA

Hardware

A800-40G

Models

R1 Q4km

Problem description & steps to reproduce

GGML_SCHED_MAX_BACKENDS assert error, because I use 16 offloaded GPUs plus 1 CPU backend, so 17 > 16. Can I increase GGML_SCHED_MAX_BACKENDS to 32? I tried, but it then raised an assert(status) error.

First Bad Commit

No response

Relevant log output

assert

wuyaoxuehun avatar Jan 26 '25 19:01 wuyaoxuehun

Why is the maximum value of GGML_SCHED_MAX_BACKENDS 16?

lld1995 avatar Feb 19 '25 08:02 lld1995

Can we recompile llama.cpp with a larger value for the MAX constant? @rgerganov
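One possible recompile path, assuming the constant is only a guarded default that can be overridden from the compiler command line (check the ggml sources in your checkout before relying on this): pass the define at CMake configure time. Note the original report suggests that simply raising the value can still trip a later assert, so this is a sketch, not a confirmed fix.

```shell
# Hypothetical override: only effective if GGML_SCHED_MAX_BACKENDS is
# defined behind an #ifndef guard in the ggml sources.
cmake -B build -DGGML_CUDA=ON \
    -DCMAKE_C_FLAGS="-DGGML_SCHED_MAX_BACKENDS=32" \
    -DCMAKE_CXX_FLAGS="-DGGML_SCHED_MAX_BACKENDS=32"
cmake --build build --config Release
```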

VergeDX avatar Feb 20 '25 07:02 VergeDX

This issue was closed because it has been inactive for 14 days since being marked as stale.

github-actions[bot] avatar Apr 07 '25 01:04 github-actions[bot]