ValueError: The model's max seq len (4096) is larger than the maximum number of tokens that can be stored in KV cache (3792). Try increasing `gpu_memory_utilization` or decreasing `max_model_len` when initializing the engine.

Open · Hzzhang-nlp opened this issue 10 months ago • 4 comments

Is there an existing issue / discussion for this?

  • [X] I have searched the existing issues / discussions

Is there an existing answer for this in FAQ?

  • [X] I have searched FAQ

Current Behavior

Installed via the Python environment and ran `bash scripts/run_for_7B_in_Linux_or_WSL.sh`; it fails with: ValueError: The model's max seq len (4096) is larger than the maximum number of tokens that can be stored in KV cache (3792). Try increasing `gpu_memory_utilization` or decreasing `max_model_len` when initializing the engine.
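This error is raised during vLLM engine initialization: after the model weights are loaded, the share of VRAM allowed by `gpu_memory_utilization` leaves room for only 3792 KV-cache tokens, fewer than the 4096-token context the model declares. A minimal sketch of the two workarounds the message names, assuming the standard vLLM Python API (the model id is illustrative, not from this issue):

```python
from vllm import LLM

# Either give the engine a larger share of VRAM so the KV cache can hold
# the full 4096-token context, or cap the context below the cache limit.
llm = LLM(
    model="Qwen/Qwen-7B-Chat",    # illustrative model id
    gpu_memory_utilization=0.95,  # raise the VRAM share given to the engine
    max_model_len=3584,           # or stay under the 3792-token cache limit
)
```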

Expected Behavior

No response

Environment

- OS: Ubuntu 22.04
- NVIDIA Driver:
- CUDA: 12.2
- docker:
- docker-compose:
- NVIDIA GPU: RTX 4090
- NVIDIA GPU Memory: 24GB

QAnything logs

No response

Steps To Reproduce

No response

Anything else?

No response

Hzzhang-nlp · Apr 24 '24 07:04

In `bash ./run.sh -c local -i 0 -b vllm -m Qwen-7B-QAnything -t qwen-7b-qanything -p 1 -r 0.85`, try changing the trailing 0.85 to 0.95.
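The adjusted launch command would then be (assuming `-r` maps to `gpu_memory_utilization`, which the suggested change implies):

```bash
# Same launch command with the GPU memory share raised from 0.85 to 0.95
bash ./run.sh -c local -i 0 -b vllm -m Qwen-7B-QAnything -t qwen-7b-qanything -p 1 -r 0.95
```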

huweibin1983 · Apr 29 '24 08:04

Has this been solved? I've run into a similar problem.

Ericocococo · May 07 '24 01:05

On a 4090 (24 GB) with the pure-Python deployment, code branch qanything-python: search for the gpu_memory_utilization field and edit qanything_kernel/utils/general_utils.py. At line 205, set gpu_memory_utilization = 0.89; at line 208, the 7B-model case, also change it to 0.89. The gpu_memory_in_GB field printed here should read 24 GB.

Hope this helps whoever runs into it next.
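For illustration, the edited spot might look roughly like this (a hypothetical sketch: only the file path and the 0.89 values come from the comment above; the surrounding structure is assumed):

```python
# qanything_kernel/utils/general_utils.py -- hypothetical sketch; only the
# 0.89 values are from the comment, the branching structure is assumed.
gpu_memory_utilization = 0.89      # around line 205
if model_size == '7B':             # around line 208: the 7B-model case
    gpu_memory_utilization = 0.89
```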

hgsw · May 09 '24 11:05

Change the 8 in `round(8 / gpu_memory_in_GB, 2)` to 10.
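One reading of this (a hypothetical reconstruction; that the expression computes the engine's VRAM share as a fixed budget over total VRAM is an assumption, not taken from the repo):

```python
# Hypothetical reconstruction: if gpu_memory_utilization is a fixed VRAM
# budget divided by total VRAM, raising the budget from 8 GB to 10 GB
# raises the fraction, and with it the room left for the KV cache.
gpu_memory_in_GB = 24  # e.g. an RTX 4090

gpu_memory_utilization = round(10 / gpu_memory_in_GB, 2)  # 0.42; was round(8/24, 2) == 0.33
```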

mrhan1993 · May 28 '24 08:05