
[vllm] - 一张A10,vllm api方式启动,报显存不足

thend-wk opened this issue on Oct 17, 2024 · 3 comments

Start Date

10/17/2024

Implementation PR

No response

Reference Issues

No response

Summary

Other

Basic Example

Other

Drawbacks

Other

Unresolved questions

A single A10 GPU with 24 GB of memory. The launch command is:

vllm serve /docker_storage/models/MiniCPM-V-2_6 --dtype auto --max-model-len 2048 --gpu_memory_utilization 1 --trust-remote-code

The error is:

torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 4.00 GiB. GPU 0 has a total capacity of 22.18 GiB of which 3.47 GiB is free. Process 8007 has 18.71 GiB memory in use. Of the allocated memory 18.29 GiB is allocated by PyTorch, and 142.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
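For reference, the allocator setting named in the traceback can be exported when launching the server. A minimal sketch using the same command; this only mitigates fragmentation and by itself may not recover the roughly 4 GiB shortfall on a 24 GB card:

# Sketch: set the allocator option suggested by the error message, command otherwise unchanged
PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True vllm serve /docker_storage/models/MiniCPM-V-2_6 --dtype auto --max-model-len 2048 --gpu_memory_utilization 1 --trust-remote-code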

thend-wk · Oct 17, 2024

Hi, please try setting gpu_memory_utilization to a value in the 0.7-0.9 range and see if that helps.
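For example, reusing the command from the report above with the utilization lowered into that range (a sketch; 0.85 is just one value inside 0.7-0.9, not a tuned recommendation):

# Sketch: same command as the original report, with gpu_memory_utilization lowered into the suggested range
vllm serve /docker_storage/models/MiniCPM-V-2_6 --dtype auto --max-model-len 2048 --gpu_memory_utilization 0.85 --trust-remote-code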

LDLINGLINGLING · Oct 18, 2024

> Hi, please try setting gpu_memory_utilization to a value in the 0.7-0.9 range and see if that helps.

I'm hitting a similar memory problem: a single 3090, and it won't run.

evanlin88 · Nov 18, 2024

> Hi, please try setting gpu_memory_utilization to a value in the 0.7-0.9 range and see if that helps.

> I'm hitting a similar memory problem: a single 3090, and it won't run.

See this: https://github.com/OpenBMB/MiniCPM-V/issues/504

evanlin88 · Nov 18, 2024

Are you using their vLLM repository?

DAAworld · Jan 17, 2025