QAnything
torch.cuda.OutOfMemoryError: CUDA out of memory — setting multiple GPUs with CUDA_VISIBLE_DEVICES=0,1,2,3 has no effect
Is there an existing issue / discussion for this?
- [X] I have searched the existing issues / discussions
Is there an existing answer for this in the FAQ?
- [X] I have searched the FAQ
Current Behavior
torch.cuda.OutOfMemoryError: CUDA out of memory.
Expected Behavior
Setting multiple GPU cards via CUDA_VISIBLE_DEVICES=0,1,2,3,... should take effect, so that python3 -m qanything_kernel.qanything_server.sanic_api --host 0.0.0.0 --port 8777 --model_size 7B runs normally.
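Note: CUDA_VISIBLE_DEVICES only controls which devices the process can see; it does not by itself split one model across them. The traceback below shows the 7B model being served by vLLM, and in vLLM a model is only sharded across GPUs when tensor parallelism is enabled. A minimal sketch with plain vLLM, assuming Qwen-7B-Chat weights (tensor_parallel_size is vLLM's own argument; whether the QAnything launcher exposes an equivalent option is not confirmed here):

```python
# Sketch: requesting multi-GPU sharding in plain vLLM (not QAnything's launcher).
# CUDA_VISIBLE_DEVICES must be set before CUDA is initialized in this process.
import os
os.environ.setdefault("CUDA_VISIBLE_DEVICES", "0,1,2,3")

from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen-7B-Chat",     # example model; assumed here, not taken from QAnything's config
    trust_remote_code=True,
    tensor_parallel_size=4,        # shard weights and KV cache across the 4 visible GPUs
    gpu_memory_utilization=0.85,   # leave headroom for non-PyTorch allocations
)

out = llm.generate(["你好"], SamplingParams(max_tokens=16))
print(out[0].outputs[0].text)
```

Without sharding, the whole model plus KV cache lands on GPU 0, which matches the log below (22.90 GiB in use on a 24 GB card).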
Environment
- OS: Ubuntu 22.04.4 LTS
- NVIDIA Driver: 550.54.14
- CUDA: 12.4
- docker: not used (pure-Python installation)
- docker-compose: not used (pure-Python installation)
- NVIDIA GPU: NVIDIA GeForce RTX 4090 * 8
- NVIDIA GPU Memory: 24GB * 8
QAnything logs
ERROR:asyncio:Exception in callback functools.partial(<function _raise_exception_on_finish at 0x700d87dec5e0>, request_tracker=<vllm.engine.async_llm_engine.RequestTracker object at 0x701010ca15a0>)
handle: <Handle functools.partial(<function _raise_exception_on_finish at 0x700d87dec5e0>, request_tracker=<vllm.engine.async_llm_engine.RequestTracker object at 0x701010ca15a0>)>
Traceback (most recent call last):
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 28, in _raise_exception_on_finish
    task.result()
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 363, in run_engine_loop
    has_requests_in_progress = await self.engine_step()
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 342, in engine_step
    request_outputs = await self.engine.step_async()
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 190, in step_async
    all_outputs = await self._run_workers_async(
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 231, in _run_workers_async
    all_outputs = await asyncio.gather(*coros)
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/vllm/worker/worker.py", line 189, in execute_model
    output = self.model_runner.execute_model(seq_group_metadata_list,
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 453, in execute_model
    hidden_states = model_executable(
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/vllm/model_executor/models/qwen.py", line 242, in forward
    hidden_states = self.transformer(input_ids, positions, kv_caches,
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/vllm/model_executor/models/qwen.py", line 210, in forward
    hidden_states, residual = layer(
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/vllm/model_executor/models/qwen.py", line 174, in forward
    hidden_states = self.mlp(hidden_states)
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/vllm/model_executor/models/qwen.py", line 59, in forward
    gate_up, _ = self.gate_up_proj(x)
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/vllm/model_executor/layers/linear.py", line 211, in forward
    output_parallel = self.linear_method.apply_weights(
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/vllm/model_executor/layers/linear.py", line 72, in apply_weights
    return F.linear(x, weight, bias)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 132.00 MiB. GPU 0 has a total capacty of 23.64 GiB of which 67.06 MiB is free. Process 250345 has 662.00 MiB memory in use. Including non-PyTorch memory, this process has 22.90 GiB memory in use. Of the allocated memory 18.70 GiB is allocated by PyTorch, and 353.02 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "uvloop/cbhandles.pyx", line 63, in uvloop.loop.Handle._run
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 37, in _raise_exception_on_finish
    raise exc
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 32, in _raise_exception_on_finish
    raise AsyncEngineDeadError(
vllm.engine.async_llm_engine.AsyncEngineDeadError: Task finished unexpectedly. This should never happen! Please open an issue on Github. See stack trace above for the actual cause.
INFO 04-28 09:24:19 async_llm_engine.py:134] Aborted request 11b86abd443d440283af90e67d08a237.
[2024-04-28 09:24:19 +0800] [394915] [ERROR] Exception occurred while handling uri: 'http://110.53.183.135:8777/api/local_doc_qa/local_doc_chat'
Traceback (most recent call last):
  [... same torch.cuda.OutOfMemoryError traceback as above (vllm async_llm_engine → worker → model_runner → qwen.py forward → linear.py apply_weights → F.linear) ...]
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 132.00 MiB. GPU 0 has a total capacty of 23.64 GiB of which 67.06 MiB is free. Process 250345 has 662.00 MiB memory in use. Including non-PyTorch memory, this process has 22.90 GiB memory in use. Of the allocated memory 18.70 GiB is allocated by PyTorch, and 353.02 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "handle_request", line 132, in handle_request
    "_asgi_lifespan",
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/sanic/response/types.py", line 547, in stream
    await self.streaming_fn(self)
  File "/home/wangg/dev/QAnything/qanything_kernel/qanything_server/handler.py", line 398, in generate_answer
    async for resp, next_history in local_doc_qa.get_knowledge_based_answer(custom_prompt=custom_prompt,
  File "/home/wangg/dev/QAnything/qanything_kernel/core/local_doc_qa.py", line 275, in get_knowledge_based_answer
    async for answer_result in self.llm.generatorAnswer(prompt=prompt,
  File "/home/wangg/dev/QAnything/qanything_kernel/connector/llm/llm_for_fastchat.py", line 114, in generatorAnswer
    async for response_text in response:
  File "/home/wangg/dev/QAnything/qanything_kernel/connector/llm/llm_for_fastchat.py", line 88, in _call
    async for request_output in results_generator:
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 449, in generate
    raise e
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 443, in generate
    async for request_output in stream:
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 70, in __anext__
    raise result
  File "uvloop/cbhandles.pyx", line 63, in uvloop.loop.Handle._run
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 37, in _raise_exception_on_finish
    raise exc
  File "/home/ps/miniconda3/envs/test310/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 32, in _raise_exception_on_finish
    raise AsyncEngineDeadError(
vllm.engine.async_llm_engine.AsyncEngineDeadError: Task finished unexpectedly. This should never happen! Please open an issue on Github. See stack trace above for the actual cause.
(The same torch.cuda.OutOfMemoryError and AsyncEngineDeadError tracebacks are then logged a second time by sanic.error for the same request.)
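Regarding the hint at the end of the OOM message: PYTORCH_CUDA_ALLOC_CONF, like CUDA_VISIBLE_DEVICES, is read when CUDA is initialized, so it has to be set before torch is imported in the server process. A minimal sketch for checking what the process actually sees, using only standard torch calls (nothing QAnything-specific):

```python
# Sketch: verify device visibility and the allocator setting before loading any model.
# Both variables must be in the environment before torch initializes CUDA.
import os
os.environ.setdefault("CUDA_VISIBLE_DEVICES", "0,1,2,3")
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")  # fragmentation hint from the log

import torch

print("visible GPUs:", torch.cuda.device_count())  # should report 4, not 1
for i in range(torch.cuda.device_count()):
    free, total = torch.cuda.mem_get_info(i)
    print(f"GPU {i}: {free / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")
```

That said, the log shows GPU 0 essentially full (22.90 GiB used by this process plus 662 MiB by process 250345), so reducing fragmentation alone is unlikely to be enough without spreading or shrinking the load.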
Steps To Reproduce
Upload a document, return to the chat, and start a conversation; the out-of-memory error is raised as soon as the chat request is handled.
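The traceback places the failure inside the Qwen forward pass while a chat request is being streamed, which matches this reproduction: memory pressure peaks once generation starts. For reference, a minimal sketch of the vLLM async-engine arguments that bound peak memory, assuming plain vLLM (these are vLLM's own arguments; whether and how QAnything forwards them is not confirmed here):

```python
# Sketch: the knobs on vLLM's async engine (the component in the traceback)
# that bound peak GPU memory. Model name and values are illustrative only.
from vllm.engine.arg_utils import AsyncEngineArgs
from vllm.engine.async_llm_engine import AsyncLLMEngine

engine_args = AsyncEngineArgs(
    model="Qwen/Qwen-7B-Chat",      # example model name
    trust_remote_code=True,
    gpu_memory_utilization=0.8,     # reserve less of the 24 GB card for the KV cache
    max_model_len=4096,             # cap context length instead of the model maximum
    max_num_seqs=8,                 # fewer concurrent sequences per step
)
engine = AsyncLLMEngine.from_engine_args(engine_args)
```

Tightening these only helps a borderline case; if the unsharded weights plus the other models on GPU 0 simply do not fit, multi-GPU sharding (see the note under Expected Behavior) is the actual fix.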
Anything else?
No response
I'm running into the same problem. Did you manage to solve it?
I'm running into the same problem. Did you manage to solve it?
The native Python install doesn't handle this; the Docker setup supports multiple GPUs, so I switched to Docker.
> I'm running into the same problem. Did you manage to solve it?
> The native Python install doesn't handle this; the Docker setup supports multiple GPUs, so I switched to Docker.

Does the Docker setup run on two 8 GB cards? And does the Docker version support local knowledge-base Q&A?