
With TASKS=llm,rag, a multiprocessing error is raised: RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method


The following items must be checked before submission

  • [X] Make sure you are using the latest code from the repository (git pull); some issues have already been addressed and fixed.
  • [X] I have read the FAQ section of the project documentation and searched the existing issues; I did not find a similar issue or solution.

Type of problem

Model inference and deployment

Operating system

Linux

Detailed description of the problem

Ubuntu, deployed with docker-compose, image api-llm:vllm

When the llm and embedding models are deployed together, i.e.

TASKS=llm,rag

the following error is raised: RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method

Deploying llm alone works fine.
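The error message itself hints at the workaround: CUDA must not be initialized in the parent process before worker processes are forked. A likely trigger with TASKS=llm,rag is that the embedding (rag) model touches CUDA in the main process first, so the subsequent fork of vLLM workers hits the re-initialization error. Below is a minimal sketch of the usual fix, assuming you can modify the server entry point or the container environment; `VLLM_WORKER_MULTIPROC_METHOD` is the variable recent vLLM releases read to choose the worker start method, so verify it against the vLLM version inside the image.

```python
import multiprocessing as mp
import os

if __name__ == "__main__":
    # Ask vLLM to 'spawn' its worker processes instead of forking them.
    # (Assumption: this env var is honored by the vLLM version in the image;
    # it can also be set under `environment:` in docker-compose.)
    os.environ.setdefault("VLLM_WORKER_MULTIPROC_METHOD", "spawn")

    # Force the 'spawn' start method globally, before torch/CUDA is touched,
    # so child processes start clean instead of inheriting CUDA state via fork.
    mp.set_start_method("spawn", force=True)

    # ... launch the api-for-open-llm server from here ...
```

If editing code is not an option, the same effect can usually be had without touching the entry point by adding `VLLM_WORKER_MULTIPROC_METHOD=spawn` to the service's `environment:` section in docker-compose (again, assuming the vLLM version supports it).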

Runtime logs or screenshots

(Screenshot attached: 微信截图_20240821105553)

syusama · Aug 21 '24 02:08