Calling the API errors out with KeyError: 'HOME' (raise KeyError(key) from None)
System Info
Python 3.10.11
Who can help?
The model loads normally; the error occurs on invocation. The web demo logs:
2024-05-22 22:54:37 - Translation file for zh-CN not found. Using default translation en-US.
2024-05-22 22:54:44 - Translation file for zh-CN not found. Using default translation en-US.
Calling through the API raises: KeyError: 'HOME'
2024-05-23 11:40:55.606 | DEBUG | __main__:generate_stream_cogvlm:301 - ==== request ====
Do you think this is a spring or winter photo?
INFO: 127.0.0.1:58760 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
ERROR: Exception in ASGI application
Traceback (most recent call last):
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 408, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 84, in __call__
    return await self.app(scope, receive, send)
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\fastapi\applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\starlette\applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\starlette\middleware\errors.py", line 186, in __call__
    raise exc
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\starlette\middleware\errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\starlette\middleware\cors.py", line 85, in __call__
    await self.app(scope, receive, send)
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\starlette\middleware\exceptions.py", line 65, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\starlette\_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\starlette\routing.py", line 756, in __call__
    await self.middleware_stack(scope, receive, send)
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\starlette\routing.py", line 776, in app
    await route.handle(scope, receive, send)
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\starlette\routing.py", line 297, in handle
    await self.app(scope, receive, send)
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\starlette\routing.py", line 77, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\starlette\_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\starlette\routing.py", line 72, in app
    response = await func(request)
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\fastapi\routing.py", line 278, in app
    raw_response = await run_endpoint_function(
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\fastapi\routing.py", line 191, in run_endpoint_function
    return await dependant.call(**values)
  File "D:\app\CogVLM2\basic_demo\openai_api_demo.py", line 162, in create_chat_completion
    response = generate_cogvlm(model, tokenizer, gen_params)
  File "D:\app\CogVLM2\basic_demo\openai_api_demo.py", line 228, in generate_cogvlm
    for response in generate_stream_cogvlm(model, tokenizer, params):
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\torch\utils\_contextlib.py", line 35, in generator_context
    response = gen.send(None)
  File "D:\app\CogVLM2\basic_demo\openai_api_demo.py", line 334, in generate_stream_cogvlm
    model.generate(**inputs, **gen_kwargs)
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\transformers\generation\utils.py", line 1736, in generate
    result = self._sample(
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\transformers\generation\utils.py", line 2375, in _sample
    outputs = self(
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Administrator\.cache\huggingface\modules\transformers_modules\cogvlm2-llama3-chinese-chat-19B\modeling_cogvlm.py", line 620, in forward
    outputs = self.model(
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Administrator\.cache\huggingface\modules\transformers_modules\cogvlm2-llama3-chinese-chat-19B\modeling_cogvlm.py", line 402, in forward
    return self.llm_forward(
  File "C:\Users\Administrator\.cache\huggingface\modules\transformers_modules\cogvlm2-llama3-chinese-chat-19B\modeling_cogvlm.py", line 486, in llm_forward
    layer_outputs = decoder_layer(
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Administrator\.cache\huggingface\modules\transformers_modules\cogvlm2-llama3-chinese-chat-19B\modeling_cogvlm.py", line 261, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Administrator\.cache\huggingface\modules\transformers_modules\cogvlm2-llama3-chinese-chat-19B\modeling_cogvlm.py", line 204, in forward
    query_states, key_states = self.rotary_emb(query_states, key_states, position_ids=position_ids, max_seqlen=position_ids.max() + 1)
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Administrator\.cache\huggingface\modules\transformers_modules\cogvlm2-llama3-chinese-chat-19B\util.py", line 469, in forward
    q = apply_rotary_emb_func(
  File "C:\Users\Administrator\.cache\huggingface\modules\transformers_modules\cogvlm2-llama3-chinese-chat-19B\util.py", line 329, in apply_rotary_emb
    return ApplyRotaryEmb.apply(
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\torch\autograd\function.py", line 598, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "C:\Users\Administrator\.cache\huggingface\modules\transformers_modules\cogvlm2-llama3-chinese-chat-19B\util.py", line 255, in forward
    out = apply_rotary(
  File "C:\Users\Administrator\.cache\huggingface\modules\transformers_modules\cogvlm2-llama3-chinese-chat-19B\util.py", line 212, in apply_rotary
    rotary_kernel[grid](
  File "C:\ProgramData\anaconda3\envs\cogvlm3\lib\site-packages\triton\runtime\jit.py", line 106, in launcher
    return self.run(*args, grid=grid, **kwargs)
  File "
Information
- [ ] The official example scripts
- [ ] My own modified scripts and tasks
Reproduction
Start the service and call the API; the error above is raised. A request sketch follows.
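This is a hypothetical reproduction: the port, model name, and image-content field follow the OpenAI-style schema the demo emulates and may differ from your launch settings.

import base64
import requests

# Encode any local test image as a data URL, as the OpenAI image_url schema allows.
with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

payload = {
    "model": "cogvlm2-llama3-chinese-chat-19B",  # assumed model name
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Do you think this is a spring or winter photo?"},
            {"type": "image_url", "image_url": {"url": "data:image/jpeg;base64," + image_b64}},
        ],
    }],
}
resp = requests.post("http://127.0.0.1:8000/v1/chat/completions", json=payload)  # port is an assumption
print(resp.status_code, resp.text)  # 500 Internal Server Error before the fix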
Expected behavior
Does anyone know how to fix this?
The error message I saw was: fatal error C1083: Cannot open include file: 'Python.h': No such file or directory. Apparently Triton needs this header when it compiles some CUDA components locally, so I added the Python311\include directory of my current Python environment to the INCLUDE environment variable. Concretely, in my environment I added the CUDA, Visual Studio, Windows Kits, and Python include paths like this:
INCLUDE=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1\include;C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.29.30133\include;C:\Program Files (x86)\Windows Kits\10\Include\10.0.19041.0\um;C:\Program Files (x86)\Windows Kits\10\Include\10.0.19041.0\ucrt;C:\Program Files (x86)\Windows Kits\10\Include\10.0.19041.0\shared;D:\AITest\CogVLM2\Python311\include;
For your reference.
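If editing the system environment is inconvenient, the same idea can be applied from Python before the first Triton compile, since the host compiler runs as a subprocess that inherits os.environ. A sketch reusing the example paths above (every path must be adjusted to your own installations):

import os

extra_include = [
    r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1\include",
    r"C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.29.30133\include",
    r"C:\Program Files (x86)\Windows Kits\10\Include\10.0.19041.0\um",
    r"C:\Program Files (x86)\Windows Kits\10\Include\10.0.19041.0\ucrt",
    r"C:\Program Files (x86)\Windows Kits\10\Include\10.0.19041.0\shared",
    r"D:\AITest\CogVLM2\Python311\include",  # supplies Python.h
]
# Prepend to whatever INCLUDE already holds; os.pathsep is ';' on Windows.
existing = [p for p in os.environ.get("INCLUDE", "").split(os.pathsep) if p]
os.environ["INCLUDE"] = os.pathsep.join(extra_include + existing)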
Your environment setup looks fairly complicated, and it seems uncertain where the Triton-compiled package cache ends up. On my machine it is easy to find, right under the current Administrator user's home directory: C:\Users\Administrator\.triton\cache. My Win11 runs under the Administrator account, and I don't use anaconda or the like for virtual environment management; I switch manually. The base environment (VS, CUDA, cuDNN, Python, etc.) is switched by hand through my own configuration scripts.
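If stale compiled kernels are a suspect after a toolchain change, Triton also honors TRITON_CACHE_DIR when picking the cache location, so it can be pinned somewhere explicit and wiped. A hedged sketch (the directory is an example):

import os
import shutil

cache_dir = r"D:\triton_cache"  # example location; pick your own
shutil.rmtree(cache_dir, ignore_errors=True)  # drop kernels built with the old toolchain
os.makedirs(cache_dir, exist_ok=True)
os.environ["TRITON_CACHE_DIR"] = cache_dir  # must be set before Triton compiles anything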