
[Bug] "POST /api/v1/chat/completions HTTP/1.1" 500 Internal Server Error

Open TongaoY opened this issue 7 months ago • 11 comments

Search before asking

  • [X] I had searched in the issues and found no similar issues.

Operating system information

Linux

Python version information

3.10

DB-GPT version

main

Related scenes

  • [ ] Chat Data
  • [ ] Chat Excel
  • [ ] Chat DB
  • [X] Chat Knowledge
  • [ ] Model Management
  • [ ] Dashboard
  • [ ] Plugins

Installation Information

Device information

GPU

Models information

LLM: chatglm2-6b

What happened

Deployed DB-GPT on a Linux server with the local chatglm2-6b model. When a question is asked, the backend reports `"POST /api/v1/chat/completions HTTP/1.1" 500 Internal Server Error` and no answer is returned.

What you expected to happen

Deployed DB-GPT on a Linux server with the local chatglm2-6b model. When a question is asked, the backend reports `"POST /api/v1/chat/completions HTTP/1.1" 500 Internal Server Error` and no answer is returned.

How to reproduce

Deployed DB-GPT on a Linux server with the local chatglm2-6b model. When a question is asked, the backend reports `"POST /api/v1/chat/completions HTTP/1.1" 500 Internal Server Error` and no answer is returned.

Additional context

No response

Are you willing to submit PR?

  • [X] Yes I am willing to submit a PR!

TongaoY avatar Nov 21 '23 14:11 TongaoY

@TongaoY Could you please post the backend logs and system information?

fangyinc avatar Nov 22 '23 01:11 fangyinc

ChatGPT down today.

yihong0618 avatar Nov 22 '23 01:11 yihong0618

now seems back, can you try it again?

yihong0618 avatar Nov 22 '23 06:11 yihong0618

@fangyinc @yihong0618 I'm on a Linux server (Ubuntu 18.04.6) with a 4090 GPU, running the latest code from source. The problem persists: the UI opens, but questions get no answer, and the backend shows `INFO: 211.143.85.94:0 - "POST /api/v1/chat/completions HTTP/1.1" 500 Internal Server Error`. The backend log:

```
WARNI [pilot.model.loader] chatglm or chatglm2 not support quantization now, see: https://github.com/huggingface/transformers/issues/25228
Loading checkpoint shards: 100%|██████████| 7/7 [00:08<00:00,  1.26s/it]
INFO: 127.0.0.1:60024 - "POST /api/controller/models HTTP/1.1" 200 OK
INFO: 127.0.0.1:60044 - "POST /api/controller/models HTTP/1.1" 200 OK
begin run _add_app_startup_event
INFO: 211.143.85.94:0 - "GET / HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /css/54555572d8a5a0b4.css HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/framework-b88f940c668b92b5.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/29107295-90b90cb30c825230.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/44-941ba89e47567ba3.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/webpack-e014d2ade73ca4a8.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/main-bac3cf9c596e50fb.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/479-68b22ee2b7a47fb3.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/9-bb2c54d5c06ba4bf.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/411-d9eba2657c72f766.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/539-dcd22f1f6b99ebee.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/pages/_app-4f11192f855510e0.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/719-5a18c3c696beda6f.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/pages/index-195d2b50dab810dd.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /jB-omw05-Lu2nmO_jWRqA/_buildManifest.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /jB-omw05-Lu2nmO_jWRqA/_ssgManifest.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /LOGO_1.png HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /LOGO.png HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "POST /api/v1/chat/dialogue/scenes HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /api/v1/chat/dialogue/list HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /favicon.ico HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/45-9ff739c09925ea35.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/61-d2f6cba798a49339.js HTTP/1.1" 200 OK
INFO [pilot.openapi.api_v1.api_v1] /controller/model/types
INFO [pilot.model.cluster.controller.controller] Get all instances with None, healthy_only: True
INFO: 211.143.85.94:0 - "GET /api/v1/model/types HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /WHITE_LOGO.png HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/367-2a6e805cba0c79d3.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /models/chatglm.png HTTP/1.1" 200 OK
/prompt/list params: chat_scene=None sub_chat_scene=None prompt_type='common' content=None user_name=None prompt_name=None
INFO: 211.143.85.94:0 - "POST /prompt/list HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/815-fa0a8da2d0a72116.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/pages/prompt-f7fa6931310ae8ba.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/442-197e6cbc1e54109a.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "POST /api/v1/chat/dialogue/new HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/928-74244889bd7f2699.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/813-cce9482e33f2430c.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/pages/chat-4b4742a8b97ead61.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/643-d8f53f40dd3c5b40.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/pages/database-2066e1a37e227df6.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /css/4047a8310a399ceb.css HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/75fc9c18-a784766a129ec5fb.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /css/2df6b89b6fe4db33.css HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/355a6ca7.6a7668307202b4ab.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/960de000.b0c6e84211bcbef5.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/365-2cad3676ccbb1b1a.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/830.959c3f306e690976.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/856.7d208912c36b6821.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/304.406e3b6b89d8e49a.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/604.8b28a3b59fbde616.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/63-d9f1013be8e4599a.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/pages/knowledge-40fc3593b22ec10a.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/pages/models-fab5e316babfc6de.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /chunks/pages/agent-25358689f05784e9.js HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "POST /api/v1/chat/dialogue/scenes HTTP/1.1" 200 OK
/prompt/list params: chat_scene=None sub_chat_scene=None prompt_type='common' content=None user_name=None prompt_name=None
INFO: 211.143.85.94:0 - "POST /prompt/list HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "POST /api/v1/chat/dialogue/new HTTP/1.1" 200 OK
chat_completions:chat_normal,,chatglm2-6b
INFO [pilot.openapi.api_v1.api_v1] get_chat_instance:conv_uid='e2bdcfc8-8911-11ee-b242-ac1f6b658d4d' user_input='你好' user_name='' chat_mode='chat_normal' select_param='' model_name='chatglm2-6b' incremental=False
INFO: 211.143.85.94:0 - "POST /api/v1/chat/completions HTTP/1.1" 500 Internal Server Error
INFO: 211.143.85.94:0 - "GET /api/v1/chat/dialogue/list HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "POST /api/v1/chat/mode/params/list?chat_mode=chat_normal HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /empty.png HTTP/1.1" 200 OK
/prompt/list params: chat_scene=None sub_chat_scene=None prompt_type='common' content=None user_name=None prompt_name=None
INFO: 211.143.85.94:0 - "POST /prompt/list HTTP/1.1" 200 OK
INFO: 211.143.85.94:0 - "GET /api/v1/feedback/select HTTP/1.1" 200 OK
INFO: 123.6.49.10:0 - "GET /chat HTTP/1.1" 307 Temporary Redirect
```

TongaoY avatar Nov 22 '23 08:11 TongaoY

https://github.com/eosphoros-ai/DB-GPT/issues/798 — not sure whether this is the same problem; in my case it was caused by not specifying a concrete database in the `.env` file.
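For reference, a sketch of the kind of `.env` database settings that comment refers to. The variable names below follow DB-GPT's `.env.template` from around that release, but treat them as assumptions and verify against the template shipped with your checkout:

```shell
# Local metadata database for DB-GPT itself (sketch; names assumed
# from the project's .env.template -- verify against your copy).
LOCAL_DB_TYPE=sqlite
LOCAL_DB_PATH=data/default_sqlite.db

# Or, for a MySQL backend:
# LOCAL_DB_TYPE=mysql
# LOCAL_DB_HOST=127.0.0.1
# LOCAL_DB_PORT=3306
# LOCAL_DB_USER=root
# LOCAL_DB_PASSWORD=your_password
```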

paul-yangmy avatar Nov 27 '23 03:11 paul-yangmy

I started a local API service with `dbgpt start apiserver --controller_addr http://0.0.0.0:8000 --api_keys None`.

(screenshot: 微信图片_20231203113139) Calling it from the browser also returns a 500 error.
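To isolate whether the 500 comes from the apiserver itself or from the browser, it can help to hit the endpoint directly with a plain HTTP client. A minimal sketch, assuming an OpenAI-compatible `/api/v1/chat/completions` route; the host, port (`8100`), path, and model name are assumptions to adjust for your deployment:

```python
import json
from urllib import request

# OpenAI-style chat-completions payload; the model name is an example.
payload = {
    "model": "chatglm2-6b",
    "messages": [{"role": "user", "content": "你好"}],
    "stream": False,
}
body = json.dumps(payload).encode("utf-8")

req = request.Request(
    "http://127.0.0.1:8100/api/v1/chat/completions",  # assumed host/port/path
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# With the apiserver running, this prints the raw status and error body
# instead of the browser's generic 500 page:
# with request.urlopen(req) as resp:
#     print(resp.status, resp.read().decode("utf-8"))
```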

cm04918 avatar Dec 03 '23 03:12 cm04918

Hey, did you ever solve this? I'm hitting the same problem — it worked fine two days ago, and today it suddenly started returning 500 Internal Server Error.

Han-2002 avatar Dec 18 '23 05:12 Han-2002

Hey, did you ever solve this? I'm hitting the same problem — it worked fine two days ago, and today it suddenly started returning 500 Internal Server Error.

Please post the relevant context and logs.

fangyinc avatar Dec 18 '23 08:12 fangyinc

Hey, did you ever solve this? I'm hitting the same problem — it worked fine two days ago, and today it suddenly started returning 500 Internal Server Error.

Please post the relevant context and logs.

```
INFO:     127.0.0.1:39090 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/opt/conda/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
    return await self.app(scope, receive, send)
  File "/opt/conda/lib/python3.10/site-packages/fastapi/applications.py", line 1106, in __call__
    await super().__call__(scope, receive, send)
  File "/opt/conda/lib/python3.10/site-packages/starlette/applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/opt/conda/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
    raise exc
  File "/opt/conda/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "/opt/conda/lib/python3.10/site-packages/starlette/middleware/cors.py", line 83, in __call__
    await self.app(scope, receive, send)
  File "/opt/conda/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
    raise exc
  File "/opt/conda/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/opt/conda/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
    raise e
  File "/opt/conda/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
    await self.app(scope, receive, send)
  File "/opt/conda/lib/python3.10/site-packages/starlette/routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "/opt/conda/lib/python3.10/site-packages/starlette/routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "/opt/conda/lib/python3.10/site-packages/starlette/routing.py", line 66, in app
    response = await func(request)
  File "/opt/conda/lib/python3.10/site-packages/fastapi/routing.py", line 274, in app
    raw_response = await run_endpoint_function(
  File "/opt/conda/lib/python3.10/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
    return await dependant.call(**values)
  File "/home/cxt/ChatGLM3/openai_api_demo/openai_api.py", line 150, in create_chat_completion
    response = generate_chatglm3(model, tokenizer, gen_params)
  File "/home/cxt/ChatGLM3/openai_api_demo/utils.py", line 217, in generate_chatglm3
    for response in generate_stream_chatglm3(model, tokenizer, params):
  File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 35, in generator_context
    response = gen.send(None)
  File "/home/cxt/ChatGLM3/openai_api_demo/utils.py", line 138, in generate_stream_chatglm3
    for total_ids in model.stream_generate(**inputs, eos_token_id=eos_token_id, **gen_kwargs):
  File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 35, in generator_context
    response = gen.send(None)
  File "/root/.cache/huggingface/modules/transformers_modules/ChatGLM3-6B/modeling_chatglm.py", line 1156, in stream_generate
    outputs = self(
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/ChatGLM3-6B/modeling_chatglm.py", line 937, in forward
    transformer_outputs = self.transformer(
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/ChatGLM3-6B/modeling_chatglm.py", line 807, in forward
    inputs_embeds = self.embedding(input_ids)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/ChatGLM3-6B/modeling_chatglm.py", line 723, in forward
    words_embeddings = self.word_embeddings(input_ids)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/sparse.py", line 162, in forward
    return F.embedding(
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/functional.py", line 2210, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 39.41 GiB total capacity; 11.68 GiB already allocated; 4.56 MiB free; 11.68 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```

Han-2002 avatar Dec 18 '23 09:12 Han-2002

From the log, you've simply run out of GPU memory.
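The OOM message itself points at one mitigation: setting `max_split_size_mb` through `PYTORCH_CUDA_ALLOC_CONF` to reduce allocator fragmentation. A sketch (128 MiB is an arbitrary example value), to be exported before starting the model process:

```shell
# Reduce CUDA allocator fragmentation, as suggested by the OOM message;
# 128 MiB is an example split size, tune for your workload.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

# Before restarting, also check what else is occupying the GPU:
# nvidia-smi
```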

fangyinc avatar Dec 18 '23 09:12 fangyinc

From the log, you've simply run out of GPU memory.

Yeah, I saw that too. But could this be a problem on the server itself, so that all I can do is wait for the server to recover?

Han-2002 avatar Dec 18 '23 09:12 Han-2002

This issue has been marked as stale, because it has been over 30 days without any activity.

github-actions[bot] avatar Jan 21 '24 21:01 github-actions[bot]

This issue has been closed, because it has been marked as stale and there has been no activity for over 7 days.

github-actions[bot] avatar Jan 29 '24 21:01 github-actions[bot]