private-gpt
[BUG] TypeError: missing a required argument: 'messages'
Pre-check
- [X] I have searched the existing issues and none cover this bug.
Description
When running the Docker instance of PrivateGPT with Ollama, I get an error saying: `TypeError: missing a required argument: 'messages'`
"Search" mode works, but any mode that calls the LLM produces this error. I am using the normal Gradio UI. The full traceback is as follows:
```
private-gpt-ollama-1 | 18:00:31.961 [INFO ] uvicorn.access - 172.18.0.1:62074 - "POST /run/predict HTTP/1.1" 200
private-gpt-ollama-1 | 18:00:31.980 [INFO ] uvicorn.access - 172.18.0.1:55394 - "POST /queue/join HTTP/1.1" 200
private-gpt-ollama-1 | 18:00:31.982 [INFO ] uvicorn.access - 172.18.0.1:55394 - "GET /queue/data?session_hash=gjx9zkk6hbu HTTP/1.1" 200
private-gpt-ollama-1 | Traceback (most recent call last):
private-gpt-ollama-1 |   File "/home/worker/app/.venv/lib/python3.11/site-packages/gradio/queueing.py", line 536, in process_events
private-gpt-ollama-1 |     response = await route_utils.call_process_api(
private-gpt-ollama-1 |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-ollama-1 |   File "/home/worker/app/.venv/lib/python3.11/site-packages/gradio/route_utils.py", line 276, in call_process_api
private-gpt-ollama-1 |     output = await app.get_blocks().process_api(
private-gpt-ollama-1 |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-ollama-1 |   File "/home/worker/app/.venv/lib/python3.11/site-packages/gradio/blocks.py", line 1923, in process_api
private-gpt-ollama-1 |     result = await self.call_function(
private-gpt-ollama-1 |              ^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-ollama-1 |   File "/home/worker/app/.venv/lib/python3.11/site-packages/gradio/blocks.py", line 1520, in call_function
private-gpt-ollama-1 |     prediction = await utils.async_iteration(iterator)
private-gpt-ollama-1 |                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-ollama-1 |   File "/home/worker/app/.venv/lib/python3.11/site-packages/gradio/utils.py", line 663, in async_iteration
private-gpt-ollama-1 |     return await iterator.__anext__()
private-gpt-ollama-1 |            ^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-ollama-1 |   File "/home/worker/app/.venv/lib/python3.11/site-packages/gradio/utils.py", line 768, in asyncgen_wrapper
private-gpt-ollama-1 |     response = await iterator.__anext__()
private-gpt-ollama-1 |                ^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-ollama-1 |   File "/home/worker/app/.venv/lib/python3.11/site-packages/gradio/chat_interface.py", line 652, in _stream_fn
private-gpt-ollama-1 |     first_response = await async_iteration(generator)
private-gpt-ollama-1 |                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-ollama-1 |   File "/home/worker/app/.venv/lib/python3.11/site-packages/gradio/utils.py", line 663, in async_iteration
private-gpt-ollama-1 |     return await iterator.__anext__()
private-gpt-ollama-1 |            ^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-ollama-1 |   File "/home/worker/app/.venv/lib/python3.11/site-packages/gradio/utils.py", line 656, in __anext__
private-gpt-ollama-1 |     return await anyio.to_thread.run_sync(
private-gpt-ollama-1 |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-ollama-1 |   File "/home/worker/app/.venv/lib/python3.11/site-packages/anyio/to_thread.py", line 56, in run_sync
private-gpt-ollama-1 |     return await get_async_backend().run_sync_in_worker_thread(
private-gpt-ollama-1 |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-ollama-1 |   File "/home/worker/app/.venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
private-gpt-ollama-1 |     return await future
private-gpt-ollama-1 |            ^^^^^^^^^^^^
private-gpt-ollama-1 |   File "/home/worker/app/.venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 859, in run
private-gpt-ollama-1 |     result = context.run(func, *args)
private-gpt-ollama-1 |              ^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-ollama-1 |   File "/home/worker/app/.venv/lib/python3.11/site-packages/gradio/utils.py", line 639, in run_sync_iterator_async
private-gpt-ollama-1 |     return next(iterator)
private-gpt-ollama-1 |            ^^^^^^^^^^^^^^
private-gpt-ollama-1 |   File "/home/worker/app/private_gpt/ui/ui.py", line 185, in _chat
private-gpt-ollama-1 |     query_stream = self._chat_service.stream_chat(
private-gpt-ollama-1 |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-ollama-1 |   File "/home/worker/app/private_gpt/server/chat/chat_service.py", line 168, in stream_chat
private-gpt-ollama-1 |     streaming_response = chat_engine.stream_chat(
private-gpt-ollama-1 |                          ^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-ollama-1 |   File "/home/worker/app/.venv/lib/python3.11/site-packages/llama_index/core/instrumentation/dispatcher.py", line 230, in wrapper
private-gpt-ollama-1 |     result = func(*args, **kwargs)
private-gpt-ollama-1 |              ^^^^^^^^^^^^^^^^^^^^^
private-gpt-ollama-1 |   File "/home/worker/app/.venv/lib/python3.11/site-packages/llama_index/core/callbacks/utils.py", line 41, in wrapper
private-gpt-ollama-1 |     return func(self, *args, **kwargs)
private-gpt-ollama-1 |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-ollama-1 |   File "/home/worker/app/.venv/lib/python3.11/site-packages/llama_index/core/chat_engine/context.py", line 210, in stream_chat
private-gpt-ollama-1 |     chat_stream=self._llm.stream_chat(all_messages),
private-gpt-ollama-1 |                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-ollama-1 |   File "/home/worker/app/private_gpt/components/llm/llm_component.py", line 183, in wrapper
private-gpt-ollama-1 |     return func(*args, **kwargs)
private-gpt-ollama-1 |            ^^^^^^^^^^^^^^^^^^^^^
private-gpt-ollama-1 |   File "/home/worker/app/.venv/lib/python3.11/site-packages/llama_index/core/instrumentation/dispatcher.py", line 221, in wrapper
private-gpt-ollama-1 |     bound_args = inspect.signature(func).bind(*args, **kwargs)
private-gpt-ollama-1 |                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-ollama-1 |   File "/usr/local/lib/python3.11/inspect.py", line 3212, in bind
private-gpt-ollama-1 |     return self._bind(args, kwargs)
private-gpt-ollama-1 |            ^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-ollama-1 |   File "/usr/local/lib/python3.11/inspect.py", line 3127, in _bind
private-gpt-ollama-1 |     raise TypeError(msg) from None
private-gpt-ollama-1 | TypeError: missing a required argument: 'messages'
private-gpt-ollama-1 | 18:01:15.212 [INFO ] uvicorn.access - 172.18.0.1:58670 - "POST /run/predict HTTP/1.1" 200
private-gpt-ollama-1 | 18:03:54.097 [INFO ] uvicorn.access - 172.18.0.1:61960 - "POST /queue/join HTTP/1.1" 200
private-gpt-ollama-1 | 18:03:54.099 [INFO ] uvicorn.access - 172.18.0.1:61960 - "GET /queue/data?session_hash=gjx9zkk6hbu HTTP/1.1" 200
private-gpt-ollama-1 | 18:03:54.122 [INFO ] private_gpt.ui.ui - Setting system prompt to:
private-gpt-ollama-1 | 18:03:55.953 [INFO ] uvicorn.access - 172.18.0.1:61960 - "POST /run/predict HTTP/1.1" 200
private-gpt-ollama-1 | 18:03:55.971 [INFO ] uvicorn.access - 172.18.0.1:61960 - "POST /run/predict HTTP/1.1" 200
private-gpt-ollama-1 | 18:03:55.972 [INFO ] uvicorn.access - 172.18.0.1:59612 - "POST /run/predict HTTP/1.1" 200
private-gpt-ollama-1 | 18:03:55.987 [INFO ] uvicorn.access - 172.18.0.1:61960 - "POST /queue/join HTTP/1.1" 200
private-gpt-ollama-1 | 18:03:55.989 [INFO ] uvicorn.access - 172.18.0.1:61960 - "GET /queue/data?session_hash=gjx9zkk6hbu HTTP/1.1" 200
private-gpt-ollama-1 | 18:03:56.960 [INFO ] uvicorn.access - 172.18.0.1:59612 - "POST /run/predict HTTP/1.1" 200
private-gpt-ollama-1 | 18:08:30.668 [INFO ] uvicorn.access - 172.18.0.1:59372 - "POST /queue/join HTTP/1.1" 200
private-gpt-ollama-1 | 18:08:30.670 [INFO ] uvicorn.access - 172.18.0.1:59372 - "GET /queue/data?session_hash=gjx9zkk6hbu HTTP/1.1" 200
private-gpt-ollama-1 | 18:08:30.702 [INFO ] private_gpt.ui.ui - Setting system prompt to: You are an AI engine
private-gpt-ollama-1 |
private-gpt-ollama-1 | 18:08:32.171 [INFO ] uvicorn.access - 172.18.0.1:59372 - "POST /run/predict HTTP/1.1" 200
private-gpt-ollama-1 | 18:08:32.188 [INFO ] uvicorn.access - 172.18.0.1:59382 - "POST /run/predict HTTP/1.1" 200
private-gpt-ollama-1 | 18:08:32.189 [INFO ] uvicorn.access - 172.18.0.1:59372 - "POST /run/predict HTTP/1.1" 200
private-gpt-ollama-1 | 18:08:32.204 [INFO ] uvicorn.access - 172.18.0.1:59372 - "POST /queue/join HTTP/1.1" 200
private-gpt-ollama-1 | 18:08:32.207 [INFO ] uvicorn.access - 172.18.0.1:59372 - "GET /queue/data?session_hash=gjx9zkk6hbu HTTP/1.1" 200
```

Submitting a second chat message (this time with a system prompt set) produces the identical traceback, again ending in:

```
private-gpt-ollama-1 | TypeError: missing a required argument: 'messages'
```
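For what it's worth, the last few frames point at the interaction between the wrapper at `private_gpt/components/llm/llm_component.py:183` and llama_index's instrumentation dispatcher: the dispatcher re-binds the incoming arguments against the wrapped function's signature via `inspect.signature(func).bind(*args, **kwargs)`, and that bind raises because `messages` is not among the arguments it receives. A minimal sketch of the failure mode (the `(self, messages)` signature here is my assumption, mirroring the `stream_chat(all_messages)` call in the traceback):

```python
import inspect

def stream_chat(self, messages):
    """Stand-in for the wrapped LLM method; (self, messages) is assumed."""

# What dispatcher.py effectively does at line 221: re-bind the incoming
# call arguments against the wrapped function's signature.
sig = inspect.signature(stream_chat)

# If an intermediate wrapper forwards only `self` (e.g. `messages` was
# consumed along the way), the bind fails exactly as in the traceback:
try:
    sig.bind(object())  # only `self` arrives, no `messages`
except TypeError as e:
    print(e)  # -> missing a required argument: 'messages'
```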
Steps to Reproduce
I built the PrivateGPT package and ran it in Docker. I modified the model to llama3.1:70b. Ollama runs outside of Docker on port 11434 (a connectivity check is sketched below).
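To rule out a networking problem between the container and the host-side Ollama, here is the quick connectivity check I mean (a sketch only; `host.docker.internal` is an assumption, so substitute whatever address your container uses to reach the host):

```python
import json
import urllib.request

# Assumption: Ollama runs on the Docker host, outside the compose network.
OLLAMA_BASE = "http://host.docker.internal:11434"

# Ollama's GET /api/tags endpoint lists the locally available models.
with urllib.request.urlopen(f"{OLLAMA_BASE}/api/tags", timeout=5) as resp:
    models = json.load(resp)["models"]

print([m["name"] for m in models])  # 'llama3.1:70b' should appear here
```

The request succeeds and lists the model, so the error does not appear to be a connectivity issue.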
Expected Behavior
Generate a response
Actual Behavior
The request fails with `TypeError: missing a required argument: 'messages'`; no response is generated.
Environment
Ubuntu 20.04, RTX A6000 ADA
Additional Information
No response
Version
No response
Setup Checklist
- [X] Confirm that you have followed the installation instructions in the project’s documentation.
- [X] Check that you are using the latest version of the project.
- [X] Verify disk space availability for model storage and data processing.
- [X] Ensure that you have the necessary permissions to run the project.
NVIDIA GPU Setup Checklist
- [X] Check that all the CUDA dependencies are installed and are compatible with your GPU (refer to CUDA's documentation)
- [X] Ensure an NVIDIA GPU is installed and recognized by the system (run `nvidia-smi` to verify).
- [X] Ensure proper permissions are set for accessing GPU resources.
- [X] Docker users - Verify that the NVIDIA Container Toolkit is configured correctly (e.g. run `sudo docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi`)