Connection refused when I try RAG
I am using unsloth/Llama-3.2-1B for a simple RAG interaction, but I get a ConnectionRefused error.
Currently, I have started Transformer Lab on a remote instance and then use it locally from my computer.
Here is a short traceback:

```
Loaded 27 docs
Retrying llama_index.llms.openai.base.OpenAI._chat in 1.0 seconds as it raised APIConnectionError: Connection error..
Retrying llama_index.llms.openai.base.OpenAI._chat in 1.4949088837795608 seconds as it raised APIConnectionError: Connection error..
Traceback (most recent call last):
  File ".transformerlab/envs/transformerlab/lib/python3.11/site-packages/httpx/_transports/default.py", line 101, in map_httpcore_exceptions
    yield
  File "transformerlab/envs/transformerlab/lib/python3.11/site-packages/httpx/_transports/default.py", line 250, in handle_request
    resp = self._pool.handle_request(req)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".transformerlab/envs/transformerlab/lib/python3.11/site-packages/httpcore/_sync/connection_pool.py", line 256, in handle_request
    raise exc from None
  File "transformerlab/envs/transformerlab/lib/python3.11/site-packages/httpcore/_sync/connection_pool.py", line 236, in handle_request
    response = connection.handle_request(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
```
Hi @sandeep-selvaraj, it seems like the model somehow got killed on the remote machine where the API was running. Just to confirm a couple of things:
- Was your model running when you sent the request?
- For the remote connection, did you start the API separately on your remote machine and connect it to the App, or did you follow some other process?
- Would you be able to attach the file called `local_server.log`? I can try to figure out whether something else went wrong while you were trying out RAG. This file can be found in the directory `~/.transformerlab` on the machine where your API is running.
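While gathering the above, a quick way to tell whether the `Connection error` is the API process being down versus something inside RAG is to check plain TCP reachability from the machine running the App. This is a minimal sketch, not Transformer Lab code; the host and port below are placeholders you would replace with your remote instance's address and whatever port your API is serving on:

```python
import socket


def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection handles DNS resolution and raises OSError
        # (including ConnectionRefusedError) on failure.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Example usage: substitute your remote machine's address and API port.
# If this returns False, the API process is likely not running (or is
# blocked by a firewall), which would explain the ConnectionRefused error.
reachable = can_connect("127.0.0.1", 8338)
```

If this returns False while the App reports `Connection error`, the problem is at the network/process level rather than in the RAG pipeline itself.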
Closing this now, as we've verified both the remote and local instances and this was raised for an older version. Please do reopen and tag me in case this is still happening. Thanks!