
ChatLocalAI Module Connection Issue: ECONNREFUSED Error

Open · nattyraz opened this issue on Jun 11, 2023 · 10 comments

LocalAI version:

Latest version as of June 11, 2023

Environment, CPU architecture, OS, and Version:

Windows 11 Pro (Intel), CPU: i9-10900K, RAM: 32GB

Describe the bug

I am unable to connect to the LocalAI server from Flowise using the ChatLocalAI module in the Conversational Retrieval QA Chain. When I send a request, I receive an ECONNREFUSED error on 127.0.0.1:8080. This occurs despite the server appearing to be running correctly according to the Docker logs, the docker ps command, and the successful response from curl http://localhost:8080/v1/models.

To Reproduce

  1. Clone and set up the LocalAI repository.
  2. Run docker-compose up -d --pull always to start the server.
  3. In Flowise, use the ChatLocalAI module within the Conversational Retrieval QA Chain to send a request.
  4. See the ECONNREFUSED error.

Expected behavior

I expected to be able to connect to the LocalAI server at localhost:8080 and send requests from Flowise using the ChatLocalAI module within the Conversational Retrieval QA Chain.

Logs

The Docker logs appear to indicate that the server is starting correctly. There is no obvious error message. The response from curl http://localhost:8080/v1/models is successful.

Additional context

I have installed Flowise and LocalAI locally on my machine, both using Docker. I have also ensured that the appropriate model files are present in the /models directory. Pinecone and OpenAI API are set up correctly. Flowise is accessible at http://localhost:3000/ and LocalAI is accessible at http://localhost:8080/.

nattyraz avatar Jun 11 '23 15:06 nattyraz

:warning::warning::warning::warning::warning:

Hi! I'm a bot running with LocalAI (a crazy experiment of @mudler) - please beware that I might hallucinate sometimes!

but... I can also be funny or helpful :smile_cat: and I can generally give good tips on where to look in the documentation or the code, based on what you wrote in the issue.

Don't engage in conversation with me, I don't support replying (yet)!

:warning::warning::warning::warning::warning:

Based on the information provided, it seems the issue is that the chatbot-ui service cannot connect to the LocalAI server. Here are some possible solutions and debugging steps:

  1. Ensure that the OPENAI_API_HOST environment variable in the docker-compose.yml file is set to the correct IP address or FQDN of the LocalAI server. This should match the IP address or FQDN that the chatbot-ui service tries to access (see the sketch after this list).
  2. Check whether a firewall or network filter is blocking the chatbot-ui service from reaching the LocalAI server. Temporarily disable any firewalls or network filters and retry.
  3. Ensure that the OPENAI_API_KEY environment variable in the docker-compose.yml file is set to the API key your account uses.
  4. Check whether the LocalAI server itself has issues by accessing it directly with curl or another tool. If this fails, check the Docker logs and system logs for error messages or stack traces.
  5. Ensure that the chatbot-ui service is running correctly by checking its Docker logs and system logs for error messages or stack traces.
  6. Try restarting both the LocalAI server and the chatbot-ui service to see if this resolves the issue.
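
For steps 1 and 4, a quick sanity check (a sketch only: the service name chatgpt is an assumption based on the chatbot-ui example compose file, so adjust it to your docker-compose.yml):

# print the environment the UI container actually received
docker-compose exec chatgpt env | grep -E 'OPENAI_API_(HOST|KEY)'

# hit the LocalAI API directly from the host, bypassing the UI entirely
curl http://localhost:8080/v1/models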

Sources:

  • https://github.com/go-skynet/LocalAI/tree/master/.github/ISSUE_TEMPLATE/bug_report.md
  • https://github.com/go-skynet/LocalAI/tree/master/examples/chatbot-ui-manual/docker-compose.yaml
  • https://github.com/go-skynet/LocalAI/tree/master/examples/chatbot-ui-manual/README.md
  • https://github.com/go-skynet/LocalAI/tree/master/examples/chatbot-ui/README.md

localai-bot avatar Jul 18 '23 19:07 localai-bot

Having the same issue. I'm using Docker for both Flowise and LocalAI. Both are installed and work independently of one another, but not together.

curl http://localhost:8080/v1/models
{"object":"list","data":[{"id":"openassistant-llama2-13b-orca-8k-3319.ggmlv3.q8_0.bin","object":"model"},{"id":"~BROMIUM","object":"model"}]}(

System Specs

OS Name:                   Microsoft Windows 11 Pro
OS Version:                10.0.22621 N/A Build 22621
OS Manufacturer:           Microsoft Corporation
OS Configuration:          Standalone Workstation
OS Build Type:             Multiprocessor Free
System Model:              HP ZBook Studio 15.6 inch G8 Mobile Workstation PC
System Type:               x64-based PC
Processor(s):              1 Processor(s) Installed.
                           [01]: Intel64 Family 6 Model 141 Stepping 1 GenuineIntel ~2611 Mhz
BIOS Version:              HP T92 Ver. 01.13.01, 3/31/2023
Total Physical Memory:     32,432 MB

GPU 1

	NVIDIA RTX A5000 Laptop GPU

	Driver version:	31.0.15.3625
	Driver date:	6/10/2023
	DirectX version:	12 (FL 12.1)
	Physical location:	PCI bus 1, device 0, function 0

	Utilization	0%
	Dedicated GPU memory	0.0/16.0 GB
	Shared GPU memory	0.0/15.8 GB
	GPU Memory	0.0/31.8 GB

Inside docker_flowise_1

2023-08-01 17:51:09 2023-08-01 21:51:09 [INFO]: ⬆️ POST /api/v1/chatmessage/b9af940f-a720-46d0-899a-309ebf4ff59c
2023-08-01 17:51:09 2023-08-01 21:51:09 [INFO]: ⬆️ POST /api/v1/internal-prediction/b9af940f-a720-46d0-899a-309ebf4ff59c
2023-08-01 17:52:37 2023-08-01 21:52:37 [ERROR]: [server]: Error: connect ECONNREFUSED 127.0.0.1:8080
2023-08-01 17:52:37 Error: connect ECONNREFUSED 127.0.0.1:8080
2023-08-01 17:52:37     at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1494:16)
2023-08-01 17:52:37 2023-08-01 21:52:37 [INFO]: ⬆️ POST /api/v1/chatmessage/b9af940f-a720-46d0-899a-309ebf4ff59c

Inside LocalAI-Docker


2023-08-01 03:38:31 CPU info:
2023-08-01 03:38:31 model name  : 11th Gen Intel(R) Core(TM) i9-11950H @ 2.60GHz
2023-08-01 03:38:31 flags               : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid movdiri movdir64b avx512_vp2intersect flush_l1d arch_capabilities
2023-08-01 03:38:31 CPU:    AVX    found OK
2023-08-01 03:38:31 CPU:    AVX2   found OK
2023-08-01 03:38:31 CPU:    AVX512 found OK
2023-08-01 03:38:31 @@@@@
2023-08-01 03:38:32 
2023-08-01 03:38:32  ┌───────────────────────────────────────────────────┐ 
2023-08-01 03:38:32  │                   Fiber v2.48.0                   │ 
2023-08-01 03:38:32  │               http://127.0.0.1:8080               │ 
2023-08-01 03:38:32  │       (bound on host 0.0.0.0 and port 8080)       │ 
2023-08-01 03:38:32  │                                                   │ 
2023-08-01 03:38:32  │ Handlers ............ 31  Processes ........... 1 │ 
2023-08-01 03:38:32  │ Prefork ....... Disabled  PID ................ 14 │ 
2023-08-01 03:38:32  └───────────────────────────────────────────────────┘ 
2023-08-01 03:38:32 
2023-08-01 03:38:31 7:38AM DBG no galleries to load
2023-08-01 03:38:31 7:38AM INF Starting LocalAI using 4 threads, with models path: /models
2023-08-01 03:38:31 7:38AM INF LocalAI version: v1.23.0-19-gae36bae (ae36b

mark-ov avatar Aug 01 '23 21:08 mark-ov

Having the same issue here. Tried running on different port with same result.

danorlando avatar Aug 16 '23 23:08 danorlando

> Having the same issue here. Tried running on different port with same result.

Same here: I'm getting the same error, and switching to a different port didn't fix the problem. Same OS and similar hardware as the creator of this issue.

roos-robert avatar Aug 18 '23 22:08 roos-robert

I was able to resolve this issue by using http://host.docker.internal:8080/v1 as the Base Path for the ChatLocalAI module in Flowise.

OrsonAround avatar Aug 19 '23 21:08 OrsonAround
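
For anyone who wants to confirm this from inside the Flowise container before changing the Base Path, a minimal check (the container name docker_flowise_1 is taken from the logs above; whether the Flowise image ships wget is an assumption, so substitute curl or install a client if needed):

# from inside the container, 127.0.0.1 is the container itself -> connection refused
docker exec -it docker_flowise_1 sh -c 'wget -qO- http://127.0.0.1:8080/v1/models'

# host.docker.internal resolves to the Docker host (Docker Desktop on Windows/macOS)
docker exec -it docker_flowise_1 sh -c 'wget -qO- http://host.docker.internal:8080/v1/models'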

I had the same issue - and it took me some time to find out why: it looks like a trailing slash in the Base Path leads to this error.

Would it perhaps make sense to strip a trailing slash near https://github.com/FlowiseAI/Flowise/blob/main/packages/components/nodes/chatmodels/ChatLocalAI/ChatLocalAI.ts#L80 with something like

// strip a single trailing slash from the configured Base Path
const lastChar = basePath.substr(-1)
if (lastChar === '/') {
    basePath = basePath.substring(0, basePath.length - 1)
}

(yes, there is probably a more elegant solution, but I did test these lines)
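
The same normalization, sketched at the shell level for anyone checking a Base Path value by hand (illustrative only):

basePath='http://host.docker.internal:8080/v1/'
basePath="${basePath%/}"   # POSIX parameter expansion: strips one trailing slash
echo "$basePath"           # -> http://host.docker.internal:8080/v1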

ludwigprager avatar Aug 21 '23 00:08 ludwigprager

The problem seems to be that LocalAI allows traffic on http://localhost:8080 but not on http://127.0.0.1:8080.

When configuring the UI, http://localhost:8080/v1 can be entered, but by the time the request is made it has been transformed into http://127.0.0.1:8080/v1, which is not allowed.

There should be a configuration option for allowing other domains to access LocalAI. If such a setting or environment variable already exists, could somebody please point it out?

mholtzhausen avatar Aug 27 '23 22:08 mholtzhausen

> The problem seems to be that LocalAI allows traffic on http://localhost:8080 but not on http://127.0.0.1:8080.
>
> When configuring the UI, http://localhost:8080/v1 can be entered, but by the time the request is made it has been transformed into http://127.0.0.1:8080/v1, which is not allowed.
>
> There should be a configuration option for allowing other domains to access LocalAI. If such a setting or environment variable already exists, could somebody please point it out?

Actually, this is completely false. My two services were running in different containers, and the front-end was calling its own backend, which then proxied that call to the LocalAI API. Inside a Docker container, localhost means the container itself, not the host. So @OrsonAround was spot on with his advice: I should be using http://host.docker.internal:8080 as my host.

It still doesn't work for some reason, but that is unrelated to the issue described here.

BONUS: there is a config for allowing other domains on LocalAI, which defaults to * and is plain as day in the .env file:

CORS_ALLOW_ORIGINS=*
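
For completeness, one way to change it (a sketch, assuming the stock LocalAI docker-compose.yml, which reads its settings from .env):

# edit the CORS_ALLOW_ORIGINS line in .env (it defaults to *), for example:
#   CORS_ALLOW_ORIGINS=http://localhost:3000
# then recreate the container so the new value is picked up:
docker-compose up -d --force-recreate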

mholtzhausen avatar Aug 28 '23 09:08 mholtzhausen

> I was able to resolve this issue by using http://host.docker.internal:8080/v1 as the Base Path for the ChatLocalAI module in Flowise.

Absolutely the solution!

arberrexhepi avatar Oct 11 '23 00:10 arberrexhepi

Where can I find the connect credentials for the ChatLocalAI node in Flowise?

ss1411 avatar Mar 11 '24 10:03 ss1411