Cannot connect to ollama server on remote
Before submitting your bug report
- [X] I believe this is a bug. I'll try to join the Continue Discord for questions
- [X] I'm not able to find an open issue that reports the same bug
- [X] I've seen the troubleshooting guide on the Continue Docs
Relevant environment info
- OS: Arch Linux
- Continue: 0.8.24
- IDE: vscode 1.88.1
- Model: Ollama 0.1.31
Description
I have a laptop ("butterfly") where ollama serve is installed and models are pulled, but my project is on a remote machine. I configured tab autocompletion with Ollama as suggested (config on butterfly):
"tabAutocompleteModel": {
"title": "Tab Autocomplete Model",
"provider": "ollama",
"model": "starcoder2:3b",
"apiBase": "http://butterfly:11434",
"num_thread": 1
},
and yet, I get failures connecting to http://butterfly:11434 - which I assume is happening from the remote since that's where the code lives. But from the remote this works:
curl http://butterfly:11434/api/generate -d '{ "model": "starcoder2:3b", "prompt": "if x ==" }'
{"model":"starcoder2:3b","created_at":"2024-04-17T12:26:12.424072856Z","response":" ","done":false}
{"model":"starcoder2:3b","created_at":"2024-04-17T12:26:12.439451787Z","response":"3","done":false}
...
Unfortunately I haven't found a way to see debug logs for Continue.
To reproduce
No response
Log output
console.ts:137 [Extension Host] Error generating autocompletion: FetchError: request to http://butterfly:11434/api/generate failed, reason: connect EHOSTUNREACH 192.168.178.71:11434
at ClientRequest.<anonymous> (/home/pmatos/.vscode-server/extensions/continue.continue-0.8.24-linux-arm64/out/extension.js:25975:14)
at ClientRequest.emit (node:events:529:35)
at Socket.socketErrorListener (node:_http_client:501:9)
at Socket.emit (node:events:517:28)
at emitErrorNT (node:internal/streams/destroy:151:8)
at emitErrorCloseNT (node:internal/streams/destroy:116:3)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
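One way to narrow this down, assuming the request really is made from the remote extension host (the .vscode-server path in the stack trace points there): compare how the configured hostname resolves on that machine with the address shown in the EHOSTUNREACH error. A minimal check, using only the hostname, IP, and port from the report above:
# Run on the remote machine (e.g. in the VS Code integrated terminal):
curl -sS http://butterfly:11434/api/tags        # the name used in apiBase; reported to work via curl
curl -sS http://192.168.178.71:11434/api/tags   # the address the extension actually tried
getent hosts butterfly                          # what "butterfly" resolves to on this machine
# If the name and the IP behave differently, the failure is name resolution or
# routing on the remote rather than anything in the Continue config.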
I'm having a very similar problem, but with IntelliJ: https://github.com/continuedev/continue/issues/1136.
+1
+1
> which I assume is happening from the remote since that's where the code lives. But from the remote this works:
@pmatos I want to make sure I understand what you're referring to as the remote. Does this mean that you are using Remote SSH in VS Code? If so, Continue runs by default as a UI extension, so it runs on your laptop rather than in the remote with the code. So in this case you do not need to connect to butterfly, but just to localhost.
@readmodifywrite @gokulkgm Can you share more about what exactly your setups look like? Given that I wasn't perfectly clear on the original issue's setup, I want to make sure there aren't different details about your situations. Looking to debug this as soon as I have more information!
@sestinj I had an issue connecting to the Ollama host (I forget what the exact error was).
Then in another issue it was mentioned to use the pre-release version of the extension. With that I was able to connect to the Ollama API.
> which I assume is happening from the remote since that's where the code lives. But from the remote this works:
> @pmatos I want to make sure I understand what you're referring to as the remote. Does this mean that you are using Remote SSH in VS Code? If so, Continue runs by default as a UI extension, so it runs on your laptop rather than in the remote with the code. So in this case you do not need to connect to butterfly, but just to localhost.
I see - that was not my understanding. My understanding was that when you're in a remote project, extensions are installed on the remote.
+1
0.8.27 fixed this for me...previously anything newer than 0.8.23 would not work as I outlined here -> https://github.com/continuedev/continue/issues/1215#issuecomment-2093801770
@xndpxs @pmatos is this solved now for y'all in 0.8.27, as it was for ahoplock?
Also, just for reference on what I was mentioning about running locally vs. remotely: https://code.visualstudio.com/api/advanced-topics/remote-extensions
This honestly is a point of discussion right now, so if you have strong feelings about where the extension ought to run, I'm open to hearing them! The primary concern with moving to the remote is that if you wanted to run an LLM locally, you'd have to go through extra trouble to expose your machine's localhost to the remote server.
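For anyone who does need a remote machine to reach the Ollama instance on their laptop, one option (a sketch, independent of Continue) is to have Ollama listen on all interfaces instead of only loopback; OLLAMA_HOST is the variable the Ollama docs use for the listen address:
# On the laptop: bind the Ollama server to all interfaces so other machines can reach it
# (firewall this appropriately if the network is not trusted).
OLLAMA_HOST=0.0.0.0:11434 ollama serve
# From the remote, reach the laptop by name or IP:
curl -sS http://butterfly:11434/api/tags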
@sestinj Yes, in my case it was an Ollama variable problem.
I looked with
OLLAMA_SERVER=ip:port ollama list
and found just 1 model, so I proceeded to install the other one like this:
OLLAMA_SERVER=ip:port ollama install starcoder2
And it worked.
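(For reference, the variable the Ollama CLI documents for pointing the client at another server is OLLAMA_HOST, and models are fetched with pull; a minimal sketch using the same ip:port placeholder:)
# List the models available on the remote Ollama server:
OLLAMA_HOST=ip:port ollama list
# Fetch an additional model onto that server:
OLLAMA_HOST=ip:port ollama pull starcoder2:3b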
> @xndpxs @pmatos is this solved now for y'all in 0.8.27, as it was for ahoplock?
> Also, just for reference on what I was mentioning about running locally vs. remotely: https://code.visualstudio.com/api/advanced-topics/remote-extensions
> This honestly is a point of discussion right now, so if you have strong feelings about where the extension ought to run, I'm open to hearing them! The primary concern with moving to the remote is that if you wanted to run an LLM locally, you'd have to go through extra trouble to expose your machine's localhost to the remote server.
It is still not working here. It feels like the extension is trying to run remotely, because I definitely have Ollama running locally, and with:
"tabAutocompleteModel": {
"title": "Tab Autocomplete Model - Starcoder2:3b",
"provider": "ollama",
"model": "starcoder2:3b",
"apiBase": "https://localhost:11434"
},
"tabAutocompleteOptions": {
"useSuffix": true,
"useCache": true,
"multilineCompletions": "never",
"useOtherFiles": true
},
I keep getting:
FetchError: request to https://127.0.0.1:11434/api/generate failed, reason: connect ECONNREFUSED
If I try a curl request on the command line, it works:
curl http://localhost:11434/api/generate -d '{
"model": "llama3",
"prompt": "Why is the sky blue?"
}'
{"model":"llama3","created_at":"2024-05-11T06:58:04.886581958Z","response":"What","done":false}
{"model":"llama3","created_at":"2024-05-11T06:58:04.917646086Z","response":" a","done":false}
{"model":"llama3","created_at":"2024-05-11T06:58:04.948962271Z","response":" great","done":false}
{"model":"llama3","created_at":"2024-05-11T06:58:04.98032007Z","response":" question","done":false}
...
Here's a screenshot of what I see as a problem:
If I don't even set up the apiBase, i.e. localhost:11434 for Ollama, it offers to download Ollama. But I don't need that, because it's already running, as you can see in the terminal. It feels like there's some confusion between what's running locally and what's running remotely.
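One detail that might separate the two cases, assuming a Remote SSH window: the VS Code integrated terminal runs on the remote, while a plain terminal on the laptop is local, so running the same check in both places shows what each side can actually reach:
# In the VS Code integrated terminal (runs on the remote under Remote SSH):
curl -sS http://127.0.0.1:11434/api/tags   # "connection refused" here means no Ollama is listening on the remote
# In a terminal on the laptop:
curl -sS http://127.0.0.1:11434/api/tags   # should list the locally pulled models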
> which I assume is happening from the remote since that's where the code lives. But from the remote this works:
> @pmatos I want to make sure I understand what you're referring to as the remote. Does this mean that you are using Remote SSH in VS Code? If so, Continue runs by default as a UI extension, so it runs on your laptop rather than in the remote with the code. So in this case you do not need to connect to butterfly, but just to localhost.
I don't understand how this is true. I just did some experimentation: if I remove Continue from the remote, it doesn't even show the Continue button in the sidebar when I have my remote project open. Once I do install it on the remote, the config that gets read is the config.json file on the remote. And as shown above, everything points to the extension running on the remote and looking for the Ollama server there - not locally.
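The stack trace earlier in the thread already hints at this: the extension path is under /home/pmatos/.vscode-server/extensions/, which is where VS Code keeps remotely installed extensions. A quick way to confirm on which side Continue is installed, assuming the default extension directories on Linux:
# On the remote (e.g. in the integrated terminal): remotely installed extensions
ls ~/.vscode-server/extensions | grep -i continue
# On the laptop: locally installed extensions
ls ~/.vscode/extensions | grep -i continue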
@pmatos It does sound like you've installed it on the remote. In that case you'll just need to find a way to forward the Ollama endpoint.
Also (and this is less relevant assuming you have installed on the remote), I notice an https:// instead of http://. Does changing this have any effect? localhost should generally use http.
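One way to forward it, as a sketch: a reverse SSH tunnel opened from the laptop makes the laptop's Ollama reachable on the remote's localhost, so the remote-side config can keep apiBase as http://localhost:11434 (user and host below are placeholders):
# Run on the laptop: expose the laptop's localhost:11434 on the remote's localhost:11434.
ssh -N -R 11434:localhost:11434 user@remote-host
# Then, from the remote, the laptop's Ollama answers on localhost:
curl -sS http://localhost:11434/api/tags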
> @pmatos It does sound like you've installed it on the remote. In that case you'll just need to find a way to forward the Ollama endpoint.
> Also (and this is less relevant assuming you have installed on the remote), I notice an https:// instead of http://. Does changing this have any effect? localhost should generally use http.
Now you are saying I have it installed on the remote. To be honest, I am confused. At some point it was said that it always runs locally, even in remote projects, but as mentioned I don't think that's true.
The remote is a tiny board where I don't want to run LLMs. So if Continue runs on the remote, I will have to find a way to keep using the Ollama that runs locally on my PC. I will do more testing tomorrow.
Yes, sorry for the confusion. What I think I said that was misleading before was this: it runs locally "by default", but it is also possible to install it on the remote. And perhaps something even odder is happening and it was installed on the remote by default for you. That last part I am unsure of.
> Yes, sorry for the confusion. What I think I said that was misleading before was this: it runs locally "by default", but it is also possible to install it on the remote. And perhaps something even odder is happening and it was installed on the remote by default for you. That last part I am unsure of.
Ah yes, that makes sense. Maybe I misunderstood. Apologies from my side as well. :)