Cannot be used in local area network
Describe the bug
In an environment where the local area network cannot connect to the internet, Continue still cannot be used normally even though a local Code Llama service is configured in config.py. Are there any features that must access the internet, and is it possible to run the entire system locally?
Environment
- Operating System: Windows 10
- Python Version: [e.g. 3.10.6]
- Continue Version: 0.0.383

Logs
Yes, this should be possible. I've worked with two other people who were able to accomplish this by downloading the server and running it manually. You can do this with pip install continuedev && python -m continuedev. Then in VS Code settings, search for "continue" and check the box for "Manually Running Server".
Once the Continue Python server is running, things should work. Tomorrow I will do a nicer write-up of this in our documentation.
@sestinj In fact, I have already downloaded the run.exe file within the LAN and manually started it. The server_version.txt file has also been written with the corresponding version 0.0.383. The GGML content is configured as follows:
models=Models(
    default=GGML(
        max_context_length=2048,
        server_url="http://10.97.40.91:12345"
    )
)
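For context, a complete config.py built around this snippet would look roughly like the following (a minimal sketch: the GGML import path is borrowed from an example later in this thread, and the remaining ContinueConfig fields are left as in the existing file):

from continuedev.src.continuedev.libs.llm.ggml import GGML

config = ContinueConfig(
    # ... other settings from the existing config.py
    models=Models(
        default=GGML(
            max_context_length=2048,
            server_url="http://10.97.40.91:12345",  # local Code Llama / ggml server on the LAN
        )
    ),
)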
- Question 1: After starting VS Code, it still automatically deletes run.exe and re-downloads it.
- Question 2: After manually running run.exe, "continue server starting" took a long time, although the Continue backend service log prints normally. The Code Llama service did not receive a request; it seems to be blocked at some internet request step.
- Question 3: Is there a step that requires access to internet services? If so, machines on the LAN that cannot access the internet cannot use Continue. In theory, non-core functions should not affect usability; the core is the local Continue server and the local Code Llama model.
- Suggestion: Please consider doing a comprehensive test on machines that cannot access the internet. If any intermediate step makes an internet request, Continue cannot be used on offline LAN machines either.
- Question 4: The request messages currently cannot be viewed with Fiddler, probably because this tool does not go through the system's proxy service?
- Suggestion: Could you add support for going through a proxy server, to make packet capture and troubleshooting easier?
Continue.log:
INFO: ('127.0.0.1', 55962) - "WebSocket /ide/ws" [accepted]
[2023-09-13 19:05:34,237] [DEBUG] Accepted websocket connection from Address(host='127.0.0.1', port=55962)
INFO: connection open
[2023-09-13 19:05:34,283] [DEBUG] Received message while initializing workspaceDirectory
[2023-09-13 19:05:34,284] [DEBUG] Received message while initializing uniqueId
[2023-09-13 19:05:35,235] [DEBUG] New session: None
[2023-09-13 19:05:35,480] [DEBUG] Loaded Continue config file from C:\Users\lilizs\.continue\config.py
Failed to capture event: No internet connection
[2023-09-13 19:05:54,040] [DEBUG] Starting context manager
[2023-09-13 19:05:54,043] [WARNING] Failed to load saved_context_groups.json: Expecting value: line 1 column 1 (char 0). Reverting to empty list.
[2023-09-13 19:06:14,130] [DEBUG] Sending session id: e5319b08-849a-4cc0-ab9c-ef2a853565ca
Connecting
[2023-09-13 19:06:14,137] [WARNING] Meilisearch is not running.
[2023-09-13 19:06:14,138] [DEBUG] Starting MeiliSearch...
[2023-09-13 19:06:14,170] [DEBUG] Received websocket connection at url: ws://localhost:65432/gui/ws?session_id=e5319b08-849a-4cc0-ab9c-ef2a853565ca
INFO: ('127.0.0.1', 56039) - "WebSocket /gui/ws?session_id=e5319b08-849a-4cc0-ab9c-ef2a853565ca" [accepted]
[2023-09-13 19:06:14,172] [DEBUG] Session started
[2023-09-13 19:06:14,172] [DEBUG] Registered websocket for session e5319b08-849a-4cc0-ab9c-ef2a853565ca
Connected
INFO: connection open
connection handler failed
Traceback (most recent call last):
File "asyncio\tasks.py", line 234, in __step
File "websockets\legacy\protocol.py", line 959, in transfer_data
File "websockets\legacy\protocol.py", line 1029, in read_message
File "websockets\legacy\protocol.py", line 1104, in read_data_frame
File "websockets\legacy\protocol.py", line 1161, in read_frame
File "websockets\legacy\framing.py", line 68, in read
File "asyncio\streams.py", line 708, in readexactly
File "asyncio\streams.py", line 501, in _wait_for_data
File "asyncio\futures.py", line 285, in __await__
File "asyncio\tasks.py", line 304, in __wakeup
File "asyncio\futures.py", line 196, in result
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "websockets\legacy\server.py", line 240, in handler
File "pylsp\python_lsp.py", line 135, in pylsp_ws
File "websockets\legacy\protocol.py", line 497, in __aiter__
File "websockets\legacy\protocol.py", line 568, in recv
File "websockets\legacy\protocol.py", line 944, in ensure_open
websockets.exceptions.ConnectionClosedError: sent 1011 (unexpected error) keepalive ping timeout; no close frame received
[2023-09-13 19:07:03,852] [DEBUG] Received GUI message {"messageType":"main_input","data":{"input":"/edit write python helloworld program"}}
Error importing tiktoken HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /encodings/cl100k_base.tiktoken (Caused by ProxyError('Cannot connect to proxy.', ConnectionResetError(10054, 'The remote host forcibly closed an existing connection。', None, 10054, None)))
[The "Error importing tiktoken" line above is repeated many more times in the log.]
Question 2: After manually running run.exe, "continue server starting" took a long time, although the Continue backend service log prints normally. The Code Llama service did not receive a request; it seems to be blocked at some internet request step.
I had a similar issue and had to disable telemetry in config.py
@star7810 TL;DR: kohlivarun5 is right; once the server is up and telemetry is disabled, things should work. I'm making a couple of changes on my end and will have a full write-up to share soon.
- At the risk of asking about something you've already done... is the checkbox selected as in this screenshot? If yes and it's still trying to re-download, I have some serious sanity checking to do. I've tried on my computer with this box checked, even with server_version.txt and the binary removed, and it does not attempt to kill/re-download the server.
- Running the binary on Windows can sometimes be slow to start, since it has to unpackage the contents of a zipped directory. Using the PyPI package (python -m continuedev) would be faster to start up after the initial pip install, and might be a more natural way to start/stop the server.
- This test has been done on air-gapped computers, but it takes a few adjustments, as kohlivarun5 mentions above (set allow_anonymous_telemetry=False in config.py; see the sketch after this list). I'm working on a full write-up of how to do this and will share it soon.
- We already have proxy support if you use the OpenAI class, and I've just added support for GGML; the new version is on its way out the door. It will work by setting GGML(..., proxy="<MY_PROXY_URL>"), also shown in the sketch after this list.
- The tiktoken error is okay (we fall back to an alternative that doesn't require downloading the token vocabulary to count tokens), but I've made a much-needed change so the warning isn't repeated more than once.
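Building on the config sketch shown earlier in the thread, the two adjustments mentioned in this reply would amount to one extra top-level field and one extra GGML parameter (a minimal sketch: <MY_PROXY_URL> is a placeholder, and the server address is the user's local endpoint):

config = ContinueConfig(
    allow_anonymous_telemetry=False,  # stop outbound telemetry requests on air-gapped machines
    models=Models(
        default=GGML(
            max_context_length=2048,
            server_url="http://10.97.40.91:12345",
            proxy="<MY_PROXY_URL>",  # optional: route GGML requests through a proxy for packet capture
        )
    ),
)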
Here is some new documentation describing the steps you should take to run Continue without internet: https://continue.dev/docs/walkthroughs/running-continue-without-internet
@sestinj Great, requests can now be sent to the Code Llama backend service, but there is a problem with the returned message. Has anyone else encountered a similar problem?
- The intercepted message, i.e. the request content sent to Code Llama, is as follows:
{"prompt": ".jsqlparser.JSQLParserException;\r\nimport net.sf.jsqlparser.parser.CCJSqlParserUtil;\r\nimport net.sf.jsqlparser.statement.Statement;\r\n\r\n/**\r\n * @author xzf\r\n * @date 2022/8/25 17:42\r\n */\r\npublic class SqlParserTest {\r\n\r\n public static void main(String[] args){\r\n\r\n// String sql = "merge into T_FW_USER_RELATIONSHIP a using ( select 70769 as L_REL_ID,'ROLE' as S_REL_TYPE from dual union all select 4917 as L_REL_ID,'ROLE' as S_REL_TYPE from dual union all select 74917 as L_REL_ID,'ROLE' as S_REL_TYPE from dual union all select 48262 as L_REL_ID,'ROLE' as S_REL_TYPE from dual union all select 75679 as L_REL_ID,'ROLE' as S_REL_TYPE from dual union all select 10356 as L_REL_ID,'GROUP' as S_REL_TYPE from dual union all select 10325 as L_REL_ID,'GROUP' as S_REL_TYPE from dual union all select 73960 as L_REL_ID,'GROUP' as S_REL_TYPE from dual union all select 10355 as L_REL_ID,'GROUP' as S_REL_TYPE from dual union all select 74316 as L_REL_ID,'GROUP' as S_REL_TYPE from dual union all select 70106 as L_REL_ID,'GROUP' as S_REL_TYPE from dual union all select 73927 as L_REL_ID,'GROUP' as S_REL_TYPE from dual union all select 74320 as L_REL_ID,'GROUP' as S_REL_TYPE from dual )b on (a.L_STATUS=1 and a.S_REL_TYPE = b.S_REL_TYPE and a.L_REL_ID = b.L_REL_ID and a.S_ISLEADER ='N' and a.S_USERID = '0022zjk') when not matched then insert (L_ID,S_USERID,S_REL_TYPE,L_REL_ID,S_ISLEADER,L_STATUS,S_CREATOR,T_CREATE_TIME,S_OPERATE_INFO) values(SEQ_ORG.NEXTVAL,'0022zjk',b.S_REL_TYPE,b.L_REL_ID,'N',1,'admin',to_date('20220825171436','yyyyMMddHH24miss'),'')";\r\n\r\n String jSql = "insert into t_test_business_remark(id,name) values(MY_SEQ.NEXTVAL,'ASD')";\r\n try {\r\n Statement parse = CCJSqlParserUtil.parse(jSql);\r\n } catch (JSQLParserException e) {\r\n e.printStackTrace();\r\n }\r\n\r\n }\r\n}\r\n```\nEdit the code to perfectly satisfy the following user request:\nparse to python\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.\n[/INST]", "stream": true, "max_tokens": 1024, "temperature": 0.5, "model": "ggml"}
- response:
{"choices":[{"text":"Error: special tags are not allowed as part of the prompt."}]}
@sestinj It seems that the example code has been deleted? https://github.com/continuedev/ggml-server-example
That's odd. I can still see the repo with example code.
There's a small chance that this is just the output of the LLM, or has this exact same response come back more than once? If so, it could be that the [INST] tags aren't allowed for some reason, but this would be new to llama-cpp-python
Ah I see what you mean about the example code. There was never any python file there, because it just runs a pip package
haha we try again...
Here's another suggestion: Currently, it's hard to identify the cause of an error from the interface when the model service returns a non-200 response. It would be helpful to add frontend notifications for responses with non-200 status codes.
This is a good point, something I can do for sure.
I went through the README and set this up again, seems it should still work, except that llama-cpp-python really doesn't seem to like concurrent requests. My config file looked like this and everything was working smoothly:
from continuedev.src.continuedev.libs.llm.ggml import GGML
from continuedev.src.continuedev.libs.llm.queued import QueuedLLM

...

config = ContinueConfig(
    ...
    models=Models(
        default=QueuedLLM(
            llm=GGML(
                context_length=2048, server_url="http://localhost:8000"
            )
        ),
    ),
    disable_summaries=True,
)
The QueuedLLM wrapper makes sure that only one request happens at a time, which unfortunately seems to be necessary when working with llama-cpp-python. And disable_summaries is optional, but if you're only going to allow one request at a time, it doesn't make sense to force yourself to wait for the summary to be generated.
@sestinj We have attempted this several times, following the instructions at https://github.com/continuedev/ggml-server-example
See our 5 minute quickstart to run any model locally with ggml. While these models don't yet perform as well, they are free, entirely private, and run offline.
However, we encountered an exception during startup. Do you have any ideas about this? The sha256 hash of the model file is correct. The error message is as follows:
(ggml) [lzb@VKF-NLP-GPU-01 ggml-server-example]$ python3 -m llama_cpp.server --model models/wizardLM-7B.ggmlv3.q4_0.bin
gguf_init_from_file: invalid magic number 67676a74
error loading model: llama_model_loader: failed to load model from models/wizardLM-7B.ggmlv3.q4_0.bin
llama_load_model_from_file: failed to load model
Traceback (most recent call last):
File "/home/lzb/.conda/envs/ggml/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/lzb/.conda/envs/ggml/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/lzb/.conda/envs/ggml/lib/python3.8/site-packages/llama_cpp/server/main.py", line 96, in
app = create_app(settings=settings)
File "/home/lzb/.conda/envs/ggml/lib/python3.8/site-packages/llama_cpp/server/app.py", line 337, in create_app
llama = llama_cpp.Llama(
File "/home/lzb/.conda/envs/ggml/lib/python3.8/site-packages/llama_cpp/llama.py", line 340, in init
assert self.model is not None
AssertionError
I believe .ggml files have now been deprecated in favor of .gguf. https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/discussions/14#64e5bc015af55fb4d1f9b61d
I searched around for a .gguf for WizardLM and for some reason it doesn't seem to exist.
But there is a gguf for CodeLlama-instruct, which also works quite well: https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GGUF/tree/main
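If it helps to rule out a bad model file before starting the server, a quick load check with llama-cpp-python could look like this (a minimal sketch; the GGUF filename is illustrative and assumes one of the CodeLlama-Instruct files linked above has been downloaded into models/):

from llama_cpp import Llama

# Loading the file directly reproduces the same failure mode as the server:
# an old-format .ggml binary fails with an "invalid magic number" style error,
# while a .gguf file should load cleanly.
llm = Llama(model_path="models/codellama-7b-instruct.Q4_K_M.gguf", n_ctx=2048)

# Run a tiny completion to confirm the model responds.
out = llm("def hello_world():", max_tokens=32, temperature=0.2)
print(out["choices"][0]["text"])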
@star7810 Did you have any luck here, or is there something else I can do to get you un-stuck? If this particular path of setting up an open-source model doesn't work I can share a number of other options
Here is some new documentation describing the steps you should take to run Continue without internet: https://continue.dev/docs/walkthroughs/running-continue-without-internet
@sestinj
Is Meilisearch essential for Continue to function? I noticed that in an air-gapped environment the Continue server tends to suffer from a long startup time, because its attempts to download Meilisearch are blocked and it waits for the network connection to time out. It also doesn't help that the Continue server appears to delete Meilisearch, if it already exists, before re-downloading it.
Related to this, is the Continue server designed to serve multiple developers? I think the setup experience would be less painful if that were the case. In an air-gapped environment, we could then consider containerising the entire Continue server setup and using it to serve multiple developers.
Hi @chewbh, the server is in fact meant for multiple developers. There are a few examples of people already doing this, and our team has personally been hosting a shared server on GCP Cloud Run. If this is something you're interested in doing let me know and I can share some resources on how to do it
This Meilisearch problem is still something I'd like to get to the bottom of though - could you share what version of Continue you are using (specifically the server if you are running it manually). Just yesterday I made an update that downloads Meilisearch in parallel and fails gracefully, so that it won't block the starting of the server - so there's a chance that just upgrading would solve your problem
And to answer your first question, no. Meilisearch is what allows for the dropdown menu where you can search through files in your workspace (by typing '@ file'), but all other functionality can work without it
Hi @sestinj, thanks for the prompt response. Yes, it would be very helpful if you could share some resources on hosting a shared server.
For Meilisearch, we are using a server built from continuedev source dating from mid-August. Let us try the current build and see if the issue is still a concern.
I'm realizing that most of what I'm going to share with you was available in the link above, but here is the best way of going about running the server.
There are a few options that you can use, including:
- --host: for example 0.0.0.0 if you want to expose the URL; 127.0.0.1 by default
- --port: 65432 by default
- --meilisearch-url: if you want to set up Meilisearch on your own, you can pass something like http://127.0.0.1:7700 here
(so you would run with python3 -m continuedev --port 1234, for example)
@chewbh @star7810 Wanted to check in on this issue. Since the last comment we've made updates to the server protocol that should help with connection reliability and I've also tested and had success myself over LAN.
Let me know if you're still struggling to get Continue set up, and I'm happy to help! I'll try to give this issue about another week before closing it if I don't hear back.
@sestinj Thanks for checking in! I am able to get the base functionality working in an air-gapped environment. I have yet to try it with the built-in context providers, but I am interested in using the codesearch and file tree providers in our environment as well. Are there any caveats I need to be aware of?
@chewbh the @search and @filetree context providers for the moment depend on having the Continue server on the same machine as the code you are editing, but shouldn't be limited by the offline scenario
@kohlivarun5 @chewbh @star7810 checking in on this issue just one more time because we've made some pretty significant and relevant changes. As of the newest VS Code pre-release, Continue no longer requires the separate Python server at all: it works as just the extension. This means that whatever connection problems were going on are pretty much not possible anymore.
Of course that doesn't mean bugs in general aren't possible, but I think it will be a better experience. If any of you get the chance to try it, let me know if you run into any problems. Otherwise I'll keep this issue open for another few days and then close it out, preferring new issues for new problems.
@sestinj I am trying out the new pre-release VS Code extension (0.7.54) but ran into new connection issues. I am running it in the environment below:
- Web IDE (I tried both Codespaces and coder/code-server)
- Proxy Server URL set to a valid URL that resolves to the Express server forked by the VS Code extension (e.g. https://cuddly-rotary-phone-xxqr547547f679-65433.app.github.dev/, which points to https://localhost:65433), so that it works around the mixed https/http content issue.
When I run any query, I still hit a CORS issue, with an HTTP 401 error on the preflight request.
Working on a change that will make the proxy irrelevant and thus fix this, I'll ping you once it's ready
@chewbh Ready now in version 0.7.55; I set up a GitPod workspace myself to make sure it works. There is no need to set the proxy URL, because we are instead making requests through the built-in VS Code message passing.
The one possible scenario I haven't yet verified is if you're trying to run a local model that is on your laptop rather than inside of the GitPod workspace. But OpenAI, other APIs, anything not localhost definitely seemed to work
@sestinj Thanks for the great work! It now works in my environment.
Separately, for supporting groups or multiple users, is there a plan to look at having a shared or default config.json and config.ts? We used a shared Continue server previously, and it was great to abstract away the configuration needed to hook up our internal LLM setup.
Also, with the new architecture and the switch to TypeScript, is it still possible to add our own custom context providers? We have a knowledge base in Confluence and an API for querying dev docs. I am interested in looking at whether we could build context providers internally to enable RAG over them in Continue.
@chewbh Yes, we 100% will add this, but it will be temporarily missing until we finalize what a team server might look like. One idea that may or may not fit your requirements, to somewhat avoid copying config files around for now: in config.ts, you could make an HTTP request to retrieve the configuration from a very simple server of your own. This way you could update the config remotely for everyone.
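On the serving side, that "very simple server of your own" could be as small as a static file server. Here is a minimal sketch using only the Python standard library (the directory, file name, and port are arbitrary placeholders; config.ts would then fetch something like http://<host>:8090/shared-config.json):

import functools
import http.server

# Serve a directory containing the shared configuration file over plain HTTP.
handler = functools.partial(
    http.server.SimpleHTTPRequestHandler, directory="/srv/continue-config"
)
http.server.HTTPServer(("0.0.0.0", 8090), handler).serve_forever()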
Custom context providers are still possible, and probably easier now. Here is the updated documentation.
If you'd like any help transferring config over to config.ts, let me know! And I'd be curious to hear more about this RAG context provider if you look deeper into it: one thing we hope to revamp soon is the interaction pattern around context providers to make them more flexible.