
Document LMStudio usage

Open Niche-Apps opened this issue 10 months ago • 62 comments

Describe the bug
Trouble connecting to LMStudio

Steps to Reproduce
1. Start server on LMStudio
2. Start frontend and backend on OpenDevin
3.

Expected behavior
OpenDevin asks what I want it to build

Actual behavior

Additional context

OpenDevin does nothing and I get this error in LMStudio:

[2024-03-30 19:21:30.994] [ERROR] Unexpected endpoint or method. (GET /litellm-models). Returning 200 anyway

Niche-Apps avatar Mar 31 '24 00:03 Niche-Apps

/litellm-models is not how you call the models; you need to set the URL as LM Studio exposes it. Check its documentation for how to call the model you want to use.

I'm not familiar with LM Studio, FWIW some of its setup has been discussed in this issue. The last few comments seem to point to a solution.

enyst avatar Mar 31 '24 02:03 enyst

@Niche-Apps This is only somewhat related but FYI: I am running Mistral 7B locally using jan.ai and this is what I use in my config.toml. Notice the openai/ prefix. It is required as per the LiteLLM docs for OpenAI compatible endpoints.

LLM_BASE_URL="http://localhost:1337/v1"
LLM_API_KEY="EMPTY"
LLM_MODEL="openai/mistral-ins-7b-q4"

hchris1 avatar Mar 31 '24 14:03 hchris1

@Niche-Apps something is misconfigured. The frontend is reaching out to your LMStudio server on port 3000, and not reaching the backend (which is expected to be running on 3000).

Did the backend start successfully on 3000?

rbren avatar Mar 31 '24 15:03 rbren

I'd like to understand how to use LMStudio as well.

ajeema avatar Mar 31 '24 18:03 ajeema

Try the following settings for LM Studio:

LLM_API_KEY="lm-studio"
LLM_MODEL="openai/mistral"   # leave openai/ as is... you can change mistral to the local model you use
LLM_BASE_URL="http://localhost:1234/v1"
LLM_EMBEDDING_MODEL="local"

mikeaper323 avatar Mar 31 '24 19:03 mikeaper323

I got the same problem with both Ollama and LM Studio.

For LM Studio I tried:

LLM_BASE_URL="http://localhost:1234/v1"
LLM_MODEL="openai/TheBloke/dolphin-2.5-mixtral-8x7b-GGUF/dolphin-2.5-mixtral-8x7b.Q2_K.gguf"
LLM_EMBEDDING_MODEL="local"
WORKSPACE_DIR="./workspace"

For Ollama:

LLM_BASE_URL="localhost:11434"
LLM_MODEL="openai/mistral"
LLM_EMBEDDING_MODEL="local"
WORKSPACE_DIR="./workspace"

logs.txt

does anyone know a solution please?

stratte89 avatar Mar 31 '24 23:03 stratte89

Try this setting for LM studio:

LLM_API_KEY="lm-studio" LLM_BASE_URL="http://localhost:1234/v1" LLM_MODEL="openai/dolphin-2.5-mixtral-8x7b-GGUF/dolphin-2.5-mixtral-8x7b.Q2_K.gguf" LLM_EMBEDDING_MODEL="local" WORKSPACE_DIR="./workspace"

mikeaper323 avatar Apr 01 '24 00:04 mikeaper323

Hey, many thanks for your quick response. I just tried it and got this error. Does it matter what model I choose in Devin, given that there is no dolphin-2.5-mixtral-8x7b-GGUF/dolphin-2.5-mixtral-8x7b.Q2_K.gguf?

Oops. Something went wrong:

python3.11/site-packages/openai/_base_client.py", line 960, in _request
    raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.

[2024-03-31 23:53:27.433] [INFO] [LM STUDIO SERVER] Stopping server..
[2024-03-31 23:53:27.445] [INFO] [LM STUDIO SERVER] Server stopped
[2024-03-31 23:53:29.041] [INFO] [LM STUDIO SERVER] Verbose server logs are ENABLED
[2024-03-31 23:53:29.041] [INFO] [LM STUDIO SERVER] Heads up: you've enabled CORS. Make sure you understand the implications
[2024-03-31 23:53:29.072] [INFO] [LM STUDIO SERVER] Success! HTTP server listening on port 1234
[2024-03-31 23:53:29.072] [INFO] [LM STUDIO SERVER] Supported endpoints:
[2024-03-31 23:53:29.073] [INFO] [LM STUDIO SERVER] -> GET  http://localhost:1234/v1/models
[2024-03-31 23:53:29.074] [INFO] [LM STUDIO SERVER] -> POST http://localhost:1234/v1/chat/completions
[2024-03-31 23:53:29.075] [INFO] [LM STUDIO SERVER] -> POST http://localhost:1234/v1/completions
[2024-03-31 23:53:29.075] [INFO] [LM STUDIO SERVER] Model loaded: TheBloke/dolphin-2.5-mixtral-8x7b-GGUF/dolphin-2.5-mixtral-8x7b.Q2_K.gguf
[2024-03-31 23:53:29.076] [INFO] [LM STUDIO SERVER] Logs are saved into C:\tmp\lmstudio-server-log.txt

stratte89 avatar Apr 01 '24 00:04 stratte89

LLM_API_KEY="lm-studio" LLM_BASE_URL="http://localhost:1234/v1" LLM_MODEL="openai/dolphin-2.5-mixtral-8x7b-GGUF" LLM_EMBEDDING_MODEL="local" WORKSPACE_DIR="./workspace

mikeaper323 avatar Apr 01 '24 00:04 mikeaper323

unfortunately still the same error :c

stratte89 avatar Apr 01 '24 00:04 stratte89

@stratte89 Are you using wsl in Windows, and running LMStudio in Windows? If yes: https://github.com/OpenDevin/OpenDevin/issues/435#issuecomment-2028907533

jay-c88 avatar Apr 01 '24 00:04 jay-c88

@stratte89 Are you using wsl in Windows, and running LMStudio in Windows? If yes: #435 (comment)

Awesome! Many thanks, I will try it 👍

stratte89 avatar Apr 01 '24 00:04 stratte89

Yes. WSL on Windows, LM Studio on Windows, Conda PowerShell env. I followed all the project instructions.

mikeaper323 avatar Apr 01 '24 00:04 mikeaper323

Yes. WSL on Windows, LM Studio on Windows, Conda PowerShell env. I followed all the project instructions.

Oh, I set Devin up in WSL Ubuntu on Windows, with no Conda PowerShell env. I guess I have to reinstall everything then?

stratte89 avatar Apr 01 '24 00:04 stratte89

You don't 'need' a conda environment (except that you are just littering your base WSL environment and might cause dependency issues for other projects ^^). If you installed it in your base WSL environment, it should still run fine.

jay-c88 avatar Apr 01 '24 01:04 jay-c88

You don't 'need' a conda environment (except that you are just littering your base WSL environment and might cause dependency issues for other projects ^^). If you installed it in your base WSL environment, it should still run fine.

I created a new file like this and restarted the computer, then I reinstalled Devin in a conda env: "Open the WSL config file C:\Users\%username%\.wslconfig (create one if it doesn't exist), and add this:

[wsl2]
networkingMode=mirrored"

I use

LLM_API_KEY="lmstudio"
LLM_BASE_URL="http://localhost:1234/v1"
LLM_MODEL="openai/dolphin-2.5-mixtral-8x7b-GGUF"
LLM_EMBEDDING_MODEL="local"
WORKSPACE_DIR="./workspace"

but I still get this: APIConnectionError(request=request) from err openai.APIConnectionError: Connection error.

The only thing I didn't do was "Then restart wsl completely (exit docker and run wsl --shutdown), then restart everything." because I didn't know how; that's why I restarted the PC and reinstalled Devin.

stratte89 avatar Apr 01 '24 01:04 stratte89

Make sure the API key is LLM_API_KEY="lm-studio", not LLM_API_KEY="lmstudio". And make sure you haven't changed the port in LM Studio to another port. The only other thing I can think of is trying another model.

mikeaper323 avatar Apr 01 '24 02:04 mikeaper323

Oh and maybe run prompt with administrator privileges, but I don't think that would matter

mikeaper323 avatar Apr 01 '24 02:04 mikeaper323

but I still get this: APIConnectionError(request=request) from err openai.APIConnectionError: Connection error.

Seems that your opendevin is still not able to find/connect to your LMStudio server.

I created a new file like this and restarted the computer, then I reinstalled Devin in a conda env: "Open the WSL config file C:\Users\%username%\.wslconfig (create one if it doesn't exist), and add this:

Make sure the .wslconfig file is actually in your Windows user profile folder; it could be a different location for you. Type %UserProfile% in the Explorer address bar to confirm the file is inside.
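One way to rule networking in or out is to hit LM Studio's /v1/models endpoint from inside the same WSL shell that runs OpenDevin. A minimal sketch using only the Python standard library, assuming LM Studio is on its default port 1234:

import json
import urllib.request

# GET /v1/models is one of the endpoints LM Studio's server log lists as supported.
# If this fails from inside WSL, the APIConnectionError is a networking problem
# (e.g. WSL can't see the Windows host's localhost), not a config.toml problem.
url = "http://localhost:1234/v1/models"
try:
    with urllib.request.urlopen(url, timeout=5) as resp:
        print(json.dumps(json.load(resp), indent=2))
except OSError as err:
    print(f"Cannot reach {url}: {err}")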

jay-c88 avatar Apr 01 '24 02:04 jay-c88

Oh and maybe run prompt with administrator privileges, but I don't think that would matter

I changed the config.toml and used admin rights already, but still no luck. I tried both of these:

uvicorn opendevin.server.listen:app --port 3000
npm start

or

uvicorn opendevin.server.listen:app --port 3000 --host 0.0.0.0
npm run start -- --host

Both give the same result. Do I need to change a different wslconfig for the Ubuntu terminal?

[screenshot]

BTW, GPT (OpenAI API key) is working fine.

stratte89 avatar Apr 01 '24 02:04 stratte89

Yeah there is absolutely nothing arriving at LMStudio if you look at lmstudio-server-log.txt.

Both give the same result. Do I need to change a different wslconfig for the Ubuntu terminal?

The network mirror configuration has to be the global config file %UserProfile%\.wslconfig for WSL to be able to access the host's localhost.

jay-c88 avatar Apr 01 '24 02:04 jay-c88

Yeah there is absolutely nothing arriving at LMStudio if you look at lmstudio-server-log.txt.

Both give the same result. Do I need to change a different wslconfig for the Ubuntu terminal?

The network mirror configuration has to be the global config file %UserProfile%\.wslconfig for WSL to be able to access the host's localhost.

Well, I did that: I created a new .wslconfig file and pasted the code inside.

[screenshot]

Update: I installed Ubuntu on VirtualBox and installed Devin, and I set it up with a bridged network. Now I get a connection, but still an error.

(base) stratte@stratte-VirtualBox:~/Desktop$ curl http://192.168.178.20:1234
{"error":"Unexpected endpoint or method. (GET /)"}
(base) stratte@stratte-VirtualBox:~/Desktop$ telnet 192.168.178.20 1234
Trying 192.168.178.20...
Connected to 192.168.178.20.
Escape character is '^]'.
HTTP/1.1 408 Request Timeout
Connection: close
Connection closed by foreign host.

Oops. Something went wrong: OpenAIException - Error code: 400 - {'error': '<LM Studio error> Unknown exception during inferencing.. Error Data: n/a, Additional Data: n/a'}

File "/home/stratte/.local/share/virtualenvs/OpenDevin-main-2ejNtS9k/lib/python3.11/site-packages/litellm/llms/openai.py", line 382, in completion

raise OpenAIError(status_code=e.status_code, message=str(e))

litellm.llms.openai.OpenAIError: Error code: 400 - {'error': '<LM Studio error> Unknown exception during inferencing.. Error Data: n/a, Additional Data: n/a'}

LLM_API_KEY="lm-studio" LLM_BASE_URL="http://192.168.178.20:1234/v1" LLM_MODEL="openai/dolphin-2.5-mixtral-8x7b-GGUF" LLM_EMBEDDING_MODEL="local" WORKSPACE_DIR="./workspace"

When I use

LLM_API_KEY="lm-studio"
LLM_BASE_URL="http://192.168.178.20:1234"   # without /v1
LLM_MODEL="openai/dolphin-2.5-mixtral-8x7b-GGUF"
LLM_EMBEDDING_MODEL="local"
WORKSPACE_DIR="./workspace"

then I get this error:

raise Exception(f"Invalid response object {traceback.format_exc()}")

Exception: Invalid response object Traceback (most recent call last):

  File "/home/stratte/.local/share/virtualenvs/OpenDevin-main-2ejNtS9k/lib/python3.11/site-packages/litellm/utils.py", line 6585, in convert_to_model_response_object

    for idx, choice in enumerate(response_object["choices"]):

                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

TypeError: 'NoneType' object is not iterable

LM Studio showed this error

Unexpected endpoint or method. (POST /chat/completions). Returning 200 anyway
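That last error is consistent with how OpenAI-compatible clients build the request path: they append /chat/completions to whatever base URL they are given, so the /v1 has to be part of LLM_BASE_URL. A rough sketch with the openai Python client, reusing the host IP and model name from the configs above (the model name drops the openai/ prefix, since that prefix is only LiteLLM's provider hint and is not sent to LM Studio):

from openai import OpenAI

# Without /v1 the request goes to http://192.168.178.20:1234/chat/completions,
# which is exactly the "Unexpected endpoint or method. (POST /chat/completions)"
# that LM Studio logged above.
# client = OpenAI(base_url="http://192.168.178.20:1234", api_key="lm-studio")

# With /v1 the request goes to http://192.168.178.20:1234/v1/chat/completions,
# the endpoint LM Studio actually serves.
client = OpenAI(base_url="http://192.168.178.20:1234/v1", api_key="lm-studio")
reply = client.chat.completions.create(
    model="dolphin-2.5-mixtral-8x7b-GGUF",
    messages=[{"role": "user", "content": "ping"}],
)
print(reply.choices[0].message.content)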

stratte89 avatar Apr 01 '24 03:04 stratte89

I'm sorry you're still having issues getting LM Studio to connect. I'll try to be as specific as possible about what worked for me. This is what I did; you don't have to do exactly what I did, I'm guessing there are other ways around it:

This is for windows:

step 1:

Download the latest version of Anaconda and install it with default settings. (I added conda to my Windows environment PATH and chose to use it as my default Python; however, you shouldn't do this if conda won't be your go-to prompt or PowerShell.)

step 2:

Download the latest version of Docker and install it with default settings. I have Docker autostart when Windows starts up; you don't have to do this, but if you don't, make sure you manually start Docker.

step 3:

Download and install the latest version of Node.js.

step 4:

Open Windows PowerShell as admin and run: wsl --install
Restart the computer.
Open Windows PowerShell as admin and run: wsl -l -v
Make sure WSL 2 is being used. If WSL 2 is not being used, run in the same PowerShell: wsl --set-default-version 2
Restart the computer and try the command again: wsl -l -v
If for some reason at this point you are unable to run WSL, download a distro from the Microsoft Store, like Ubuntu (that's the one I use). At this point you should have a Linux with WSL on Windows.

step 5:

Open the Anaconda PowerShell as admin (you can find it with the search icon in Windows).
Run: conda create -n devin python=3.11 (this creates a conda environment called devin, with Python 3.11)
Run: conda activate devin (this activates the devin environment)
Next, cd into the directory where you wish to install Devin.
Run: git clone https://github.com/OpenDevin/OpenDevin.git (I'm sure you already have this, but make sure it's the latest version)
Next, run: docker pull ghcr.io/opendevin/sandbox (this pulls the OpenDevin sandbox image for Docker)
cd into the OpenDevin folder and from this point follow the OpenDevin instructions:

Then copy config.toml.template to config.toml. Add an OpenAI API key to config.toml, or see below for how to use different models.

LLM_API_KEY="sk-..." Next, start the backend:

python -m pip install pipenv
python -m pipenv install -v
python -m pipenv shell

step 6: deploy your desired model from LM Studio.

step 7: edit config.toml as follows:

LLM_API_KEY="insert claude api key here if you like to use claude and uncomment"

LLM_MODEL="claude-3-haiku-20240307"

LLM_API_KEY="insert openAI key here if you like to use openAI and uncomment"

LLM_MODEL="gpt-3.5-turbo"

This section is for LM Studio only; it's already uncommented:

LLM_API_KEY="lm-studio" LLM_MODEL="openai/deepseek-coder-6.7B-instruct-GGUF" #you can change this to any model you like, just keep the openai/ LLM_BASE_URL="http://localhost:1234/v1" LLM_EMBEDDING_MODEL="local" # can be "llama2", "openai", "azureopenai", or "local" WORKSPACE_DIR="./workspace"

step 8: go back to the conda PowerShell with both environments active and run:

uvicorn opendevin.server.listen:app --port 3000

step 9: continue the OpenDevin instructions (no need to have the environments active in this section, since this uses Node.js). In a second terminal, start the frontend:

cd frontend
npm install
npm start

............................................................

By following those steps, it should work. If at this point it doesn't, I wouldn't know how to help you; maybe fresh-install everything. Keep in mind that OpenDevin is still a new project, so most of these local models don't work well. GOOD LUCK

mikeaper323 avatar Apr 01 '24 06:04 mikeaper323

Thank you for all the details. I will reinstall it using the Anaconda terminal; I was using the Ubuntu terminal, though I don't know if that makes a big difference. I did all the steps you said, though. And since it's working with an OpenAI API key, I doubt that it's the installation; it's more about the connection between LM Studio and Devin.

I installed Ubuntu in a virtual machine using a network bridge and my localip:1234/v1, and in the terminal I get a connection between the VM and the LM Studio API. I use:

uvicorn opendevin.server.listen:app --port 3000 --host 0.0.0.0
npm run start -- --host

(base) stratte@stratte-VirtualBox:~/Desktop$ curl -v http://192.168.178.20:1234
*   Trying 192.168.178.20:1234...
* Connected to 192.168.178.20 (192.168.178.20) port 1234 (#0)
> GET / HTTP/1.1
> Host: 192.168.178.20:1234
> User-Agent: curl/7.81.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Access-Control-Allow-Origin: *
< Access-Control-Allow-Headers: *
< Content-Type: application/json; charset=utf-8
< Content-Length: 50
< ETag: W/"32-e6rgb5BPJ+PUVQosDi3B/Ob1epE"
< Date: Mon, 01 Apr 2024 07:19:03 GMT
< Connection: keep-alive
< Keep-Alive: timeout=5
<
* Connection #0 to host 192.168.178.20 left intact
{"error":"Unexpected endpoint or method. (GET /)"}
(base) stratte@stratte-VirtualBox:~/Desktop$

EDIT: I fixed it, well, we did! For everybody who faces a similar problem, try this config; it works for me now. In my case I am using Windows 10 + Oracle VirtualBox Ubuntu: Devin runs in Ubuntu and LM Studio on Windows.

LLM_API_KEY="na" LLM_BASE_URL="actual local ip of your host pc:1234/v1" #check ipconfig in a cmd LLM_MODEL="openai/deepseek-coder-6.7B-instruct-GGUF" LLM_EMBEDDING_MODEL="local" WORKSPACE_DIR="./workspace"

EDIT 2: I managed to make it work on Windows as well now by using this:

LLM_API_KEY="lm-studio" LLM_BASE_URL="http://192.168.178.20:1234/v1" #local ip LLM_MODEL="openai/deepseek-coder-6.7B-instruct-GGUF" LLM_EMBEDDING_MODEL="local" WORKSPACE_DIR="./workspace"

EDIT: Never mind... I mean, I am connected to LM Studio somehow, but now I get this error:

litellm.exceptions.APIError: OpenAIException - Error code: 400 - {'error': '<LM Studio error> Unknown exception during inferencing.. Error Data: n/a, Additional Data: n/a'}

    OBSERVATION:
    OpenAIException - Error code: 400 - {'error': '<LM Studio error> Unknown exception during inferencing.. Error Data: n/a, Additional Data: n/a'}
    

LM-Studio:

[2024-04-01 11:31:18.999] [INFO] [LM STUDIO SERVER] Last message: { role: 'user', content: ' You're a thoughtful robot. Your main task is to testapp. Don't expand the scope of your task--just ... (truncated in these logs)' } (total messages = 1)
[2024-04-01 11:31:19.019] [ERROR] Unknown exception during inferencing.. Error Data: n/a, Additional Data: n/a
[2024-04-01 11:31:19.110] [INFO] [LM STUDIO SERVER] Processing queued request...
[2024-04-01 11:31:19.111] [INFO] Received POST request to /v1/chat/completions with body: { "messages": [
...and only with JSON.\n\n\n", "role": "user" } ], "model": "Deepseek-Coder-6.7B-Instruct-GGUF" }
[2024-04-01 11:31:19.118] [INFO] [LM STUDIO SERVER] Last message: { role: 'user', content: ' You're a thoughtful robot. Your main task is to testapp. Don't expand the scope of your task--just ... (truncated in these logs)' } (total messages = 1)

LLM_API_KEY="lm-studio" LLM_BASE_URL="http://192.168.178.20:1234/v1" LLM_MODEL="openai/Deepseek-Coder-6.7B-Instruct-GGUF" LLM_EMBEDDING_MODEL="local" WORKSPACE_DIR="./workspace"

stratte89 avatar Apr 01 '24 07:04 stratte89

I just discovered my filter stopped all the replies. I do seem to have a problem with the backend. I’m on Mac and got this:

opendevin % uvicorn opendevin.server.listen:app --port 3000
Traceback (most recent call last):
  File "/Users/josephsee/anaconda3/bin/uvicorn", line 8, in <module>
    sys.exit(main())
  File "/Users/josephsee/anaconda3/lib/python3.11/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/Users/josephsee/anaconda3/lib/python3.11/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/Users/josephsee/anaconda3/lib/python3.11/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/Users/josephsee/anaconda3/lib/python3.11/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/Users/josephsee/anaconda3/lib/python3.11/site-packages/uvicorn/main.py", line 418, in main
    run(
  File "/Users/josephsee/anaconda3/lib/python3.11/site-packages/uvicorn/main.py", line 587, in run
    server.run()
  File "/Users/josephsee/anaconda3/lib/python3.11/site-packages/uvicorn/server.py", line 62, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/Users/josephsee/anaconda3/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
  File "/Users/josephsee/anaconda3/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
  File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete
  File "/Users/josephsee/anaconda3/lib/python3.11/site-packages/uvicorn/server.py", line 69, in serve
    config.load()
  File "/Users/josephsee/anaconda3/lib/python3.11/site-packages/uvicorn/config.py", line 458, in load
    self.loaded_app = import_from_string(self.app)
  File "/Users/josephsee/anaconda3/lib/python3.11/site-packages/uvicorn/importer.py", line 24, in import_from_string
    raise exc from None
  File "/Users/josephsee/anaconda3/lib/python3.11/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "/Users/josephsee/anaconda3/lib/python3.11/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/Users/josephsee/OpenDevin/opendevin/server/listen.py", line 1, in <module>
    from opendevin.server.session import Session
  File "/Users/josephsee/OpenDevin/opendevin/server/session.py", line 14, in <module>
    from opendevin.controller import AgentController
  File "/Users/josephsee/OpenDevin/opendevin/controller/__init__.py", line 1, in <module>
    from .agent_controller import AgentController
  File "/Users/josephsee/OpenDevin/opendevin/controller/agent_controller.py", line 22, in <module>
    from .command_manager import CommandManager
  File "/Users/josephsee/OpenDevin/opendevin/controller/command_manager.py", line 4, in <module>
    from opendevin.sandbox.sandbox import DockerInteractive
  File "/Users/josephsee/OpenDevin/opendevin/sandbox/sandbox.py", line 10, in <module>
    import docker
ModuleNotFoundError: No module named 'docker'


Niche-Apps avatar Apr 01 '24 13:04 Niche-Apps

@Niche-Apps looks like you need to redo the pipenv setup--that should install docker

rbren avatar Apr 01 '24 14:04 rbren

Now I get this in the backend. Is there a setting I need to change in the config file?

Retrying llama_index.embeddings.openai.base.get_embeddings in 0.8089781787046612 seconds as it raised APIConnectionError: Connection error..


Niche-Apps avatar Apr 01 '24 18:04 Niche-Apps

Here is what I have, but I thought the critical part was the LLM_BASE_URL:

LLM_BASE_URL="https://localhost:1234/v1"

LLM_EMBEDDING_MODEL="local" # can be "llama2", "openai", "azureopenai", or "local"

custom_llm_provider="openai"

WORKSPACE_DIR="./workspace"

LLM_MODEL="openai-GPT-4"

LLM_API_KEY="your-api-key"


Niche-Apps avatar Apr 01 '24 18:04 Niche-Apps

Ok it doesn’t like the llm provider setting. I got about 20 of this.

Oops. Something went wrong: Error condensing thoughts: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=openai-GPT-4 Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers
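That error comes from how LiteLLM picks a provider: it needs either a model name it already knows or a "provider/" prefix before a slash, so openai-GPT-4 (with a hyphen) carries no provider at all, while the openai/ prefix used earlier in this thread does. A minimal sketch of the difference, assuming litellm is installed and LM Studio is serving on localhost:1234 (the model name is just the one used earlier in the thread):

import litellm

# model="openai-GPT-4" has no "provider/" prefix, so LiteLLM raises
# "LLM Provider NOT provided" before any request is even sent.

# With an "openai/<model>" string plus api_base, LiteLLM treats the server as an
# OpenAI-compatible endpoint (here: LM Studio) and sends a normal chat completion.
response = litellm.completion(
    model="openai/deepseek-coder-6.7B-instruct-GGUF",
    api_base="http://localhost:1234/v1",
    api_key="lm-studio",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)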


Niche-Apps avatar Apr 01 '24 18:04 Niche-Apps

I changed my settings to this and still got the same message.

LLM_BASE_URL="https://localhost:1234/v1"

LLM_EMBEDDING_MODEL="local" # can be "llama2", "openai", "azureopenai", or "local"

custom_llm_provider="completion(model='bartowski/Starling-LM-7B-beta-GGUF/Starling-LM-7B-beta-Q6_K.gguf',)"

WORKSPACE_DIR="./workspace"

LLM_MODEL="bartowski/Starling-LM-7B-beta-GGUF/Starling-LM-7B-beta-Q6_K.gguf"

LLM_API_KEY="your-api-key"


Niche-Apps avatar Apr 01 '24 18:04 Niche-Apps