
Goose fails to send request to LM Studio despite correct http://127.0.0.1:1234/v1 configuration

Open phsulin2404 opened this issue 3 months ago • 9 comments

Describe the bug

Goose shows the following error message when attempting to chat with a model running locally via LM Studio:

Request failed: error sending request for url (http://localhost/v1)

However, LM Studio is running correctly, the server is active at http://127.0.0.1:1234, and a request to /v1/models returns the expected model list.

Goose seems to be ignoring or overriding the correct custom provider base URL (http://127.0.0.1:1234/v1) and reverting to http://localhost/v1, which causes the request to fail.
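Purely to illustrate the suspected failure mode (this is not Goose's actual code, and request_url is an invented helper), a hard-coded fallback that shadows the configured value reproduces the reported URL exactly:

fn request_url(configured: Option<&str>) -> String {
    // If the configured base URL is dropped somewhere upstream, the
    // fallback wins and the custom host/port never appear in the request.
    let base = configured.unwrap_or("http://localhost");
    format!("{base}/v1")
}

fn main() {
    assert_eq!(request_url(None), "http://localhost/v1"); // the reported URL
    assert_eq!(
        request_url(Some("http://127.0.0.1:1234")),
        "http://127.0.0.1:1234/v1" // the expected URL
    );
}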


To Reproduce

  1. Start LM Studio, load a model (e.g. tool-star-qwen-7b-i1), and ensure the server is running at http://127.0.0.1:1234.

  2. In Goose, go to Settings > Providers > LM Studio and configure:

    • API Key: <any placeholder>
    • Custom Provider Base URL: http://127.0.0.1:1234/v1

  3. Open Goose > Chat.

  4. Select the LM Studio model.

  5. Type and send a prompt like hi.

  6. The error occurs: Request failed: error sending request for url (http://localhost/v1)


Expected behavior

Goose should respect the explicitly configured CUSTOM_PROVIDER_BASE_URL (http://127.0.0.1:1234/v1) and send requests to it, not override it with http://localhost/v1.

Please provide the following information:

  • OS & Arch: Windows 11 x64
  • Interface: UI
  • Version: Goose Chat v1.0.2 (latest as of Aug 25, 2025)
  • Extensions enabled: None
  • Provider & Model: LM Studio - tool-star-qwen-7b-i1

Additional context

The LM Studio local server is confirmed reachable via both http://127.0.0.1:1234/v1/models and http://localhost:1234/v1/models, in a browser and via cURL.
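For reference, a minimal Rust equivalent of that check (assuming reqwest with the blocking feature; this is not part of Goose):

// Cargo.toml: reqwest = { version = "0.12", features = ["blocking"] }
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let body = reqwest::blocking::get("http://127.0.0.1:1234/v1/models")?.text()?;
    println!("{body}"); // expect a JSON object listing the loaded models
    Ok(())
}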

It appears Goose internally rewrites the base URL, or falls back to a default of http://localhost/v1, ignoring the custom port 1234.

Please confirm if this is a bug in Goose’s request routing or if there’s an internal fallback mechanism overriding the custom URL.

phsulin2404 avatar Aug 25 '25 15:08 phsulin2404

Should be fixed by https://github.com/block/goose/pull/4312

vlascik avatar Aug 25 '25 18:08 vlascik

thanks!

blakkd avatar Aug 25 '25 19:08 blakkd

@vlascik #4312 fixed the port issue, but crates/goose/src/providers/openai.rs still forces the use of OPEN_AI_DEFAULT_FAST_MODEL. If the local LLM does not serve "gpt-4o-mini", the session fails with an error:

Context: ○○○○○○○○○○ 0% (0/128000 tokens)
( O)> hello
  2025-08-26T15:54:52.211210Z  WARN goose::providers::utils: Provider request failed with status: 404 Not Found. Payload: Some(Object {"object": String("error"), "message": String("The model `gpt-4o-mini` does not exist."), "type": String("NotFoundError"), "param": Null, "code": Number(404)}). Returning error: RequestFailed("Request failed with status: 404 Not Found")
    at crates/goose/src/providers/utils.rs:114

  2025-08-26T15:54:52.211299Z ERROR goose::session::storage: Failed to generate session description: Request failed: Request failed with status: 404 Not Found
    at crates/goose/src/session/storage.rs:1304

Session ended with error: Failed to generate session description

I patched it as below; I'm not sure whether it's the right fix.

--- a/crates/goose/src/providers/openai.rs
+++ b/crates/goose/src/providers/openai.rs
@@ -60,13 +60,17 @@ impl_provider_default!(OpenAiProvider);

 impl OpenAiProvider {
     pub fn from_env(model: ModelConfig) -> Result<Self> {
-        let model = model.with_fast(OPEN_AI_DEFAULT_FAST_MODEL.to_string());
-
         let config = crate::config::Config::global();
         let api_key: String = config.get_secret("OPENAI_API_KEY")?;
         let host: String = config
             .get_param("OPENAI_HOST")
             .unwrap_or_else(|_| "https://api.openai.com".to_string());
+
+        let model = if host == "https://api.openai.com" {
+            model.with_fast(OPEN_AI_DEFAULT_FAST_MODEL.to_string())
+        } else {
+            model
+        };

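An alternative to keying the fast model off the hostname might be to make it configurable; a sketch in the same style as the surrounding code (OPENAI_FAST_MODEL is an invented key here, not an existing Goose setting):

// Hypothetical: fall back to the OpenAI default only when no override is set.
let fast_model: String = config
    .get_param("OPENAI_FAST_MODEL")
    .unwrap_or_else(|_| OPEN_AI_DEFAULT_FAST_MODEL.to_string());
let model = model.with_fast(fast_model);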
shawn111 avatar Aug 26 '25 15:08 shawn111

are you using a custom provider or overriding the settings for the general openai provider?

DOsinga avatar Aug 26 '25 16:08 DOsinga

are you using a custom provider or overriding the settings for the general openai provider?

@DOsinga Yes. I first tried using Ollama as the Goose provider for my local AI, but it doesn't support an api_key; only the OpenAI provider does. And since goose-cli configure describes OpenAI as "OpenAI (GPT-4 and other OpenAI models, including OpenAI compatible ones)", I thought I should use OpenAI as the provider. But it seems the OpenAI provider is currently designed for OpenAI only.

shawn111 avatar Aug 27 '25 00:08 shawn111

Hi, I’m running into the same problem. I’ve configured Goose with an OpenAI-compatible provider pointing to LM Studio at http://127.0.0.1:1234/v1. I set:

Display Name: LM Studio

API URL: http://127.0.0.1:1234/v1

API Key: no-key-required

Available Models: dolphin-2.9.4-llama3.1-8b

I'm using Chat Only mode. LM Studio logs show that Goose is still posting to /v1 instead of /v1/chat/completions, which results in 400 Bad Request responses.

I’ve confirmed LM Studio does support /v1/chat/completions and tested it successfully with a Python request including max_tokens and temperature. This indicates the endpoint is working, but Goose is not including the required parameters or hitting the correct path.

It looks like Goose doesn’t currently provide a way to inject required chat parameters (max_tokens, temperature) into requests. Is there a recommended way to configure these, or do I need to use a local proxy to modify the request?
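For comparison, here is a minimal Rust sketch of the request shape that worked against LM Studio's /v1/chat/completions endpoint; the model name and parameter values are placeholders, and it assumes reqwest and serde_json rather than anything in Goose:

// Cargo.toml: reqwest = { version = "0.12", features = ["blocking", "json"] }
//             serde_json = "1"
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Placeholder body mirroring the parameters mentioned above.
    let body = json!({
        "model": "dolphin-2.9.4-llama3.1-8b",
        "messages": [{ "role": "user", "content": "hi" }],
        "max_tokens": 256,
        "temperature": 0.7
    });
    let resp = reqwest::blocking::Client::new()
        .post("http://127.0.0.1:1234/v1/chat/completions")
        .json(&body)
        .send()?;
    println!("{}", resp.text()?);
    Ok(())
}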

RexC2I avatar Sep 25 '25 17:09 RexC2I

if it is chat only, we should be able to sort it out, but in the meantime maybe a proxy (you can use a custom provider, I think, to override). @RexC2I are you using the openai provider or a custom OpenAI-style provider (just so we can reproduce)?

michaelneale avatar Oct 13 '25 07:10 michaelneale

I am using a custom OpenAI provider, and for Lemonade I have to use the endpoint /api/v1/chat/completions. When Goose posts to the Lemonade server, it drops the /api prefix, as described above.

BabyDino avatar Oct 13 '25 07:10 BabyDino

so in general, providers often describe their API as living at /v1, but for goose you have to specify the actual chat endpoint, i.e. /v1/chat/completions
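To illustrate why the full path matters (assuming the url crate for the demonstration; whether goose joins URLs this way internally is not confirmed):

// Cargo.toml: url = "2". join() follows RFC 3986: without a trailing slash,
// the last path segment of the base is replaced rather than extended.
use url::Url;

fn main() {
    let base = Url::parse("http://127.0.0.1:1234/v1").unwrap();
    assert_eq!(
        base.join("chat/completions").unwrap().as_str(),
        "http://127.0.0.1:1234/chat/completions" // "/v1" is silently lost
    );

    let base = Url::parse("http://127.0.0.1:1234/v1/").unwrap();
    assert_eq!(
        base.join("chat/completions").unwrap().as_str(),
        "http://127.0.0.1:1234/v1/chat/completions"
    );
}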

DOsinga avatar Oct 31 '25 19:10 DOsinga

I'm going to close this for now. Use the actual API endpoint and this should work.

DOsinga avatar Nov 06 '25 18:11 DOsinga