Ollama branch name generation doesn't work
Version
Version 0.14.4 (20241213.093904)
Operating System
macOS
Distribution Method
dmg (Apple Silicon)
Describe the issue
Hitting "Generate branch name" with the Ollama backend configured (default settings – endpoint http://127.0.0.1:11434, model llama3) results in the error "Failed to generate branch name: The string did not match the expected pattern".
I do not really understand the network requests I'm seeing in devtools, but there's a request to plugin:http|fetch_send with a strange-looking response:
{
  "status": 403,
  "statusText": "Forbidden",
  "headers": [
    [
      "date",
      "Fri, 20 Dec 2024 11:31:26 GMT"
    ],
    [
      "content-length",
      "0"
    ]
  ],
  "url": "http://127.0.0.1:11434/api/chat",
  "rid": 3377205724
}
I've checked that Ollama is actually installed and running:
$ http POST http://127.0.0.1:11434/api/chat model=llama3
HTTP/1.1 200 OK
Content-Length: 137
Content-Type: application/json; charset=utf-8
Date: Fri, 20 Dec 2024 11:31:59 GMT
{
  "created_at": "2024-12-20T11:31:59.41741Z",
  "done": true,
  "done_reason": "load",
  "message": {
    "content": "",
    "role": "assistant"
  },
  "model": "llama3"
}
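For what it's worth, the call above sends no messages, so Ollama only loads the model (hence "done_reason": "load"). A fuller check that actually generates a reply, assuming the llama3 model is pulled, would be something like:

$ http POST http://127.0.0.1:11434/api/chat model=llama3 stream:=false \
    messages:='[{"role": "user", "content": "Suggest a git branch name for a README typo fix"}]'

That should also return 200 with a non-empty message if Ollama is healthy; either way, the server is clearly reachable.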
How to reproduce
Configure GitButler to use Ollama, and try to generate a branch name.
Expected behavior
Ollama should be able to generate a branch name, just as the OpenAI backend does.
Relevant log output
No response
Hi! I believe you might be running into an issue related to our app permissions. Tauri has strong sandboxing around what the frontend can access.
Could you try 0.0.0.0 or localhost? I believe we have those whitelisted.
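Roughly speaking, the allow-list is a URL scope in a Tauri v2 capability file; a sketch of the shape (illustrative only, not the exact contents of our capabilities file) looks like this, and any URL outside the list is rejected before a request ever leaves the webview:

{
  "permissions": [
    {
      "identifier": "http:default",
      "allow": [
        { "url": "http://localhost:11434/*" },
        { "url": "http://127.0.0.1:11434/*" }
      ]
    }
  ]
}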
localhost gives the same error; 0.0.0.0 produces a different error: "url not allowed on the configured scope: http://0.0.0.0:11434/api/chat"
Sorry for the trouble! I'll spin up ollama and try it out this afternoon.
I've had a go at running it, and it seems to work in development mode 😬. I'm currently on holiday, and debugging differences between development and production builds does not sound super enjoyable, so I'm going to put this on my todo list for the new year.
Sorry for any inconvenience.
+1 Experiencing the same error
+1 same error :)
error + 1
Auto-closed when I merged the likely fix, so re-opening until it has been verified.
I'll have a go now :+1:
By "now" I mean, in 30 minutes time when a nightly build has finished chugging away
I'm getting these strange "string did not match the expected pattern" errors 🤔
I made a mistake in the previous PR: https://github.com/gitbutlerapp/gitbutler/pull/5885
Didn't work for me
Can you try pointing gitbutler at localhost or 127.0.0.1?
> Can you try pointing gitbutler at localhost or 127.0.0.1?
Argh. Let me properly debug this later today.
Hello! Are there any updates on the issue?
I've been experimenting with the dev build to figure out what's going wrong. I came across this thread, which might be helpful. According to the comments, the bug seems to be related to explicitly specifying the port.
I managed to work around this bug by following these steps:
- Add 127.0.0.1 ollama.local to /etc/hosts.
- Configure an Nginx server like this:
server {
    listen 80;
    server_name ollama.local;

    location / {
        proxy_pass http://127.0.0.1:11434;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
- Add http://ollama.local to crates/gitbutler-tauri/capabilities/main.json.
- Update the Ollama endpoint in GitButler from http://127.0.0.1:11434 to http://ollama.local (a quick way to verify the proxy first is sketched just below this list).
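With the /etc/hosts entry and Nginx in place, a quick way to verify the proxy before repointing GitButler (using Ollama's model-listing endpoint, which should simply be forwarded through) is:

curl -i http://ollama.local/api/tags

If that returns 200 with your local models, the remaining variables are only the capability entry and the endpoint setting in GitButler.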
It works now—well, about half the time. Obviously, this is not a solution, but perhaps it could help you solve the problem.
However, I occasionally encounter errors like this:
Invalid response: { "type": "object", "properties": { "result": "Update go.mod with cloud provider version" }, "required": [ "result" ], "additionalProperties": false }
It seems to me that the Ollama response isn’t always correct and doesn’t fully comply with the OLLAMA_CHAT_MESSAGE_FORMAT_SCHEMA, as there is no result field at the top level. The JSON returned by the LLM appears to be strange and doesn’t entirely make sense. This behavior seems expected, given that LLMs tend to hallucinate, especially smaller models like llama3.2 in my case.
Is there anything you can do to address this? Maybe the prompt could be adjusted to force the model to return plain text instead of JSON?
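One idea, purely a suggestion on my part rather than something I've checked against GitButler's code: recent Ollama versions (0.5.0 and later, if I remember correctly) accept a full JSON schema in the format field of /api/chat, which constrains the model to emit exactly the expected shape, e.g. a top-level result string:

curl -s http://127.0.0.1:11434/api/chat \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama3.2",
        "stream": false,
        "messages": [
          {"role": "user", "content": "Suggest a short git branch name for: update go.mod with a new cloud provider version. Reply as JSON."}
        ],
        "format": {
          "type": "object",
          "properties": { "result": { "type": "string" } },
          "required": ["result"]
        }
      }'

Constraining the output this way should make the missing-result failures much rarer than relying on the prompt alone.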
update: I’ve also managed to reproduce the error: “The string did not match the expected pattern”. This happens because of the following call, as well as inconsistencies in the Ollama output.
Same issue here, and the above did not help. I'm not running from source, though, so I'm not sure it's expected to work for me.
+1 Experiencing the same error
+1 Experiencing the same error. I can get it to work in dev mode, but as soon as I do a "nightly-like" build it gives the "string did not match the expected pattern" error.
I found out that GitButler (Version 0.14.16 (20250408.174627)) sends a null Origin header on macOS for Ollama requests, so Ollama returns a 403 response.
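This is easy to reproduce outside the app, assuming the webview really does send a literal null origin: with Ollama's default origin allow-list, the first request below is rejected with the same 403, while the second goes through.

# Disallowed origin -> 403, matching what the app sees
curl -i http://127.0.0.1:11434/api/chat \
  -H "Origin: null" -H "Content-Type: application/json" \
  -d '{"model": "llama3", "messages": []}'

# An origin from the default allow-list -> 200
curl -i http://127.0.0.1:11434/api/chat \
  -H "Origin: http://localhost" -H "Content-Type: application/json" \
  -d '{"model": "llama3", "messages": []}'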
Unfortunately, Ollama cannot be configured to allow a null origin specifically; you have to allow all origins by setting the OLLAMA_ORIGINS environment variable to *, like so:
OLLAMA_ORIGINS="*" ollama serve
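If you use the Ollama macOS app rather than running ollama serve in a terminal, the variable has to be set for the app instead; as far as I know (per Ollama's documentation on environment variables), that is done with launchctl and then restarting the app:

launchctl setenv OLLAMA_ORIGINS "*"
# quit and reopen the Ollama menu bar app so it picks up the new value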
Would it be possible to send a proper Origin header for Ollama requests? These origins are allowed by default:
OLLAMA_ORIGINS: [http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*]
Thanks so much for looking into this!
This sounds like a simple adjustment, but @estib-vega would know more about it, I'm sure.