
Error calling Ollama /api/show endpoint: Error: Error: HTTP 404 Not Found from http://127.0.0.1:11434/api/show

Open · Wilson-fan8877 opened this issue 9 months ago


Relevant environment info

- OS: Ubuntu 22.04
- Continue:
- IDE: VS Code

Description

When using an Ollama LLM, why does this extension call http://127.0.0.1:11434/api/show? Ollama does not have this endpoint.

Error message in dev tools: Error calling Ollama /api/show endpoint: Error: Error: HTTP 404 Not Found from http://127.0.0.1:11434/api/show

To reproduce

No response

Log output

No response

Wilson-fan8877 avatar May 14 '24 17:05 Wilson-fan8877

@Wilson-fan8877 Ollama does have this endpoint, at least in their documentation: https://github.com/ollama/ollama/blob/main/docs/api.md#show-model-information

Could you share what version of Ollama you are using?

One other way to test this would be to try the following curl request:

curl http://localhost:11434/api/show -d '{
  "name": "llama3"
}'
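
If it's easier to test from a script, here's a minimal TypeScript equivalent of that curl request (a sketch assuming the default Ollama port, with "llama3" as a placeholder model name):

// Sketch: query Ollama's /api/show endpoint for model details.
// Assumes Ollama is running on the default port 11434 and that a model
// named "llama3" has been pulled; swap in your own model name.
async function showModel(modelName: string): Promise<void> {
  const response = await fetch("http://localhost:11434/api/show", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name: modelName }),
  });
  if (!response.ok) {
    throw new Error(`HTTP ${response.status} ${response.statusText} from /api/show`);
  }
  console.log(await response.json());
}

showModel("llama3").catch(console.error);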

sestinj avatar May 14 '24 23:05 sestinj

Has the problem been solved?

homeant avatar Jun 15 '24 01:06 homeant

This is still a problem and thwarts onboarding for people using ollama.

trinque avatar Jun 15 '24 21:06 trinque

@trinque @homeant Are you both on the latest version of Ollama? Can you share more details about your setup?

It's pretty clear from their docs that this endpoint should exist, so I'm left wondering what difference in environment might be causing this. Any extra info will help me solve the problem more quickly.

sestinj avatar Jun 17 '24 02:06 sestinj

I am facing the same issue in both VS Code and IntelliJ. It was working a month or so ago; I came back to coding after a month and it's broken with the error message mentioned above. My tab autocomplete config:

"tabAutocompleteModel": {
    "title": "codegemma:7b-code",
    "provider": "ollama",
    "model": "codegemma:7b-code"
  },
  "tabAutocompleteOptions": {
    "useCopyBuffer": true,
    "maxPromptTokens": 400,
    "prefixPercentage": 0.5
  }

Model config:

{
  "title": "codegemma:7b-code",
  "model": "codegemma:7b-code",
  "contextLength": 2048,
  "completionOptions": {},
  "apiBase": "http://localhost:11434",
  "provider": "ollama"
}

curl http://localhost:11434/api/show -d '{
  "name": "codegemma:7b-code"
}'

Running this curl request shows information about the model successfully.

eklavya avatar Jun 22 '24 11:06 eklavya

Same for me (screenshot attached).

Kellenok avatar Jun 23 '24 20:06 Kellenok

I finally figured out that this happens when requesting details about a model that is stored in an outdated format. So unless you manually go through and delete older models, Ollama will keep returning the same response. For this reason we'll be ignoring this error in all future releases. The next JetBrains release should be out early this week, and for VS Code the fix is already out in pre-release.
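
For the curious, the change is roughly along these lines: treat a failed /api/show call as non-fatal and carry on without the model details. This is a sketch, not the exact shipped code; the function name, the apiBase parameter, and the fallback behavior are illustrative:

// Sketch of the fix: treat /api/show failures as non-fatal so that models
// stored in an outdated Ollama format don't block the rest of the extension.
// The names and fallback shape here are assumptions, not the shipped code.
async function tryShowModel(
  apiBase: string,
  modelName: string,
): Promise<Record<string, unknown> | undefined> {
  try {
    const response = await fetch(`${apiBase}/api/show`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ name: modelName }),
    });
    if (!response.ok) {
      // e.g. HTTP 404 for models in an outdated format: log and move on.
      console.warn(`Ignoring /api/show failure: HTTP ${response.status}`);
      return undefined;
    }
    return (await response.json()) as Record<string, unknown>;
  } catch (err) {
    console.warn("Ignoring /api/show error:", err);
    return undefined;
  }
}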

sestinj avatar Jun 30 '24 22:06 sestinj