
Model loading problem with Ollama

Open RaemyS opened this issue 1 year ago • 10 comments

Hi there,

I am using openui through pinokio (https://pinokio.computer/item?uri=https://github.com/pinokiofactory/openui). As an LLM backend, I'm using Ollama in its current version 0.4.6.

When I try to use your tool, I get the error message "Error! 404 Error code: 404 - {'error': {'message': 'model "undefined" not found, try pulling it first', 'type': 'api_error', 'param': None, 'code': None}}" after sending a prompt.

When I tried to set a different model, I noticed that the select box in the settings window does not show any model names, only empty entries: [screenshot]

No matter which of those entries I choose, the error persists.

If I quit Ollama and try to resolve the installed models, the selection is empty: [screenshot]

So the model resolution from Ollama seems to work at least partially (the 6 entries match the 6 currently installed models). My guess is that openui is not able to parse the information from the Ollama model list response correctly, which then leads to the error message above.
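
For reference, a rough way to compare what Ollama itself reports against what the openui backend exposes (a sketch only; it assumes the default ports, with 7878 being openui's default, and that the requests package is installed):

import requests

# What Ollama itself reports: /api/tags lists the installed models with a "name" field
ollama_tags = requests.get("http://localhost:11434/api/tags", timeout=5).json()
print([m.get("name") for m in ollama_tags.get("models", [])])

# What the openui backend hands to the frontend settings dialog
openui_models = requests.get("http://127.0.0.1:7878/v1/models", timeout=5).json()
print(openui_models)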

Do you have any ideas how to solve this problem?

Thx :)

RaemyS avatar Dec 01 '24 16:12 RaemyS

Exact same issue here, with a pre-existing Ollama install.

hashemAlsaggaf avatar Dec 03 '24 08:12 hashemAlsaggaf

Hello, the name property is missing from the JSON returned by the ollama Python library. Even though the Ollama endpoint http://localhost:11434/api/tags returns it ([screenshot]), the openui endpoint http://127.0.0.1:7878/v1/models strips it out ([screenshot]).

Inside the frontend it is used in several places :

  • frontend/src/components/Settings.tsx:109-232-233

It can be fixed by updating the frontend Settings.tsx. A quick-and-dirty solution is to update the object provided by the backend in server.py (function get_ollama_models), roughly as sketched below.

A PR has been created for the Settings fix.
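
For reference, a minimal sketch of that quick-and-dirty backend change, assuming get_ollama_models() in server.py wraps the ollama AsyncClient's list() call and that ollama-python >= 0.4 is installed (the actual openui implementation may differ):

from ollama import AsyncClient

async def get_ollama_models():
    # Sketch only, not the exact openui code: copy the "model" field into the
    # "name" field that the frontend Settings dialog reads.
    try:
        response = await AsyncClient().list()
    except Exception:
        # Ollama not reachable: return an empty list instead of failing
        return []
    models = []
    for entry in response.models:
        data = entry.model_dump()
        data["name"] = data.get("model")  # restore the field the frontend expects
        models.append(data)
    return models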

alexandregodard avatar Dec 08 '24 12:12 alexandregodard

Nice, thanks for the fix @alexandregodard. Unfortunately, I'm not skilled enough with the current tech stack, so I'm not able to compile and test the changes locally without further research.

I hope your fix finds its way into the app soon, so I can update the tool and try again :)

RaemyS avatar Dec 08 '24 16:12 RaemyS

Please, how can this be solved? I'm having the same issue with Ollama:

[screenshot]

Fox-Me avatar Dec 20 '24 18:12 Fox-Me

[quoting @alexandregodard's comment above about the missing name property and the server.py / Settings.tsx fix]

Hello, I have made the same changes on my side, but I still can't see the name of the ollama model after running it.

rainchan0227 avatar Dec 24 '24 17:12 rainchan0227

[quoting @alexandregodard's and @rainchan0227's comments above]

Also, after I changed the frontend code, running python -m openui still doesn't seem to pick up the change.

ryt66 avatar Dec 28 '24 09:12 ryt66

If you run into any usage problems, feel free to reach out to me, folks~ The current issue has been resolved: https://github.com/wandb/openui/issues/221

stone100010 avatar Feb 07 '25 08:02 stone100010

In the backend's server.py, refactor the @router.get("/v1/models", tags="openui/models") route as below:


from datetime import datetime  # may already be imported in server.py
from typing import Optional

from ollama._types import SubscriptableBaseModel, ModelDetails
from pydantic import ByteSize

# router, asyncio and the get_*_models helpers already exist in server.py


class Model_new(SubscriptableBaseModel):
    # Same fields as ollama's ListResponse.Model, plus the "name" field the
    # openui frontend expects
    model: Optional[str] = None
    name: Optional[str] = None
    modified_at: Optional[datetime] = None
    digest: Optional[str] = None
    size: Optional[ByteSize] = None
    details: Optional[ModelDetails] = None


@router.get("/v1/models", tags="openui/models")
async def models():
    tasks = [
        get_openai_models(),
        get_groq_models(),
        get_ollama_models(),
        get_litellm_models(),
    ]
    openai_models, groq_models, ollama_models, litellm_models = await asyncio.gather(
        *tasks
    )

    # Re-wrap each Ollama entry so it carries a "name" copied from "model"
    final_ollama = []
    for entry in ollama_models:
        data = entry.model_dump()
        new_model = Model_new(**data)
        new_model.name = new_model.model
        final_ollama.append(new_model)

    return {
        "models": {
            "openai": openai_models,
            "groq": groq_models,
            "ollama": final_ollama,
            "litellm": litellm_models,
        }
    }

tritam593 avatar Feb 16 '25 08:02 tritam593

[quoting @tritam593's server.py refactor above]

I checked this refactor. It does not work. When printing data inside the for loop, the following is the output:

<class 'ollama._types.ListResponse.Model'> None
<class 'ollama._types.ListResponse.Model'> None
<class 'ollama._types.ListResponse.Model'> None
<class 'ollama._types.ListResponse.Model'> None
<class 'ollama._types.ListResponse.Model'> None
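
For reference, the same check can be reproduced outside of openui (a sketch, assuming ollama-python >= 0.4 and a running Ollama daemon); with the 0.4+ typed responses the field is called model, so name comes back empty:

import ollama

resp = ollama.list()
for m in resp.models:
    # "model" holds the tag (e.g. "llama3:latest"); this type has no populated "name"
    print(type(m), m.model, getattr(m, "name", None))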

eragrahariamit avatar Feb 21 '25 06:02 eragrahariamit

This issue still exists.

[screenshot]

Ollama is running fine, as it responds when visiting http://localhost:11434/.

Relevant Docker Compose files to recreate the issue. Note: they are separate files.

openui

services:
  openui:
    image: ghcr.io/wandb/openui
    environment:
      - OLLAMA_HOST=http://host.docker.internal:11434
    container_name: openui
    ports:
      - "7878:7878"

ollama

services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    restart: unless-stopped
    ports:
      - "11434:11434"
    volumes:
      - ./root/.ollama:/root/.ollama
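
A quick connectivity check from inside the openui container can help rule out networking (a sketch; it assumes docker exec access and that the requests package is available in the image, which may not be the case). Note that host.docker.internal does not resolve on Linux unless the compose file adds an extra_hosts entry mapping it to host-gateway; if it does not resolve, the backend will see an empty model list.

import os
import requests

# Sketch: verify the backend can reach Ollama at the configured OLLAMA_HOST
host = os.environ.get("OLLAMA_HOST", "http://127.0.0.1:11434")
resp = requests.get(f"{host}/api/tags", timeout=5)
print(resp.status_code)
print([m.get("name") for m in resp.json().get("models", [])])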

Any help is appreciated.

RashiqAzhan avatar Nov 19 '25 18:11 RashiqAzhan