
ollama version 0.9.3 with latest docker.io/intelanalytics/ipex-llm-inference-cpp-xpu:latest Docker Image

Railsimulatornet opened this issue 4 months ago · 3 comments

Describe the bug
The latest `docker.io/intelanalytics/ipex-llm-inference-cpp-xpu:latest` Docker image still ships Ollama 0.9.3, which is too old to run newer models such as gpt-oss.

How to reproduce
Steps to reproduce the error:

1. Install the stack with this Compose file:

```yaml
services:
  intel-llm:
    image: docker.io/intelanalytics/ipex-llm-inference-cpp-xpu:latest
    container_name: intel-llm
    devices:
      - /dev/dri
    volumes:
      - /volume2/docker/chatgpt/models:/root/.ollama/models
    environment:
      - no_proxy=localhost,127.0.0.1
      - OLLAMA_HOST=0.0.0.0
      - DEVICE=iGPU
      - OLLAMA_INTEL_GPU=true
      - HOSTNAME=intel-llm
      - OLLAMA_NUM_GPU=999
      - ZES_ENABLE_SYSMAN=1
    restart: unless-stopped
    command: sh -c 'mkdir -p /llm/ollama && cd /llm/ollama && init-ollama && exec ./ollama serve'

  openwebui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: openwebui
    volumes:
      - /volume2/docker/chatgpt/webui:/app/backend/data
    ports:
      - "3000:8080"
    environment:
      - OLLAMA_BASE_URL=http://intel-llm:11434
    restart: unless-stopped

volumes:
  models:
  open-webui:
```

2. Download the gpt-oss model.
3. In a terminal, run `./ollama -v`.
4. The output is `ollama version is 0.9.3`.
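The version check in steps 3–4 can also be run from the host without entering the container. This is a minimal sketch, assuming the container name `intel-llm` and the binary path `/llm/ollama/ollama` from the Compose file and start command above:

```shell
# Print the Ollama version bundled in the running container
docker exec intel-llm /llm/ollama/ollama -v

# Extract just the version number (e.g. "0.9.3") from output like
# "ollama version is 0.9.3", useful for scripting a version check
docker exec intel-llm /llm/ollama/ollama -v | grep -oE '[0-9]+\.[0-9]+\.[0-9]+'
```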



Additional context
How can I update Ollama to a current version? Thanks for the help.

Railsimulatornet · Aug 17 '25 12:08

No, currently you cannot update it yourself; you have to wait for the ipex-llm developers to update the image.

Ellie-Williams-007 · Aug 19 '25 03:08

Looks like there is NO DEVELOPMENT anymore... the ipex-llm-inference-cpp-xpu:latest Docker image is a few months old now and includes a very old Ollama version (0.9.3?). It is NOT possible to run the latest gpt-oss models with this old Ollama version. What a shame. I regret buying an Intel NUC 15 Pro so much... it is not possible for me to build it on my own with a working new Ollama version and GPU passthrough on my Proxmox. I need the Docker version!! Developers, WAKE UP!! Please...
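The claim that the `:latest` tag is months old can be verified locally. A minimal sketch, assuming the Docker CLI is available on the host and the image has been pulled:

```shell
# Show the build timestamp baked into the pulled :latest image
img=docker.io/intelanalytics/ipex-llm-inference-cpp-xpu:latest
docker image inspect --format '{{.Created}}' "$img"

# The timestamp is RFC 3339, e.g. "2025-06-12T08:41:03.123456789Z";
# keep just the date portion (YYYY-MM-DD) for a quick comparison
docker image inspect --format '{{.Created}}' "$img" | cut -dT -f1
```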

Dosperado74 · Nov 06 '25 16:11

> Looks like there is NO DEVELOPMENT anymore... the ipex-llm-inference-cpp-xpu:latest Docker image is a few months old now and includes a very old Ollama version (0.9.3?). It is NOT possible to run the latest gpt-oss models with this old Ollama version. What a shame. I regret buying an Intel NUC 15 Pro so much... it is not possible for me to build it on my own with a working new Ollama version and GPU passthrough on my Proxmox. I need the Docker version!! Developers, WAKE UP!! Please...

This is because Ollama made this change upstream:

https://github.com/ollama/ollama/pull/9650

thiscantbeserious · Dec 01 '25 12:12