
Ollama fails to load model on A380 GPU

GamerSocke opened this issue 1 year ago · 4 comments

Hello,

I have run into an issue while running Ollama on the A380 GPU.

The following log snippet is from the Ollama log, captured while executing a prompt from open-webui.

The system runs Fedora 39 with kernel 6.10.7, and the GPU has Resizable BAR enabled.

Ollama and open-webui are running in two separate Docker containers. I have tried multiple models, smaller ones and bigger ones, and I always get the same error.

Any recommendations on resolving this?
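For reference, this is roughly how the Ollama container is started with the Arc GPU passed through. The image tag, ports, and paths below are assumptions based on the ipex-llm docs rather than my exact setup:

docker run -d --name ollama-ipex \
  --device=/dev/dri \
  -e ZES_ENABLE_SYSMAN=1 \
  -e OLLAMA_HOST=0.0.0.0:11434 \
  -v ~/.ollama:/root/.ollama \
  -p 11434:11434 \
  intelanalytics/ipex-llm-inference-cpp-xpu:latest \
  bash -c "mkdir -p /llm/ollama && cd /llm/ollama && init-ollama && ./ollama serve"

The important part is the --device=/dev/dri passthrough, without which SYCL cannot see the A380 inside the container.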

time=2024-10-10T06:16:56.429+08:00 level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=25 memory.available="18.4 GiB" memory.required.full="706.6 MiB" memory.required.partial="706.6 MiB" memory.required.kv="24.0 MiB" memory.weights.total="373.7 MiB" memory.weights.repeating="235.8 MiB" memory.weights.nonrepeating="137.9 MiB" memory.graph.full="298.5 MiB" memory.graph.partial="405.0 MiB"
time=2024-10-10T06:16:56.429+08:00 level=INFO source=server.go:342 msg="starting llama server" cmd="/tmp/ollama2656666813/runners/cpu_avx2/ollama_llama_server --model /root/.ollama/models/blobs/sha256-c5396e06af294bd101b30dce59131a76d2b773e76950acc870eda801d3ab0515 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 999 --parallel 1 --port 34029"
time=2024-10-10T06:16:56.429+08:00 level=INFO source=sched.go:338 msg="loaded runners" count=1
time=2024-10-10T06:16:56.429+08:00 level=INFO source=server.go:529 msg="waiting for llama runner to start responding"
time=2024-10-10T06:16:56.429+08:00 level=INFO source=server.go:566 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="f6b084d" tid="140384432765952" timestamp=1728512216
INFO [main] system info | n_threads=12 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="140384432765952" timestamp=1728512216 total_threads=24
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="23" port="34029" tid="140384432765952" timestamp=1728512216
llama_model_loader: loaded meta data with 34 key-value pairs and 290 tensors from /root/.ollama/models/blobs/sha256-c5396e06af294bd101b30dce59131a76d2b773e76950acc870eda801d3ab0515 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen2.5 0.5B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Qwen2.5
llama_model_loader: - kv   5:                         general.size_label str              = 0.5B
llama_model_loader: - kv   6:                            general.license str              = apache-2.0
llama_model_loader: - kv   7:                       general.license.link str              = https://huggingface.co/Qwen/Qwen2.5-0...
llama_model_loader: - kv   8:                   general.base_model.count u32              = 1
llama_model_loader: - kv   9:                  general.base_model.0.name str              = Qwen2.5 0.5B
llama_model_loader: - kv  10:          general.base_model.0.organization str              = Qwen
llama_model_loader: - kv  11:              general.base_model.0.repo_url str              = https://huggingface.co/Qwen/Qwen2.5-0.5B
llama_model_loader: - kv  12:                               general.tags arr[str,2]       = ["chat", "text-generation"]
llama_model_loader: - kv  13:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  14:                          qwen2.block_count u32              = 24
llama_model_loader: - kv  15:                       qwen2.context_length u32              = 32768
llama_model_loader: - kv  16:                     qwen2.embedding_length u32              = 896
llama_model_loader: - kv  17:                  qwen2.feed_forward_length u32              = 4864
llama_model_loader: - kv  18:                 qwen2.attention.head_count u32              = 14
llama_model_loader: - kv  19:              qwen2.attention.head_count_kv u32              = 2
llama_model_loader: - kv  20:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  21:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  22:                          general.file_type u32              = 15
llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  28:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  29:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  30:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  31:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  32:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  33:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  121 tensors
llama_model_loader: - type q5_0:  132 tensors
llama_model_loader: - type q8_0:   13 tensors
llama_model_loader: - type q4_K:   12 tensors
llama_model_loader: - type q6_K:   12 tensors
llm_load_vocab: special tokens definition check successful ( 293/151936 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = qwen2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 151936
llm_load_print_meta: n_merges         = 151387
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 896
llm_load_print_meta: n_head           = 14
llm_load_print_meta: n_head_kv        = 2
llm_load_print_meta: n_layer          = 24
llm_load_print_meta: n_rot            = 64
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 64
llm_load_print_meta: n_embd_head_v    = 64
llm_load_print_meta: n_gqa            = 7
llm_load_print_meta: n_embd_k_gqa     = 128
llm_load_print_meta: n_embd_v_gqa     = 128
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 4864
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 1B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 494.03 M
llm_load_print_meta: model size       = 373.71 MiB (6.35 BPW) 
llm_load_print_meta: general.name     = Qwen2.5 0.5B Instruct
llm_load_print_meta: BOS token        = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token        = 151645 '<|im_end|>'
llm_load_print_meta: PAD token        = 151643 '<|endoftext|>'
llm_load_print_meta: LF token         = 148848 'ÄĬ'
llm_load_print_meta: EOT token        = 151645 '<|im_end|>'
[SYCL] call ggml_init_sycl
ggml_init_sycl: GGML_SYCL_DEBUG: 1
ggml_init_sycl: GGML_SYCL_F16: no
time=2024-10-10T06:16:56.680+08:00 level=INFO source=server.go:566 msg="waiting for server to become available" status="llm server loading model"
[SYCL] call ggml_backend_sycl_print_sycl_devices
found 1 SYCL devices:
|  |                   |                                       |       |Max    |        |Max  |Global |                     |
|  |                   |                                       |       |compute|Max work|sub  |mem    |                     |
|ID|        Device Type|                                   Name|Version|units  |group   |group|size   |       Driver version|
|--|-------------------|---------------------------------------|-------|-------|--------|-----|-------|---------------------|
| 0| [level_zero:gpu:0]|                Intel Arc A380 Graphics|    1.3|    128|    1024|   32|  6064M|            1.3.30049|
[SYCL] call ggml_backend_sycl_set_mul_device_mode
ggml_backend_sycl_set_mul_device_mode: true
detect 1 SYCL GPUs: [0] with top Max compute units:128
[SYCL] call ggml_backend_sycl_host_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_get_device_memory
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_buffer_type
llm_load_tensors: ggml ctx size =    0.28 MiB
[SYCL] call ggml_backend_sycl_host_buffer_type
[SYCL] call ggml_backend_sycl_host_buffer_type
llm_load_tensors: offloading 24 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 25/25 layers to GPU
llm_load_tensors:      SYCL0 buffer size =   373.73 MiB
llm_load_tensors:        CPU buffer size =   137.94 MiB
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_init
Using device 0 (Intel(R) Arc(TM) A380 Graphics) as main device
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_get_device_id
llama_kv_cache_init:      SYCL0 KV buffer size =    24.00 MiB
llama_new_context_with_model: KV self size  =   24.00 MiB, K (f16):   12.00 MiB, V (f16):   12.00 MiB
[SYCL] call ggml_backend_sycl_host_buffer_type
llama_new_context_with_model:  SYCL_Host  output buffer size =     0.58 MiB
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_host_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[1728512217] warming up the model with an empty run
llama_new_context_with_model:      SYCL0 compute buffer size =   298.50 MiB
llama_new_context_with_model:  SYCL_Host compute buffer size =     5.76 MiB
llama_new_context_with_model: graph nodes  = 870
llama_new_context_with_model: graph splits = 2
[SYCL] call ggml_backend_sycl_buffer_type
call ggml_sycl_rms_norm
The program was built for 1 devices
Build program log for 'Intel(R) Arc(TM) A380 Graphics':
 -999 (Unknown PI error)Exception caught at file:/home/runner/_work/llm.cpp/llm.cpp/ollama-internal/llm/llama.cpp/ggml-sycl.cpp, line:14714
time=2024-10-10T06:16:57.683+08:00 level=ERROR source=sched.go:344 msg="error loading llama server" error="llama runner process has terminated: exit status 1 "

GamerSocke · Oct 09 '24

Hi @GamerSocke, we are reproducing your issue and will inform you once we have a solution.

sgwhat · Oct 11 '24

@GamerSocke We notice you are using Fedora 39 with kernel 6.10.7; we only support Ubuntu 22.04 with Linux kernels 6.2 and 6.5.

qiuxin2012 · Oct 11 '24

@GamerSocke Set OLLAMA_NUM_PARALLEL=1 in your script just before ollama serve, as shown in the sketch below.

I also have an Intel Arc A380, and it's the only way I can load a model into its 6 GB of VRAM.

I use Ubuntu 24.04 with Linux kernel 6.8.
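Roughly, the startup script looks like this (the path to the ollama binary is whatever init-ollama created; adjust as needed):

#!/bin/bash
# Limit Ollama to one parallel request so the model fits in the A380's 6 GB VRAM
export OLLAMA_NUM_PARALLEL=1
cd /llm/ollama
./ollama serve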

NikosDi · Nov 30 '24

I'm having a similar issue on Unraid.

uname -r
6.6.78-Unraid

deepseek-r1:7b:

llama_init_from_model: n_ctx_per_seq (16384) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
Running with Environment Variables:
  GGML_SYCL_DEBUG: 0
  GGML_SYCL_DISABLE_OPT: 1
Build with Macros:
  GGML_SYCL_FORCE_MMQ: no
  GGML_SYCL_F16: no
Found 1 SYCL devices:
|  |                   |                                       |       |Max    |        |Max  |Global |                     |
|  |                   |                                       |       |compute|Max work|sub  |mem    |                     |
|ID|        Device Type|                                   Name|Version|units  |group   |group|size   |       Driver version|
|--|-------------------|---------------------------------------|-------|-------|--------|-----|-------|---------------------|
| 0| [level_zero:gpu:0]|                Intel Arc A380 Graphics|  12.56|    128|    1024|   32|  6064M|     1.6.32224.500000|
SYCL Optimization Feature:
|ID|        Device Type|Reorder|
|--|-------------------|-------|
| 0| [level_zero:gpu:0]|      Y|
llama_kv_cache_init: kv_size = 16384, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 28, can_shift = 1
llama_kv_cache_init:        CPU KV buffer size =    64.00 MiB
llama_kv_cache_init:      SYCL0 KV buffer size =   832.00 MiB
llama_init_from_model: KV self size  =  896.00 MiB, K (f16):  448.00 MiB, V (f16):  448.00 MiB
llama_init_from_model:        CPU  output buffer size =     0.59 MiB
llama_init_from_model:      SYCL0 compute buffer size =   956.02 MiB
llama_init_from_model:  SYCL_Host compute buffer size =   937.01 MiB
llama_init_from_model: graph nodes  = 986
llama_init_from_model: graph splits = 28 (with bs=512), 3 (with bs=1)
time=2025-04-29T12:57:19.888+01:00 level=WARN source=runner.go:799 msg="%s: warming up the model with an empty run - please wait ... " !BADKEY=loadModel
ollama-lib: ggml-cpu.c:9128: ggml_compute_forward_soft_max_f32: Assertion `sum > 0.0' failed.
ollama-lib: ggml-cpu.c:9128: ggml_compute_forward_soft_max_f32: Assertion `sum > 0.0' failed.
ollama-lib: ggml-cpu.c:9128: ggml_compute_forward_soft_max_f32: Assertion `sum > 0.0' failed.
ollama-lib: ggml-cpu.c:9128: ggml_compute_forward_soft_max_f32: Assertion `sum > 0.0' failed.
ollama-lib: ggml-cpu.c:9128: ggml_compute_forward_soft_max_f32: Assertion `sum > 0.0' failed.
ollama-lib: ggml-cpu.c:9128: ggml_compute_forward_soft_max_f32: Assertion `sum > 0.0' failed.
ollama-lib: ggml-cpu.c:9128: ggml_compute_forward_soft_max_f32: Assertion `sum > 0.0' failed.
ollama-lib: ggml-cpu.c:9128: ggml_compute_forward_soft_max_f32: Assertion `sum > 0.0' failed.
time=2025-04-29T12:57:20.258+01:00 level=INFO source=server.go:635 msg="waiting for server to become available" status="llm server error"
time=2025-04-29T12:57:21.012+01:00 level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: signal: aborted"
[GIN] 2025/04/29 - 12:57:21 | 500 | 36.801884215s |      172.20.0.9 | POST     "/api/chat"
time=2025-04-29T12:57:26.064+01:00 level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.051317491 model=/root/.ollama/models/blobs/sha256-96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49
time=2025-04-29T12:57:26.314+01:00 level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.301015328 model=/root/.ollama/models/blobs/sha256-96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49
time=2025-04-29T12:57:26.564+01:00 level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.55130545 model=/root/.ollama/models/blobs/sha256-96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49

But this model was working with the previous version, 2.2.0.

My docker compose:

ollama-intel-gpu:
    container_name: ollama-intel-gpu
    devices:
      - /dev/dri:/dev/dri
    volumes:
      - $APP_DATA/ollama/intel:/root/.ollama
    environment:
      - ONEAPI_DEVICE_SELECTOR=level_zero:0
      - OLLAMA_INTEL_GPU=true
      - OLLAMA_MAX_LOADED_MODELS=1
      - OLLAMA_NUM_PARALLEL=1
      - OLLAMA_KEEP_ALIVE=-1
      - IPEX_LLM_NUM_CTX=16384

And the Dockerfile:

FROM intelanalytics/ipex-llm-inference-cpp-xpu:latest
# Let Level Zero SYSMAN report GPU memory to Ollama
ENV ZES_ENABLE_SYSMAN=1
ENV USE_XETLA=OFF
# Listen on all interfaces so open-webui can reach the server
ENV OLLAMA_HOST=0.0.0.0:11434
# init-ollama (shipped with the ipex-llm image) sets up the ollama binary and symlinks
RUN mkdir -p /llm/ollama && \
    cd /llm/ollama && \
    init-ollama
WORKDIR /llm/ollama
ENTRYPOINT ["./ollama", "serve"]

NooNameR · Apr 29 '25