
Unable to load dynamic library error when using container

Open otavio-silva opened this issue 2 years ago • 14 comments

Description

When trying to run a model using the container, it gives an error about loading a dynamic library. Ollama is able to list the available models but not run them. The container can see the GPU, as nvidia-smi gives the expected output.
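
For reference, the nvidia-smi check was run inside the container, e.g. (a sketch, assuming the container name used in the reproduction steps below):

podman exec -it ollama-20 nvidia-smi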

Current output

Error: Unable to load dynamic library: Unable to load dynamic server library: /tmp/ollama946395612/cpu_avx2/libext_server.so: undefined symbol: _ZTVN10__cxxabiv117__c

Expected output

The model runs correctly.

Steps to reproduce

  1. Run the command podman run --device nvidia.com/gpu=all --security-opt label=disable --detach --volume .ollama:/root/.ollama -p 11434:11434 --name ollama-20 ollama/ollama:0.1.20
  2. Run the command podman exec -it ollama-20 ollama run llama2
  3. See error

System info

Host Name:                                 GE76RAIDER
OS Name:                                   Microsoft Windows 11 Pro
OS Version:                                10.0.22631 N/A Build 22631
OS Manufacturer:                           Microsoft Corporation
OS Configuration:                          Standalone Workstation
OS Build Type:                             Multiprocessor Free
Registered Owner:                          [email protected]
Registered Organization:                   N/A
Product ID:                                00330-80000-00000-AA520
Original Install Date:                     02/08/2023, 14:30:14
System Boot Time:                          10/01/2024, 12:32:44
System Manufacturer:                       Micro-Star International Co., Ltd.
System Model:                              Raider GE76 12UHS
System Type:                               x64-based PC
Processor(s):                              1 Processor(s) Installed.
                                           [01]: Intel64 Family 6 Model 154 Stepping 3 GenuineIntel ~2900 Mhz
BIOS Version:                              American Megatrends International, LLC. E17K4IMS.20D, 26/06/2023
Windows Directory:                         C:\WINDOWS
System Directory:                          C:\WINDOWS\system32
Boot Device:                               \Device\HarddiskVolume1
System Locale:                             pt-br;Portuguese (Brazil)
Input Locale:                              en-us;English (United States)
Time Zone:                                 (UTC-03:00) Brasilia
Total Physical Memory:                     65,237 MB
Available Physical Memory:                 44,469 MB
Virtual Memory: Max Size:                  74,965 MB
Virtual Memory: Available:                 47,017 MB
Virtual Memory: In Use:                    27,948 MB
Page File Location(s):                     C:\pagefile.sys
Domain:                                    WORKGROUP
Logon Server:                              \\GE76RAIDER
Hotfix(s):                                 4 Hotfix(s) Installed.
                                           [01]: KB5033920
                                           [02]: KB5027397
                                           [03]: KB5034123
                                           [04]: KB5032393
Network Card(s):                           3 NIC(s) Installed.
                                           [01]: Killer E3100G 2.5 Gigabit Ethernet Controller
                                                 Connection Name: Ethernet
                                                 Status:          Media disconnected
                                           [02]: Killer(R) Wi-Fi 6E AX1675i 160MHz Wireless Network Adapter (211NGW)
                                                 Connection Name: Wi-Fi
                                                 DHCP Enabled:    Yes
                                                 DHCP Server:     192.168.1.1
                                                 IP address(es)
                                                 [01]: 192.168.1.26
                                           [03]: TAP-Windows Adapter V9
                                                 Connection Name: TAP-Windows
                                                 Status:          Media disconnected
Hyper-V Requirements:                      A hypervisor has been detected. Features required for Hyper-V will not be displayed.


— otavio-silva, Jan 12 '24

Sorry you hit this error! Would it be possible to run docker pull ollama/ollama or docker pull ollama/ollama:0.1.20, depending on the image you have? It seems some new CPU instruction detection features were accidentally included in 0.1.20 when it was published, even though they are slated for the next release (sorry about that). The Docker image has just been corrected and should no longer produce this error. Keep me posted if that fixes it!
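
Since you're using podman, the equivalent would look something like this (a sketch, assuming the container name from your reproduction steps; the container has to be recreated to pick up the corrected image):

podman pull ollama/ollama:0.1.20
podman rm -f ollama-20
podman run --device nvidia.com/gpu=all --security-opt label=disable --detach --volume .ollama:/root/.ollama -p 11434:11434 --name ollama-20 ollama/ollama:0.1.20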

— jmorganca, Jan 12 '24

v0.1.20 fixed this for me. Insanely fast fix, thank you!

— MarvinJWendt, Jan 12 '24

@jmorganca thank you for the incredibly fast response! I just pulled the most recent 0.1.20 image and it now works as intended. However, it is not using the GPU, even though nvidia-smi gives the expected output.

— otavio-silva, Jan 12 '24

@otavio-silva do you have the logs handy? Right after Ollama starts, it should print its CUDA detection status in the logs. You can find them by running:

journalctl --no-pager -u ollama 

There should be a section like this:

2024/01/12 00:45:33 gpu.go:88: Detecting GPU type
2024/01/12 00:45:33 gpu.go:208: Searching for GPU management library libnvidia-ml.so
2024/01/12 00:45:33 gpu.go:253: Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.545.23.08]
2024/01/12 00:45:35 gpu.go:94: Nvidia GPU detected
2024/01/12 00:45:35 gpu.go:135: CUDA Compute Capability detected: 8.9
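
If Ollama runs in a container rather than under systemd, the same messages should appear in the container logs instead; a sketch, assuming the container name from your steps:

podman logs ollama-20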

Thanks so much, and sorry it isn't working for you yet.

— jmorganca, Jan 12 '24

@jmorganca found the logs, this is the output:

2024/01/12 00:51:25 images.go:808: total blobs: 31
2024/01/12 00:51:26 images.go:815: total unused blobs removed: 0
2024/01/12 00:51:26 routes.go:930: Listening on [::]:11434 (version 0.1.20)
2024/01/12 00:51:26 shim_ext_server.go:142: Dynamic LLM variants [cuda]
2024/01/12 00:51:26 gpu.go:88: Detecting GPU type
2024/01/12 00:51:26 gpu.go:203: Searching for GPU management library libnvidia-ml.so
2024/01/12 00:51:26 gpu.go:248: Discovered GPU libraries: []
2024/01/12 00:51:26 gpu.go:203: Searching for GPU management library librocm_smi64.so
2024/01/12 00:51:26 gpu.go:248: Discovered GPU libraries: []
2024/01/12 00:51:26 routes.go:953: no GPU detected
[GIN] 2024/01/12 - 00:51:32 | 200 |      21.948µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/01/12 - 00:51:32 | 200 |   13.927135ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/01/12 - 00:51:32 | 200 |   11.325871ms |       127.0.0.1 | POST     "/api/show"
2024/01/12 00:51:48 llm.go:71: GPU not available, falling back to CPU
2024/01/12 00:51:48 ext_server_common.go:136: Initializing internal llama server
llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256:22f7f8ef5f4c791c1b03d7eb414399294764d7cc82c7e94aa81a1feb80a983a2 (version GGUF V2)
llama_model_loader: - tensor    0:                token_embd.weight q4_0     [  4096, 32000,     1,     1 ]
llama_model_loader: - tensor    1:           blk.0.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor    2:            blk.0.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor    3:            blk.0.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor    4:              blk.0.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor    5:            blk.0.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor    6:              blk.0.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor    7:         blk.0.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor    8:              blk.0.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor    9:              blk.0.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   10:           blk.1.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   11:            blk.1.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor   12:            blk.1.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   13:              blk.1.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   14:            blk.1.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   15:              blk.1.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   16:         blk.1.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   17:              blk.1.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   18:              blk.1.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   19:          blk.10.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   20:           blk.10.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor   21:           blk.10.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   22:             blk.10.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   23:           blk.10.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   24:             blk.10.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   25:        blk.10.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   26:             blk.10.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   27:             blk.10.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   28:          blk.11.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   29:           blk.11.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor   30:           blk.11.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   31:             blk.11.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   32:           blk.11.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   33:             blk.11.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   34:        blk.11.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   35:             blk.11.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   36:             blk.11.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   37:          blk.12.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   38:           blk.12.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor   39:           blk.12.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   40:             blk.12.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   41:           blk.12.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   42:             blk.12.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   43:        blk.12.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   44:             blk.12.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   45:             blk.12.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   46:          blk.13.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   47:           blk.13.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor   48:           blk.13.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   49:             blk.13.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   50:           blk.13.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   51:             blk.13.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   52:        blk.13.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   53:             blk.13.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   54:             blk.13.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   55:          blk.14.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   56:           blk.14.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor   57:           blk.14.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   58:             blk.14.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   59:           blk.14.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   60:             blk.14.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   61:        blk.14.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   62:             blk.14.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   63:             blk.14.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   64:          blk.15.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   65:           blk.15.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor   66:           blk.15.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   67:             blk.15.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   68:           blk.15.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   69:             blk.15.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   70:        blk.15.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   71:             blk.15.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   72:             blk.15.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   73:          blk.16.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   74:           blk.16.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor   75:           blk.16.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   76:             blk.16.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   77:           blk.16.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   78:             blk.16.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   79:        blk.16.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   80:             blk.16.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   81:             blk.16.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   82:          blk.17.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   83:           blk.17.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor   84:           blk.17.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   85:             blk.17.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   86:           blk.17.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   87:             blk.17.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   88:        blk.17.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   89:             blk.17.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   90:             blk.17.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   91:          blk.18.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   92:           blk.18.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor   93:           blk.18.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   94:             blk.18.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   95:           blk.18.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   96:             blk.18.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   97:        blk.18.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   98:             blk.18.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   99:             blk.18.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  100:          blk.19.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  101:           blk.19.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  102:           blk.19.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  103:             blk.19.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  104:           blk.19.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  105:             blk.19.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  106:        blk.19.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  107:             blk.19.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  108:             blk.19.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  109:           blk.2.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  110:            blk.2.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  111:            blk.2.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  112:              blk.2.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  113:            blk.2.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  114:              blk.2.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  115:         blk.2.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  116:              blk.2.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  117:              blk.2.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  118:          blk.20.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  119:           blk.20.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  120:           blk.20.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  121:             blk.20.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  122:           blk.20.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  123:             blk.20.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  124:        blk.20.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  125:             blk.20.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  126:             blk.20.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  127:          blk.21.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  128:           blk.21.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  129:           blk.21.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  130:             blk.21.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  131:           blk.21.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  132:             blk.21.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  133:        blk.21.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  134:             blk.21.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  135:             blk.21.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  136:          blk.22.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  137:           blk.22.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  138:           blk.22.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  139:             blk.22.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  140:           blk.22.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  141:             blk.22.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  142:        blk.22.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  143:             blk.22.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  144:             blk.22.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  145:          blk.23.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  146:           blk.23.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  147:           blk.23.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  148:             blk.23.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  149:           blk.23.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  150:             blk.23.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  151:        blk.23.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  152:             blk.23.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  153:             blk.23.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  154:           blk.3.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  155:            blk.3.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  156:            blk.3.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  157:              blk.3.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  158:            blk.3.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  159:              blk.3.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  160:         blk.3.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  161:              blk.3.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  162:              blk.3.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  163:           blk.4.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  164:            blk.4.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  165:            blk.4.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  166:              blk.4.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  167:            blk.4.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  168:              blk.4.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  169:         blk.4.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  170:              blk.4.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  171:              blk.4.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  172:           blk.5.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  173:            blk.5.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  174:            blk.5.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  175:              blk.5.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  176:            blk.5.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  177:              blk.5.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  178:         blk.5.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  179:              blk.5.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  180:              blk.5.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  181:           blk.6.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  182:            blk.6.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  183:            blk.6.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  184:              blk.6.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  185:            blk.6.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  186:              blk.6.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  187:         blk.6.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  188:              blk.6.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  189:              blk.6.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  190:           blk.7.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  191:            blk.7.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  192:            blk.7.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  193:              blk.7.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  194:            blk.7.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  195:              blk.7.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  196:         blk.7.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  197:              blk.7.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  198:              blk.7.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  199:           blk.8.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  200:            blk.8.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  201:            blk.8.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  202:              blk.8.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  203:            blk.8.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  204:              blk.8.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  205:         blk.8.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  206:              blk.8.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  207:              blk.8.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  208:           blk.9.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  209:            blk.9.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  210:            blk.9.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  211:              blk.9.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  212:            blk.9.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  213:              blk.9.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  214:         blk.9.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  215:              blk.9.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  216:              blk.9.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  217:                    output.weight q6_K     [  4096, 32000,     1,     1 ]
llama_model_loader: - tensor  218:          blk.24.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  219:           blk.24.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  220:           blk.24.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  221:             blk.24.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  222:           blk.24.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  223:             blk.24.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  224:        blk.24.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  225:             blk.24.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  226:             blk.24.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  227:          blk.25.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  228:           blk.25.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  229:           blk.25.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  230:             blk.25.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  231:           blk.25.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  232:             blk.25.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  233:        blk.25.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  234:             blk.25.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  235:             blk.25.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  236:          blk.26.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  237:           blk.26.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  238:           blk.26.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  239:             blk.26.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  240:           blk.26.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  241:             blk.26.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  242:        blk.26.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  243:             blk.26.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  244:             blk.26.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  245:          blk.27.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  246:           blk.27.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  247:           blk.27.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  248:             blk.27.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  249:           blk.27.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  250:             blk.27.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  251:        blk.27.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  252:             blk.27.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  253:             blk.27.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  254:          blk.28.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  255:           blk.28.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  256:           blk.28.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  257:             blk.28.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  258:           blk.28.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  259:             blk.28.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  260:        blk.28.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  261:             blk.28.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  262:             blk.28.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  263:          blk.29.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  264:           blk.29.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  265:           blk.29.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  266:             blk.29.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  267:           blk.29.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  268:             blk.29.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  269:        blk.29.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  270:             blk.29.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  271:             blk.29.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  272:          blk.30.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  273:           blk.30.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  274:           blk.30.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  275:             blk.30.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  276:           blk.30.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  277:             blk.30.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  278:        blk.30.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  279:             blk.30.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  280:             blk.30.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  281:          blk.31.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  282:           blk.31.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  283:           blk.31.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  284:             blk.31.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  285:           blk.31.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  286:             blk.31.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  287:        blk.31.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  288:             blk.31.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  289:             blk.31.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  290:               output_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = LLaMA v2
llama_model_loader: - kv   2:                       llama.context_length u32              = 4096
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 11008
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 2
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  15:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  17:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  18:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V2
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 4096
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 32
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 11008
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 6.74 B
llm_load_print_meta: model size       = 3.56 GiB (4.54 BPW)
llm_load_print_meta: general.name     = LLaMA v2
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.11 MiB
llm_load_tensors: mem required  = 3647.98 MiB
..................................................................................................
llama_new_context_with_model: n_ctx      = 4096
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: KV self size  = 2048.00 MiB, K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
llama_build_graph: non-view tensors processed: 676/676
llama_new_context_with_model: compute buffer total size = 291.19 MiB
2024/01/12 00:53:21 ext_server_common.go:144: Starting internal llama main loop
[GIN] 2024/01/12 - 00:53:21 | 200 |         1m49s |       127.0.0.1 | POST     "/api/generate"
2024/01/12 00:53:46 ext_server_common.go:158: loaded 0 images
[GIN] 2024/01/12 - 00:55:18 | 200 |         1m32s |       127.0.0.1 | POST     "/api/generate"

It seems that it's not detecting the GPU libraries?

— otavio-silva, Jan 12 '24

Seems like it! Would it be possible to run:

find / -name 'libnvidia-ml.so*' 2>/dev/null

to see where they might be on your system? That would help us pick them up in paths Ollama doesn't expect yet. Thanks so much!
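
If it's easier, the same search can also be run inside the container (a sketch, assuming the container name from your earlier steps; the grep just checks whether the dynamic linker already knows about the library):

podman exec ollama-20 sh -c "find / -name 'libnvidia-ml.so*' 2>/dev/null; ldconfig -p | grep -i nvidia"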

— jmorganca, Jan 12 '24

@jmorganca since I'm running the container on Windows 11, I don't know how to get that info directly, but here are some things I found out:

  1. Running the command inside the podman machine (a custom Fedora WSL distro) gives the output:
/usr/lib/wsl/lib/libnvidia-ml.so.1
/usr/lib/wsl/drivers/nvmii.inf_amd64_649395c294ad3a68/libnvidia-ml.so.1
  2. Running the command podman inspect ollama-20 gives the output:
[
     {
          "Id": "e77ec25f0ed3c89b59354544a3c3bf7775cf5f64a27c9f20ccc00a70d87478a4",
          "Created": "2024-01-11T21:51:25.715568406-03:00",
          "Path": "/bin/ollama",
          "Args": [
               "serve"
          ],
          "State": {
               "OciVersion": "1.1.0+dev",
               "Status": "running",
               "Running": true,
               "Paused": false,
               "Restarting": false,
               "OOMKilled": false,
               "Dead": false,
               "Pid": 1398,
               "ConmonPid": 1396,
               "ExitCode": 0,
               "Error": "",
               "StartedAt": "2024-01-11T21:51:25.87846855-03:00",
               "FinishedAt": "0001-01-01T00:00:00Z",
               "Health": {
                    "Status": "",
                    "FailingStreak": 0,
                    "Log": null
               },
               "CgroupPath": "/libpod_parent/libpod-e77ec25f0ed3c89b59354544a3c3bf7775cf5f64a27c9f20ccc00a70d87478a4",
               "CheckpointedAt": "0001-01-01T00:00:00Z",
               "RestoredAt": "0001-01-01T00:00:00Z"
          },
          "Image": "caef24cbf95b61135d0b57825f56e661786338b09d43a429ab05348f91ddb982",
          "ImageDigest": "sha256:74b2ac9790e07ff5871398a75eee42b758c7353ecc6579a4108a4b0de9bd78b2",
          "ImageName": "docker.io/ollama/ollama:0.1.20",
          "Rootfs": "",
          "Pod": "",
          "ResolvConfPath": "/run/containers/storage/overlay-containers/e77ec25f0ed3c89b59354544a3c3bf7775cf5f64a27c9f20ccc00a70d87478a4/userdata/resolv.conf",
          "HostnamePath": "/run/containers/storage/overlay-containers/e77ec25f0ed3c89b59354544a3c3bf7775cf5f64a27c9f20ccc00a70d87478a4/userdata/hostname",
          "HostsPath": "/run/containers/storage/overlay-containers/e77ec25f0ed3c89b59354544a3c3bf7775cf5f64a27c9f20ccc00a70d87478a4/userdata/hosts",
          "StaticDir": "/var/lib/containers/storage/overlay-containers/e77ec25f0ed3c89b59354544a3c3bf7775cf5f64a27c9f20ccc00a70d87478a4/userdata",
          "OCIConfigPath": "/var/lib/containers/storage/overlay-containers/e77ec25f0ed3c89b59354544a3c3bf7775cf5f64a27c9f20ccc00a70d87478a4/userdata/config.json",
          "OCIRuntime": "crun",
          "ConmonPidFile": "/run/containers/storage/overlay-containers/e77ec25f0ed3c89b59354544a3c3bf7775cf5f64a27c9f20ccc00a70d87478a4/userdata/conmon.pid",
          "PidFile": "/run/containers/storage/overlay-containers/e77ec25f0ed3c89b59354544a3c3bf7775cf5f64a27c9f20ccc00a70d87478a4/userdata/pidfile",
          "Name": "ollama-20",
          "RestartCount": 0,
          "Driver": "overlay",
          "MountLabel": "",
          "ProcessLabel": "",
          "AppArmorProfile": "",
          "EffectiveCaps": [
               "CAP_CHOWN",
               "CAP_DAC_OVERRIDE",
               "CAP_FOWNER",
               "CAP_FSETID",
               "CAP_KILL",
               "CAP_NET_BIND_SERVICE",
               "CAP_SETFCAP",
               "CAP_SETGID",
               "CAP_SETPCAP",
               "CAP_SETUID",
               "CAP_SYS_CHROOT"
          ],
          "BoundingCaps": [
               "CAP_CHOWN",
               "CAP_DAC_OVERRIDE",
               "CAP_FOWNER",
               "CAP_FSETID",
               "CAP_KILL",
               "CAP_NET_BIND_SERVICE",
               "CAP_SETFCAP",
               "CAP_SETGID",
               "CAP_SETPCAP",
               "CAP_SETUID",
               "CAP_SYS_CHROOT"
          ],
          "ExecIDs": [
               "0d3ae09071b4ce63175a698ce6f5167263810be396d0f54d598cdc9f2f0ff069"
          ],
          "GraphDriver": {
               "Name": "overlay",
               "Data": {
                    "LowerDir": "/var/lib/containers/storage/overlay/fd457113597976542c1c6a4cff35f07a3223eaffb8de6858c5fe279473e0d0b5/diff:/var/lib/containers/storage/overlay/10703e188bf6cb913c3417c998d109ba94518f4046a34aec2020220b5862217c/diff:/var/lib/containers/storage/overlay/a1360aae5271bbbf575b4057cb4158dbdfbcae76698189b55fb1039bc0207400/diff",
                    "MergedDir": "/var/lib/containers/storage/overlay/62971b014a2ec336a98cc0b014e3c5203278e76155a17e90325998c0076ae705/merged",
                    "UpperDir": "/var/lib/containers/storage/overlay/62971b014a2ec336a98cc0b014e3c5203278e76155a17e90325998c0076ae705/diff",
                    "WorkDir": "/var/lib/containers/storage/overlay/62971b014a2ec336a98cc0b014e3c5203278e76155a17e90325998c0076ae705/work"
               }
          },
          "Mounts": [
               {
                    "Type": "bind",
                    "Source": "/mnt/c/Users/otavi/.ollama",
                    "Destination": "/root/.ollama",
                    "Driver": "",
                    "Mode": "",
                    "Options": [
                         "rbind"
                    ],
                    "RW": true,
                    "Propagation": "rprivate"
               }
          ],
          "Dependencies": [],
          "NetworkSettings": {
               "EndpointID": "",
               "Gateway": "10.88.0.1",
               "IPAddress": "10.88.0.4",
               "IPPrefixLen": 16,
               "IPv6Gateway": "",
               "GlobalIPv6Address": "",
               "GlobalIPv6PrefixLen": 0,
               "MacAddress": "d6:5c:3e:e7:f7:5a",
               "Bridge": "",
               "SandboxID": "",
               "HairpinMode": false,
               "LinkLocalIPv6Address": "",
               "LinkLocalIPv6PrefixLen": 0,
               "Ports": {
                    "11434/tcp": [
                         {
                              "HostIp": "",
                              "HostPort": "11434"
                         }
                    ]
               },
               "SandboxKey": "/run/netns/netns-b991c219-0147-f0a6-ab39-60852603f179",
               "Networks": {
                    "podman": {
                         "EndpointID": "",
                         "Gateway": "10.88.0.1",
                         "IPAddress": "10.88.0.4",
                         "IPPrefixLen": 16,
                         "IPv6Gateway": "",
                         "GlobalIPv6Address": "",
                         "GlobalIPv6PrefixLen": 0,
                         "MacAddress": "d6:5c:3e:e7:f7:5a",
                         "NetworkID": "podman",
                         "DriverOpts": null,
                         "IPAMConfig": null,
                         "Links": null,
                         "Aliases": [
                              "e77ec25f0ed3"
                         ]
                    }
               }
          },
          "Namespace": "",
          "IsInfra": false,
          "IsService": false,
          "KubeExitCodePropagation": "invalid",
          "lockNumber": 0,
          "Config": {
               "Hostname": "e77ec25f0ed3",
               "Domainname": "",
               "User": "",
               "AttachStdin": false,
               "AttachStdout": false,
               "AttachStderr": false,
               "Tty": false,
               "OpenStdin": false,
               "StdinOnce": false,
               "Env": [
                    "OLLAMA_HOST=0.0.0.0",
                    "LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64",
                    "NVIDIA_DRIVER_CAPABILITIES=compute,utility",
                    "PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                    "container=podman",
                    "HOME=/root",
                    "HOSTNAME=e77ec25f0ed3"
               ],
               "Cmd": [
                    "serve"
               ],
               "Image": "docker.io/ollama/ollama:0.1.20",
               "Volumes": null,
               "WorkingDir": "/",
               "Entrypoint": "/bin/ollama",
               "OnBuild": null,
               "Labels": {
                    "org.opencontainers.image.ref.name": "ubuntu",
                    "org.opencontainers.image.version": "22.04"
               },
               "Annotations": {
                    "io.container.manager": "libpod",
                    "io.podman.annotations.label": "disable",
                    "org.opencontainers.image.stopSignal": "15"
               },
               "StopSignal": 15,
               "HealthcheckOnFailureAction": "none",
               "CreateCommand": [
                    "C:\\Users\\otavi\\scoop\\apps\\podman\\current\\podman.exe",
                    "run",
                    "--device",
                    "nvidia.com/gpu=all",
                    "--security-opt",
                    "label=disable",
                    "--detach",
                    "--volume",
                    ".ollama:/root/.ollama",
                    "-p",
                    "11434:11434",
                    "--name",
                    "ollama-20",
                    "ollama/ollama:0.1.20"
               ],
               "Umask": "0022",
               "Timeout": 0,
               "StopTimeout": 10,
               "Passwd": true,
               "sdNotifyMode": "container"
          },
          "HostConfig": {
               "Binds": [
                    "/mnt/c/Users/otavi/.ollama:/root/.ollama:rw,rprivate,rbind"
               ],
               "CgroupManager": "cgroupfs",
               "CgroupMode": "host",
               "ContainerIDFile": "",
               "LogConfig": {
                    "Type": "journald",
                    "Config": null,
                    "Path": "",
                    "Tag": "",
                    "Size": "0B"
               },
               "NetworkMode": "bridge",
               "PortBindings": {
                    "11434/tcp": [
                         {
                              "HostIp": "",
                              "HostPort": "11434"
                         }
                    ]
               },
               "RestartPolicy": {
                    "Name": "",
                    "MaximumRetryCount": 0
               },
               "AutoRemove": false,
               "VolumeDriver": "",
               "VolumesFrom": null,
               "CapAdd": [],
               "CapDrop": [],
               "Dns": [],
               "DnsOptions": [],
               "DnsSearch": [],
               "ExtraHosts": [],
               "GroupAdd": [],
               "IpcMode": "shareable",
               "Cgroup": "",
               "Cgroups": "default",
               "Links": null,
               "OomScoreAdj": 0,
               "PidMode": "private",
               "Privileged": false,
               "PublishAllPorts": false,
               "ReadonlyRootfs": false,
               "SecurityOpt": [
                    "label=disable"
               ],
               "Tmpfs": {},
               "UTSMode": "private",
               "UsernsMode": "",
               "ShmSize": 65536000,
               "Runtime": "oci",
               "ConsoleSize": [
                    0,
                    0
               ],
               "Isolation": "",
               "CpuShares": 0,
               "Memory": 0,
               "NanoCpus": 0,
               "CgroupParent": "",
               "BlkioWeight": 0,
               "BlkioWeightDevice": null,
               "BlkioDeviceReadBps": null,
               "BlkioDeviceWriteBps": null,
               "BlkioDeviceReadIOps": null,
               "BlkioDeviceWriteIOps": null,
               "CpuPeriod": 0,
               "CpuQuota": 0,
               "CpuRealtimePeriod": 0,
               "CpuRealtimeRuntime": 0,
               "CpusetCpus": "",
               "CpusetMems": "",
               "Devices": [
                    {
                         "PathOnHost": "/dev/dxg",
                         "PathInContainer": "/dev/dxg",
                         "CgroupPermissions": ""
                    }
               ],
               "DiskQuota": 0,
               "KernelMemory": 0,
               "MemoryReservation": 0,
               "MemorySwap": 0,
               "MemorySwappiness": 0,
               "OomKillDisable": false,
               "PidsLimit": 2048,
               "Ulimits": [
                    {
                         "Name": "RLIMIT_NPROC",
                         "Soft": 4194304,
                         "Hard": 4194304
                    }
               ],
               "CpuCount": 0,
               "CpuPercent": 0,
               "IOMaximumIOps": 0,
               "IOMaximumBandwidth": 0,
               "CgroupConf": null
          }
     }
]

It seems relevant that PATH and LD_LIBRARY_PATH include the NVIDIA and CUDA directories.
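
To double-check whether those directories actually exist inside the container, something like this should work (a sketch, using the paths from the Env section above):

podman exec ollama-20 ls -l /usr/local/nvidia/lib /usr/local/nvidia/lib64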

— otavio-silva, Jan 12 '24

I did not include the rest of the output of the find command because it was taking a while to complete, but it also found the following locations:

/mnt/c/Windows/System32/DriverStore/FileRepository/nvmii.inf_amd64_649395c294ad3a68/libnvidia-ml.so.1
/mnt/c/Windows/System32/lxss/lib/libnvidia-ml.so.1

otavio-silva avatar Jan 12 '24 01:01 otavio-silva

Thanks so much @otavio-silva – looking into this!

jmorganca avatar Jan 12 '24 05:01 jmorganca

I've got some fixes already merged into main that will be in the next release (0.1.21), which will most likely resolve the difficulty discovering the nvidia-ml library. It may be a few days before we ship the next release, but if you'd like to try it out, I've pushed a container image to Docker Hub: dhiltgen/ollama:latest

If you do try, let me know how it goes. If it doesn't use the GPU as expected, please send the early log messages.

docker run --rm -it --gpus all dhiltgen/ollama:latest

For example, if I don't have a GPU present, the output looks something like this:

2024/01/12 17:19:31 routes.go:933: Listening on [::]:11434 (version 0.1.21-dh)
2024/01/12 17:19:31 payload_common.go:134: Dynamic LLM libraries [cpu_avx cpu_avx2 cuda_v11 cpu]
2024/01/12 17:19:31 payload_common.go:135: Override detection logic by setting OLLAMA_LLM_LIBRARY
2024/01/12 17:19:31 gpu.go:88: Detecting GPU type
2024/01/12 17:19:31 gpu.go:208: Searching for GPU management library libnvidia-ml.so
2024/01/12 17:19:31 gpu.go:253: Discovered GPU libraries: []
2024/01/12 17:19:31 gpu.go:208: Searching for GPU management library librocm_smi64.so
2024/01/12 17:19:31 gpu.go:253: Discovered GPU libraries: []
2024/01/12 17:19:31 cpu_common.go:18: CPU does not have vector extensions
2024/01/12 17:19:31 routes.go:956: no GPU detected

If I do have a GPU present, the output looks like this:

2024/01/12 17:27:03 routes.go:933: Listening on [::]:11434 (version 0.1.21-dh)
2024/01/12 17:27:04 payload_common.go:134: Dynamic LLM libraries [cpu_avx cpu_avx2 cuda_v11 cpu]
2024/01/12 17:27:04 payload_common.go:135: Override detection logic by setting OLLAMA_LLM_LIBRARY
2024/01/12 17:27:04 gpu.go:88: Detecting GPU type
2024/01/12 17:27:04 gpu.go:208: Searching for GPU management library libnvidia-ml.so
2024/01/12 17:27:04 gpu.go:253: Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.545.23.08]
2024/01/12 17:27:04 gpu.go:94: Nvidia GPU detected
2024/01/12 17:27:04 gpu.go:135: CUDA Compute Capability detected: 7.5

dhiltgen avatar Jan 12 '24 17:01 dhiltgen

@dhiltgen tried the image on Docker Hub using the command podman run --device nvidia.com/gpu=all --security-opt label=disable --detach --volume .ollama:/root/.ollama -p 11434:11434 --name ollama-21-pre dhiltgen/ollama:latest and then podman exec -it ollama-21-pre ollama run llama2-uncensored, and got the same error from the start of the issue:

Error: Unable to load dynamic library: Unable to load dynamic server library: /tmp/ollama2216054073/cpu_avx2/libext_server.so: undefined symbol: _ZTVN10__cxxabiv117__

The logs are as follows:

2024/01/12 17:37:20 images.go:809: total blobs: 31
2024/01/12 17:37:21 images.go:816: total unused blobs removed: 0
2024/01/12 17:37:21 routes.go:933: Listening on [::]:11434 (version 0.1.21-dh)
2024/01/12 17:37:21 payload_common.go:134: Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11]
2024/01/12 17:37:21 payload_common.go:135: Override detection logic by setting OLLAMA_LLM_LIBRARY
2024/01/12 17:37:21 gpu.go:88: Detecting GPU type
2024/01/12 17:37:21 gpu.go:208: Searching for GPU management library libnvidia-ml.so
2024/01/12 17:37:21 gpu.go:253: Discovered GPU libraries: []
2024/01/12 17:37:21 gpu.go:208: Searching for GPU management library librocm_smi64.so
2024/01/12 17:37:21 gpu.go:253: Discovered GPU libraries: []
2024/01/12 17:37:21 cpu_common.go:11: CPU has AVX2
2024/01/12 17:37:21 routes.go:956: no GPU detected
[GIN] 2024/01/12 - 17:37:54 | 200 |      16.775µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/01/12 - 17:37:55 | 200 |  260.774745ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2024/01/12 - 17:38:13 | 200 |      12.595µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/01/12 - 17:38:13 | 200 |   15.523178ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/01/12 - 17:38:13 | 200 |   12.878023ms |       127.0.0.1 | POST     "/api/show"
2024/01/12 17:38:29 cpu_common.go:11: CPU has AVX2
2024/01/12 17:38:29 cpu_common.go:11: CPU has AVX2
2024/01/12 17:38:29 llm.go:70: GPU not available, falling back to CPU
2024/01/12 17:38:29 cpu_common.go:11: CPU has AVX2
2024/01/12 17:38:29 dyn_ext_server.go:384: Updating LD_LIBRARY_PATH to /tmp/ollama2216054073/cpu_avx2:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
2024/01/12 17:38:29 llm.go:144: Failed to load dynamic library /tmp/ollama2216054073/cpu_avx2/libext_server.so  Unable to load dynamic library: Unable to load dynamic server library: /tmp/ollama2216054073/cpu_avx2/libext_server.so: undefined symbol: _ZTVN10__cxxabiv117__
[GIN] 2024/01/12 - 17:38:29 | 500 | 16.710503883s |       127.0.0.1 | POST     "/api/generate"

I think it's relevant to note that podman exec -it ollama-21-pre nvidia-smi gives the following:

Fri Jan 12 17:37:36 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.36                 Driver Version: 546.33       CUDA Version: 12.3     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3080 ...    On  | 00000000:01:00.0 Off |                  N/A |
| N/A   53C    P0              32W / 175W |      0MiB / 16384MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+

otavio-silva avatar Jan 12 '24 17:01 otavio-silva

Thanks for trying!

Let me think about how best to approach finding the root cause for this issue. I may need to create a more verbose debug build that dumps out a lot more discovery information to try to understand what the bug is.

dhiltgen avatar Jan 12 '24 17:01 dhiltgen

@dhiltgen let me know if there's anything I can do to help.

otavio-silva avatar Jan 12 '24 18:01 otavio-silva

We've got a few more things we want to merge before 0.1.21 is ready, but once we have a pre-release, I'll generate a more verbose Docker image that will hopefully just work; worst case, it will yield more information about what it tried so we can get to the root cause.

dhiltgen avatar Jan 15 '24 20:01 dhiltgen

The pre-release for 0.1.21 should be out shortly. I've pushed an updated image to Docker Hub that can report a little more debugging information, which might help us understand what it's trying and failing to load. You can give it a try with something along these lines:

docker run --rm -it --gpus all -e OLLAMA_DEBUG=1 dhiltgen/ollama:0.1.21-rc

Hopefully it will just work, but if not, please paste the log output into this issue so I can see what it's trying.

dhiltgen avatar Jan 18 '24 22:01 dhiltgen

@dhiltgen just tested it; it works, but it's not using the GPU. The logs are as follows:

time=2024-01-18T22:51:44.392Z level=DEBUG source=/go/src/github.com/jmorganca/ollama/server/routes.go:900 msg="Debug logging enabled"
time=2024-01-18T22:51:44.407Z level=INFO source=/go/src/github.com/jmorganca/ollama/server/images.go:810 msg="total blobs: 31"
time=2024-01-18T22:51:44.796Z level=INFO source=/go/src/github.com/jmorganca/ollama/server/images.go:817 msg="total unused blobs removed: 0"
time=2024-01-18T22:51:45.022Z level=INFO source=/go/src/github.com/jmorganca/ollama/server/routes.go:924 msg="Listening on [::]:11434 (version 0.1.21-rc)"
time=2024-01-18T22:51:45.022Z level=INFO source=/go/src/github.com/jmorganca/ollama/llm/payload_common.go:106 msg="Extracting dynamic libraries..."
time=2024-01-18T22:52:27.755Z level=INFO source=/go/src/github.com/jmorganca/ollama/llm/payload_common.go:145 msg="Dynamic LLM libraries [cuda_v11 cpu_avx2 cpu_avx cpu]"
time=2024-01-18T22:52:27.755Z level=DEBUG source=/go/src/github.com/jmorganca/ollama/llm/payload_common.go:146 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-01-18T22:52:27.755Z level=INFO source=/go/src/github.com/jmorganca/ollama/gpu/gpu.go:89 msg="Detecting GPU type"
time=2024-01-18T22:52:27.755Z level=INFO source=/go/src/github.com/jmorganca/ollama/gpu/gpu.go:209 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-01-18T22:52:27.755Z level=DEBUG source=/go/src/github.com/jmorganca/ollama/gpu/gpu.go:227 msg="gpu management search paths: [/usr/local/cuda/lib64/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/libnvidia-ml.so* /usr/lib/wsl/lib/libnvidia-ml.so* /opt/cuda/lib64/libnvidia-ml.so* /opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so* /usr/lib*/libnvidia-ml.so* /usr/local/lib*/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/libnvidia-ml.so* /usr/local/nvidia/lib/libnvidia-ml.so* /usr/local/nvidia/lib64/libnvidia-ml.so*]"
time=2024-01-18T22:52:27.756Z level=INFO source=/go/src/github.com/jmorganca/ollama/gpu/gpu.go:255 msg="Discovered GPU libraries: []"
time=2024-01-18T22:52:27.756Z level=INFO source=/go/src/github.com/jmorganca/ollama/gpu/gpu.go:209 msg="Searching for GPU management library librocm_smi64.so"
time=2024-01-18T22:52:27.756Z level=DEBUG source=/go/src/github.com/jmorganca/ollama/gpu/gpu.go:227 msg="gpu management search paths: [/opt/rocm*/lib*/librocm_smi64.so* /usr/local/nvidia/lib/librocm_smi64.so* /usr/local/nvidia/lib64/librocm_smi64.so*]"
time=2024-01-18T22:52:27.756Z level=INFO source=/go/src/github.com/jmorganca/ollama/gpu/gpu.go:255 msg="Discovered GPU libraries: []"
time=2024-01-18T22:52:27.756Z level=INFO source=/go/src/github.com/jmorganca/ollama/gpu/cpu_common.go:11 msg="CPU has AVX2"
time=2024-01-18T22:52:27.756Z level=INFO source=/go/src/github.com/jmorganca/ollama/server/routes.go:947 msg="no GPU detected"
[GIN] 2024/01/18 - 22:52:27 | 200 |       22.15µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/01/18 - 22:52:27 | 200 |   48.414843ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/01/18 - 22:52:27 | 200 |   21.743126ms |       127.0.0.1 | POST     "/api/show"
time=2024-01-18T22:52:48.891Z level=INFO source=/go/src/github.com/jmorganca/ollama/gpu/cpu_common.go:11 msg="CPU has AVX2"
time=2024-01-18T22:52:48.891Z level=INFO source=/go/src/github.com/jmorganca/ollama/gpu/cpu_common.go:11 msg="CPU has AVX2"
time=2024-01-18T22:52:48.891Z level=INFO source=/go/src/github.com/jmorganca/ollama/llm/llm.go:76 msg="GPU not available, falling back to CPU"
time=2024-01-18T22:52:48.898Z level=INFO source=/go/src/github.com/jmorganca/ollama/llm/dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama1302817813/cpu_avx2/libext_server.so"
time=2024-01-18T22:52:48.898Z level=INFO source=/go/src/github.com/jmorganca/ollama/llm/dyn_ext_server.go:139 msg="Initializing llama server"
llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256:6aa74acf170f8fb8e6ff8dae9bc9ea918d3a14b6ba95d0b0287da31b09a4848c (version GGUF V2)
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = georgesung
llama_model_loader: - kv   2:                       llama.context_length u32              = 2048
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 11008
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 2
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  15:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  17:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  18:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V2
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 2048
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 32
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 4096
llm_load_print_meta: n_embd_v_gqa     = 4096
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 11008
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 2048
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 6.74 B
llm_load_print_meta: model size       = 3.56 GiB (4.54 BPW)
llm_load_print_meta: general.name     = georgesung
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: PAD token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size       =    0.11 MiB
llm_load_tensors: system memory used  = 3647.98 MiB
..................................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: KV self size  = 1024.00 MiB, K (f16):  512.00 MiB, V (f16):  512.00 MiB
llama_build_graph: non-view tensors processed: 676/676
llama_new_context_with_model: compute buffer total size = 159.19 MiB
time=2024-01-18T22:54:55.854Z level=INFO source=/go/src/github.com/jmorganca/ollama/llm/dyn_ext_server.go:147 msg="Starting llama main loop"
[GIN] 2024/01/18 - 22:54:55 | 200 |         2m28s |       127.0.0.1 | POST     "/api/chat"
time=2024-01-18T22:55:54.717Z level=INFO source=/go/src/github.com/jmorganca/ollama/llm/dyn_ext_server.go:161 msg="loaded 0 images"
[GIN] 2024/01/18 - 22:56:04 | 200 | 10.188464677s |       127.0.0.1 | POST     "/api/chat"
time=2024-01-18T22:56:24.122Z level=INFO source=/go/src/github.com/jmorganca/ollama/llm/dyn_ext_server.go:161 msg="loaded 0 images"
[GIN] 2024/01/18 - 22:56:39 | 200 | 14.927684091s |       127.0.0.1 | POST     "/api/chat"

otavio-silva avatar Jan 18 '24 22:01 otavio-silva

Also, just to confirm that the container can see the GPU, running podman exec -it ollama-21-pre nvidia-smi -L gives:

GPU 0: NVIDIA GeForce RTX 3080 Ti Laptop GPU (UUID: GPU-40185f85-797c-c692-67ed-47684f169670)

otavio-silva avatar Jan 18 '24 23:01 otavio-silva

Strange. The log line "gpu management search paths" shows the globs we're trying to match, and one of those is /usr/lib/wsl/lib/libnvidia-ml.so*, which should have matched the path you mentioned in your comment, /usr/lib/wsl/lib/libnvidia-ml.so.1.

The next line, "Discovered GPU libraries", shows the files we found via those wildcard searches before we try to actually load them, and the empty list there implies none of the globs matched a file. You could exec into the container, run ls -l /usr/lib/wsl/lib/libnvidia-ml.so*, and also look at the parent directories all the way up to the root and check their ownership/permissions (a quick sketch of those checks follows). Also confirm which user ollama serve is running as. I'm wondering if maybe there's a user or permission problem where some directory isn't readable, causing the glob to fail even though the file itself is readable?
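
For example, something along these lines (assuming the container is still named ollama-21-pre; podman top's descriptor names can vary slightly by version):

# Does the glob match inside the container? (sh -c so the glob expands in the container, not on the host)
podman exec -it ollama-21-pre sh -c 'ls -l /usr/lib/wsl/lib/libnvidia-ml.so*'
# Are all the parent directories readable/traversable?
podman exec -it ollama-21-pre ls -ld / /usr /usr/lib /usr/lib/wsl /usr/lib/wsl/lib
# Which user is ollama serve running as?
podman top ollama-21-pre user comm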

Another thing to try (not as a fix, but as an experiment) is to force it to load the cuda_v11 LLM library even though it can't discover the GPU. That bypasses the GPU memory checks and isn't really a solution (try to load a large model and it will crash), but it might show us whether the GPU-enabled code works once we get past the management library loading failure.

docker run --rm -it --gpus all -e OLLAMA_DEBUG=1 -e OLLAMA_LLM_LIBRARY=cuda_v11 dhiltgen/ollama:0.1.21-rc

dhiltgen avatar Jan 19 '24 00:01 dhiltgen

@dhiltgen upon using the command from here, but now from inside the container with podman exec -it ollama-pre-21 find / -name 'libnvidia-ml.so*' 2>/dev/null, it returns nothing. Running the container from inside the podman machine (the WSL2 Fedora distro) with the command:

podman run --device nvidia.com/gpu=all --security-opt label=disable --detach --volume /mnt/c/Users/otavi/.ollama:/root/.ollama --volume /usr/lib/wsl/lib/:/usr/lib/wsl/lib/ -p 11434:11434 -e OLLAMA_DEBUG=1 --name ollama-21-pre dhiltgen/ollama:0.1.21-rc

gives the output:

time=2024-01-19T02:51:59.943Z level=DEBUG source=/go/src/github.com/jmorganca/ollama/server/routes.go:900 msg="Debug logging enabled"
time=2024-01-19T02:51:59.946Z level=INFO source=/go/src/github.com/jmorganca/ollama/server/images.go:810 msg="total blobs: 31"
time=2024-01-19T02:52:00.078Z level=INFO source=/go/src/github.com/jmorganca/ollama/server/images.go:817 msg="total unused blobs removed: 0"
time=2024-01-19T02:52:00.183Z level=INFO source=/go/src/github.com/jmorganca/ollama/server/routes.go:924 msg="Listening on [::]:11434 (version 0.1.21-rc)"
time=2024-01-19T02:52:00.184Z level=INFO source=/go/src/github.com/jmorganca/ollama/llm/payload_common.go:106 msg="Extracting dynamic libraries..."
time=2024-01-19T02:52:29.874Z level=INFO source=/go/src/github.com/jmorganca/ollama/llm/payload_common.go:145 msg="Dynamic LLM libraries [cpu_avx2 cpu_avx cpu cuda_v11]"
time=2024-01-19T02:52:29.874Z level=DEBUG source=/go/src/github.com/jmorganca/ollama/llm/payload_common.go:146 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-01-19T02:52:29.874Z level=INFO source=/go/src/github.com/jmorganca/ollama/gpu/gpu.go:89 msg="Detecting GPU type"
time=2024-01-19T02:52:29.874Z level=INFO source=/go/src/github.com/jmorganca/ollama/gpu/gpu.go:209 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-01-19T02:52:29.874Z level=DEBUG source=/go/src/github.com/jmorganca/ollama/gpu/gpu.go:227 msg="gpu management search paths: [/usr/local/cuda/lib64/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/libnvidia-ml.so* /usr/lib/wsl/lib/libnvidia-ml.so* /opt/cuda/lib64/libnvidia-ml.so* /opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so* /usr/lib*/libnvidia-ml.so* /usr/local/lib*/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/libnvidia-ml.so* /usr/local/nvidia/lib/libnvidia-ml.so* /usr/local/nvidia/lib64/libnvidia-ml.so*]"
time=2024-01-19T02:52:29.876Z level=INFO source=/go/src/github.com/jmorganca/ollama/gpu/gpu.go:255 msg="Discovered GPU libraries: [/usr/lib/wsl/lib/libnvidia-ml.so.1]"
time=2024-01-19T02:52:31.975Z level=INFO source=/go/src/github.com/jmorganca/ollama/gpu/gpu.go:95 msg="Nvidia GPU detected"
time=2024-01-19T02:52:31.985Z level=INFO source=/go/src/github.com/jmorganca/ollama/gpu/gpu.go:136 msg="CUDA Compute Capability detected: 8.6"

It's important to note that the --volume /usr/lib/wsl/lib/:/usr/lib/wsl/lib/ portion of the command is what actually does the magic; it will not work otherwise. The problem now seems to be that the container does not have libnvidia-ml.so by itself, and I don't know how to fix that.

otavio-silva avatar Jan 19 '24 03:01 otavio-silva

The problem now seems to be that the container does not have libnvidia-ml.so by itself, and I don't know how to fix that.

This is starting to seem like a variation between podman's and Docker's GPU support. I don't have a podman system handy, but this library gets automatically mounted into the image when you use the --gpus flag on Docker. For example:

Without GPUs passed in

% docker run --rm -it --entrypoint find dhiltgen/ollama:0.1.21-rc / -name libnvidia-ml.so\*
%

With GPUs passed in

% docker run --rm -it --gpus all --entrypoint find dhiltgen/ollama:0.1.21-rc / -name libnvidia-ml.so\*
/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.545.23.08
/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1
%

I don't believe we're "supposed" to build in this library, as it needs to match the driver on the underlying system, so if we embedded it into the image it would only work for a narrow band of drivers.

dhiltgen avatar Jan 19 '24 17:01 dhiltgen

Digging around in the nvidia container runtime docs, I'm wondering if you missed this setup step: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html

Grepping through the config on my Linux system, I can see this is where the library gets wired up to be mounted.

% grep nvidia-ml /etc/cdi/nvidia.yaml
  - containerPath: /lib/x86_64-linux-gnu/libnvidia-ml.so.545.23.08
    hostPath: /lib/x86_64-linux-gnu/libnvidia-ml.so.545.23.08
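
If the spec is missing or out of date, the linked docs regenerate it with something like:

sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml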

dhiltgen avatar Jan 19 '24 17:01 dhiltgen

@dhiltgen I already have the NVIDIA Container Toolkit configured. I have to use Podman on Windows because the Docker binary that has GPU support is proprietary and ships with the Docker Desktop software. Running grep nvidia-ml /etc/cdi/nvidia.yaml gives the output:

  - containerPath: /usr/lib/wsl/drivers/nvmii.inf_amd64_93ca473c6557c9ce/libnvidia-ml.so.1
    hostPath: /usr/lib/wsl/drivers/nvmii.inf_amd64_93ca473c6557c9ce/libnvidia-ml.so.1
  - containerPath: /usr/lib/wsl/drivers/nvmii.inf_amd64_93ca473c6557c9ce/libnvidia-ml_loader.so
    hostPath: /usr/lib/wsl/drivers/nvmii.inf_amd64_93ca473c6557c9ce/libnvidia-ml_loader.so

Which is similar to yours, but with a weird name. They were generated by the nvidia-ctk cdi generate command. The full contents of nvidia.yaml are as follows:

---
cdiVersion: 0.3.0
containerEdits:
  hooks:
  - args:
    - nvidia-ctk
    - hook
    - create-symlinks
    - --link
    - /usr/lib/wsl/drivers/nvmii.inf_amd64_93ca473c6557c9ce/nvidia-smi::/usr/bin/nvidia-smi
    hookName: createContainer
    path: /usr/bin/nvidia-ctk
  - args:
    - nvidia-ctk
    - hook
    - update-ldcache
    - --folder
    - /usr/lib/wsl/drivers/nvmii.inf_amd64_93ca473c6557c9ce
    - --folder
    - /usr/lib/wsl/lib
    hookName: createContainer
    path: /usr/bin/nvidia-ctk
  mounts:
  - containerPath: /usr/lib/wsl/lib/libdxcore.so
    hostPath: /usr/lib/wsl/lib/libdxcore.so
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/wsl/drivers/nvmii.inf_amd64_93ca473c6557c9ce/libcuda.so.1.1
    hostPath: /usr/lib/wsl/drivers/nvmii.inf_amd64_93ca473c6557c9ce/libcuda.so.1.1
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/wsl/drivers/nvmii.inf_amd64_93ca473c6557c9ce/libcuda_loader.so
    hostPath: /usr/lib/wsl/drivers/nvmii.inf_amd64_93ca473c6557c9ce/libcuda_loader.so
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/wsl/drivers/nvmii.inf_amd64_93ca473c6557c9ce/libnvidia-ml.so.1
    hostPath: /usr/lib/wsl/drivers/nvmii.inf_amd64_93ca473c6557c9ce/libnvidia-ml.so.1
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/wsl/drivers/nvmii.inf_amd64_93ca473c6557c9ce/libnvidia-ml_loader.so
    hostPath: /usr/lib/wsl/drivers/nvmii.inf_amd64_93ca473c6557c9ce/libnvidia-ml_loader.so
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/wsl/drivers/nvmii.inf_amd64_93ca473c6557c9ce/libnvidia-ptxjitcompiler.so.1
    hostPath: /usr/lib/wsl/drivers/nvmii.inf_amd64_93ca473c6557c9ce/libnvidia-ptxjitcompiler.so.1
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/wsl/drivers/nvmii.inf_amd64_93ca473c6557c9ce/nvcubins.bin
    hostPath: /usr/lib/wsl/drivers/nvmii.inf_amd64_93ca473c6557c9ce/nvcubins.bin
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/wsl/drivers/nvmii.inf_amd64_93ca473c6557c9ce/nvidia-smi
    hostPath: /usr/lib/wsl/drivers/nvmii.inf_amd64_93ca473c6557c9ce/nvidia-smi
    options:
    - ro
    - nosuid
    - nodev
    - bind
devices:
- containerEdits:
    deviceNodes:
    - path: /dev/dxg
  name: all
kind: nvidia.com/gpu

Which shows NVIDIA hooks for containers. Maybe Ollama could use those hooks to get the necessary libraries?

otavio-silva avatar Jan 19 '24 19:01 otavio-silva

After some investigation, I figured out that inside the container, /usr/lib/wsl/drivers has a folder called nvmii.inf_amd64_93ca473c6557c9ce, which contains the following:

libcuda.so.1    libcuda_loader.so  libnvidia-ml_loader.so         nvcubins.bin
libcuda.so.1.1  libnvidia-ml.so.1  libnvidia-ptxjitcompiler.so.1  nvidia-smi

Running podman exec -it ollama-21-pre ls /usr/lib/wsl/drivers/nvmii.inf_amd64_93ca473c6557c9ce confirms the result. The weird name changes with each driver update; maybe a wildcard that searches for libnvidia-ml.so* inside the drivers folder could solve the issue? (See the sketch below.)
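
As a rough illustration (not Ollama's actual code), a pattern with a "*" directory element would match that folder; a minimal Go sketch using filepath.Glob, with a hypothetical pattern list:

package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// Hypothetical subset of the search patterns; the "*" path element
	// covers driver folders whose names change with every driver update.
	patterns := []string{
		"/usr/lib/wsl/lib/libnvidia-ml.so*",
		"/usr/lib/wsl/drivers/*/libnvidia-ml.so*",
	}
	for _, pattern := range patterns {
		matches, err := filepath.Glob(pattern)
		if err != nil {
			continue // filepath.Glob only errors on a malformed pattern
		}
		fmt.Printf("%s -> %v\n", pattern, matches)
	}
}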

otavio-silva avatar Jan 19 '24 20:01 otavio-silva

Strange dir pattern, but yes, adding another wildcard to our set is pretty easy. Let me get a PR up and push a docker image for you to test with that new pattern. 🤞

dhiltgen avatar Jan 19 '24 21:01 dhiltgen

OK, give dhiltgen/ollama:0.1.21-rc2 a try. It should now look for /usr/lib/wsl/drivers/*/libnvidia-ml.so* as well.

dhiltgen avatar Jan 19 '24 21:01 dhiltgen

@dhiltgen I'm glad to say it works, as shown by the logs:

time=2024-01-19T21:33:34.124Z level=DEBUG source=/go/src/github.com/jmorganca/ollama/server/routes.go:919 msg="Debug logging enabled"
time=2024-01-19T21:33:34.130Z level=INFO source=/go/src/github.com/jmorganca/ollama/server/images.go:810 msg="total blobs: 31"
time=2024-01-19T21:33:34.337Z level=INFO source=/go/src/github.com/jmorganca/ollama/server/images.go:817 msg="total unused blobs removed: 0"
time=2024-01-19T21:33:34.516Z level=INFO source=/go/src/github.com/jmorganca/ollama/server/routes.go:943 msg="Listening on [::]:11434 (version 0.1.21-rc2)"
time=2024-01-19T21:33:34.517Z level=INFO source=/go/src/github.com/jmorganca/ollama/llm/payload_common.go:106 msg="Extracting dynamic libraries..."
time=2024-01-19T21:33:39.096Z level=INFO source=/go/src/github.com/jmorganca/ollama/llm/payload_common.go:145 msg="Dynamic LLM libraries [cpu_avx cuda_v11 cpu_avx2 cpu]"
time=2024-01-19T21:33:39.096Z level=DEBUG source=/go/src/github.com/jmorganca/ollama/llm/payload_common.go:146 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-01-19T21:33:39.096Z level=INFO source=/go/src/github.com/jmorganca/ollama/gpu/gpu.go:91 msg="Detecting GPU type"
time=2024-01-19T21:33:39.096Z level=INFO source=/go/src/github.com/jmorganca/ollama/gpu/gpu.go:210 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-01-19T21:33:39.096Z level=DEBUG source=/go/src/github.com/jmorganca/ollama/gpu/gpu.go:228 msg="gpu management search paths: [/usr/local/cuda/lib64/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/libnvidia-ml.so* /usr/lib/wsl/lib/libnvidia-ml.so* /usr/lib/wsl/drivers/*/libnvidia-ml.so* /opt/cuda/lib64/libnvidia-ml.so* /opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so* /usr/lib*/libnvidia-ml.so* /usr/local/lib*/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/libnvidia-ml.so* /usr/local/nvidia/lib/libnvidia-ml.so* /usr/local/nvidia/lib64/libnvidia-ml.so*]"
time=2024-01-19T21:33:39.097Z level=INFO source=/go/src/github.com/jmorganca/ollama/gpu/gpu.go:256 msg="Discovered GPU libraries: [/usr/lib/wsl/drivers/nvmii.inf_amd64_93ca473c6557c9ce/libnvidia-ml.so.1]"
time=2024-01-19T21:33:41.180Z level=INFO source=/go/src/github.com/jmorganca/ollama/gpu/gpu.go:96 msg="Nvidia GPU detected"
time=2024-01-19T21:33:41.193Z level=INFO source=/go/src/github.com/jmorganca/ollama/gpu/gpu.go:137 msg="CUDA Compute Capability detected: 8.6"
[GIN] 2024/01/19 - 21:33:44 | 200 |      25.042µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/01/19 - 21:33:44 | 200 |   13.764227ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/01/19 - 21:33:44 | 200 |   14.951497ms |       127.0.0.1 | POST     "/api/show"
time=2024-01-19T21:34:02.974Z level=INFO source=/go/src/github.com/jmorganca/ollama/gpu/gpu.go:137 msg="CUDA Compute Capability detected: 8.6"
time=2024-01-19T21:34:02.974Z level=INFO source=/go/src/github.com/jmorganca/ollama/gpu/gpu.go:137 msg="CUDA Compute Capability detected: 8.6"
time=2024-01-19T21:34:02.974Z level=INFO source=/go/src/github.com/jmorganca/ollama/gpu/cpu_common.go:11 msg="CPU has AVX2"
time=2024-01-19T21:34:02.986Z level=INFO source=/go/src/github.com/jmorganca/ollama/llm/dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama3478254322/cuda_v11/libext_server.so"
time=2024-01-19T21:34:02.986Z level=INFO source=/go/src/github.com/jmorganca/ollama/llm/dyn_ext_server.go:139 msg="Initializing llama server"
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3080 Ti Laptop GPU, compute capability 8.6, VMM: yes
llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256:6aa74acf170f8fb8e6ff8dae9bc9ea918d3a14b6ba95d0b0287da31b09a4848c (version GGUF V2)
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = georgesung
llama_model_loader: - kv   2:                       llama.context_length u32              = 2048
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 11008
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 2
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  15:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  17:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  18:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V2
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 2048
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 32
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 4096
llm_load_print_meta: n_embd_v_gqa     = 4096
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 11008
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 2048
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 6.74 B
llm_load_print_meta: model size       = 3.56 GiB (4.54 BPW)
llm_load_print_meta: general.name     = georgesung
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: PAD token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size       =    0.11 MiB
llm_load_tensors: using CUDA for GPU acceleration
llm_load_tensors: system memory used  =   70.42 MiB
llm_load_tensors: VRAM used           = 3577.55 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
..................................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: VRAM kv self = 1024.00 MB
llama_new_context_with_model: KV self size  = 1024.00 MiB, K (f16):  512.00 MiB, V (f16):  512.00 MiB
llama_build_graph: non-view tensors processed: 676/676
llama_new_context_with_model: compute buffer total size = 159.19 MiB
llama_new_context_with_model: VRAM scratch buffer: 156.00 MiB
llama_new_context_with_model: total VRAM used: 4757.56 MiB (model: 3577.55 MiB, context: 1180.00 MiB)
time=2024-01-19T21:35:30.381Z level=INFO source=/go/src/github.com/jmorganca/ollama/llm/dyn_ext_server.go:147 msg="Starting llama main loop"
[GIN] 2024/01/19 - 21:35:30 | 200 |         1m45s |       127.0.0.1 | POST     "/api/chat"
time=2024-01-19T21:35:58.542Z level=INFO source=/go/src/github.com/jmorganca/ollama/llm/dyn_ext_server.go:161 msg="loaded 0 images"
[GIN] 2024/01/19 - 21:36:03 | 200 |   4.52558864s |       127.0.0.1 | POST     "/api/chat"

otavio-silva avatar Jan 19 '24 21:01 otavio-silva