Whisper v1.7.4 - Docker - GPU
Hi,
- Goal: use the Whisper Docker image on the GPU.
- Issue: the GPU is not used by Whisper.
nvidia-smi is recognized inside the image using:

$ docker run --gpus 1 --rm whisper_gpu:latest nvidia-smi
$ docker run --gpus 1 --rm -it whisper_gpu:latest bash
Additional information: nvidia-container-toolkit is installed and the Docker runtime is configured for Nvidia.
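For reference, the runtime registration follows the standard nvidia-container-toolkit setup, so /etc/docker/daemon.json should look roughly like this (a sketch of the usual layout, not a copy of my file; the exact binary path can differ per install):

{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}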
The following are the logs of nvidia-smi (outside and inside the Docker image):
$ docker run --gpus 1 --rm whisper_gpu:latest nvidia-smi
Sat Jan 25 18:07:38 2025
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.119.04 Driver Version: 450.119.04 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla V100-SXM2... On | 00000000:06:00.0 Off | 0 |
| N/A 32C P0 71W / 300W | 6216MiB / 32510MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
$ docker run --gpus 1 --rm -it whisper_gpu:latest bash
root@3761b2adce04:/app# nvidia-smi
Sat Jan 25 17:50:45 2025
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.119.04 Driver Version: 450.119.04 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla V100-SXM2... On | 00000000:06:00.0 Off | 0 |
| N/A 31C P0 57W / 300W | 6216MiB / 32510MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
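For context, a CUDA-enabled whisper.cpp image would be built along these lines (a minimal sketch following the whisper.cpp README, not my exact Dockerfile; the base-image tag is an assumption and has to be compatible with the host driver, which here reports CUDA 11.0):

FROM nvidia/cuda:11.0.3-devel-ubuntu20.04

RUN apt-get update && apt-get install -y git cmake build-essential

WORKDIR /app
RUN git clone https://github.com/ggerganov/whisper.cpp .

# GGML_CUDA=1 compiles the CUDA backend in; a build without it is
# CPU-only and prints "whisper_backend_init_gpu: no GPU found".
RUN cmake -B build -DGGML_CUDA=1 && \
    cmake --build build -j --config Release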
The following are the logs where use gpu = 1 is set, yet whisper_backend_init_gpu reports no GPU found:
$ docker run -it --runtime=nvidia --gpus 1 --rm \
    -v ./models:/models \
    -v ./audios:/audios \
    -v ./outputs:/outputs \
    whisper_gpu:latest \
    "./build/bin/whisper-cli \
     -t 40 -p 1 \
     -m /models/ggml-medium.bin \
     -f /audios/audio_converted.wav \
     -otxt -of /outputs/audio_converted.wav"
whisper_init_from_file_with_params_no_state: loading model from '/models/ggml-medium.bin'
whisper_init_with_params_no_state: use gpu = 1
whisper_init_with_params_no_state: flash attn = 0
whisper_init_with_params_no_state: gpu_device = 0
whisper_init_with_params_no_state: dtw = 0
whisper_init_with_params_no_state: devices = 1
whisper_init_with_params_no_state: backends = 1
whisper_model_load: loading model
whisper_model_load: n_vocab = 51865
whisper_model_load: n_audio_ctx = 1500
whisper_model_load: n_audio_state = 1024
whisper_model_load: n_audio_head = 16
whisper_model_load: n_audio_layer = 24
whisper_model_load: n_text_ctx = 448
whisper_model_load: n_text_state = 1024
whisper_model_load: n_text_head = 16
whisper_model_load: n_text_layer = 24
whisper_model_load: n_mels = 80
whisper_model_load: ftype = 1
whisper_model_load: qntvr = 0
whisper_model_load: type = 4 (medium)
whisper_model_load: adding 1608 extra tokens
whisper_model_load: n_langs = 99
whisper_model_load: CPU total size = 1533.14 MB
whisper_model_load: model size = 1533.14 MB
whisper_backend_init_gpu: no GPU found
whisper_init_state: kv self size = 50.33 MB
whisper_init_state: kv cross size = 150.99 MB
whisper_init_state: kv pad size = 6.29 MB
whisper_init_state: compute buffer (conv) = 28.55 MB
whisper_init_state: compute buffer (encode) = 170.15 MB
whisper_init_state: compute buffer (cross) = 7.72 MB
whisper_init_state: compute buffer (decode) = 98.19 MB
system_info: n_threads = 40 / 80 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | COREML = 0 | OPENVINO = 0 |
main: processing '/audios/audio_converted.wav' (113168 samples, 7.1 sec), 40 threads, 1 processors, 5 beams + best of 5, lang = fr, task = transcribe, timestamps = 1 ...
[00:00:00.000 --> 00:00:01.000] 1
[00:00:01.000 --> 00:00:02.500] 1
[00:00:02.500 --> 00:00:03.500] 1
[00:00:03.500 --> 00:00:04.500] 1
whisper_print_timings: load time = 919.18 ms
whisper_print_timings: fallbacks = 0 p / 0 h
whisper_print_timings: mel time = 6.32 ms
whisper_print_timings: sample time = 51.54 ms / 92 runs ( 0.56 ms per run)
whisper_print_timings: encode time = 4424.63 ms / 1 runs ( 4424.63 ms per run)
whisper_print_timings: decode time = 0.00 ms / 1 runs ( 0.00 ms per run)
whisper_print_timings: batchd time = 791.80 ms / 90 runs ( 8.80 ms per run)
whisper_print_timings: prompt time = 0.00 ms / 1 runs ( 0.00 ms per run)
whisper_print_timings: total time = 6557.43 ms
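One way to check whether the whisper-cli binary inside the image was compiled with the CUDA backend at all (a diagnostic sketch; a CPU-only build would also print "no GPU found" even though nvidia-smi works):

$ docker run --gpus 1 --rm -it whisper_gpu:latest bash
# a GGML_CUDA build links against the CUDA runtime libraries,
# so they should appear among the dynamic dependencies:
root@container:/app# ldd build/bin/whisper-cli | grep -iE 'cuda|cublas'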
Thanks!