extensions-source
MangaTales: Source request
Source name
MangaTales
Source link
https://www.mangatales.com/
Source language
Arabic
Other details
No response
Acknowledgements
- [X] I have checked that the extension does not already exist by searching the GitHub repository and verified it does not appear in the code base.
- [X] I have searched the existing issues and this is a new ticket, NOT a duplicate or related to another open or closed issue.
- [X] I have written a meaningful title with the source name.
- [X] I will fill out all of the requested information in this form.
Encountered this exact error output when using Ollama on a laptop with an RTX 3070. Ollama was run via Docker Compose and was using the codellama model when I encountered this error. The same error occurred when attempting to use the llama2 model.
@giansegato we've fixed a number of CUDA-related bugs since version 0.1.19. I'm not sure if that will fix the problem you're facing, but please give the latest release a try (make sure to re-pull or specify tag 0.1.22).
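A minimal sketch of how to pick up the newer release with Docker, assuming an ollama/ollama image and a Compose service named ollama (adjust the image and service names to your own setup):

# re-pull the current tag and recreate the container
docker compose pull ollama && docker compose up -d ollama
# or pull an explicit version and pin it as the image tag in docker-compose.yml
docker pull ollama/ollama:0.1.22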
I actually solved this issue on my laptop with a simple driver update. Ollama is now running as expected with no other changes made to the config/setup.
That's great to hear @retrokit-max!
@giansegato can you give that approach a shot as well as upgrading to 0.1.22 and see if your problem is resolved?
@giansegato please let us know if you're still having problems.
Thanks y'all. For the record, I tried again and couldn't reproduce anymore! 🥳
I'm having the same error. Here is my docker-compose service configuration:
ollama_api:
  image: ollama/ollama:latest
  ports:
    - 11434:11434
  volumes:
    - ollama_data:/root/.ollama
  restart: always
  networks:
    traefik:
  labels:
    com.centurylinklabs.watchtower.enable: 'true'
    com.centurylinklabs.watchtower.scope: hertz-lab
    traefik.enable: 'true'
    traefik.http.routers.json2-flatware.rule: Host(`ollamadocker.flatware.hertz-lab.zkm.de`)
    traefik.http.routers.json2-flatware.entryPoints: websecure
    traefik.http.routers.json2-flatware.tls: 'true'
    traefik.http.routers.json2-flatware.tls.certresolver: letsencrypt
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            device_ids: ['0']
            capabilities: [gpu]
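As a quick sanity check on the GPU reservation above (assuming the stack is running and the service is named ollama_api as in the snippet), the GPU should be visible from inside the container:

docker compose exec ollama_api nvidia-smi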
The error logs:
time=2024-03-27T16:23:25.129Z level=INFO source=images.go:806 msg="total blobs: 16"
time=2024-03-27T16:23:25.129Z level=INFO source=images.go:813 msg="total unused blobs removed: 0"
time=2024-03-27T16:23:25.130Z level=INFO source=routes.go:1110 msg="Listening on [::]:11434 (version 0.1.29)"
time=2024-03-27T16:23:25.130Z level=INFO source=payload_common.go:112 msg="Extracting dynamic libraries to /tmp/ollama2800678677/runners ..."
time=2024-03-27T16:23:30.109Z level=INFO source=payload_common.go:139 msg="Dynamic LLM libraries [cpu_avx2 cpu cuda_v11 rocm_v60000 cpu_avx]"
time=2024-03-27T16:23:30.109Z level=INFO source=gpu.go:77 msg="Detecting GPU type"
time=2024-03-27T16:23:30.109Z level=INFO source=gpu.go:191 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-03-27T16:23:30.109Z level=INFO source=gpu.go:237 msg="Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.550.54.15]"
time=2024-03-27T16:23:30.118Z level=INFO source=gpu.go:82 msg="Nvidia GPU detected"
time=2024-03-27T16:23:30.118Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-27T16:23:30.125Z level=INFO source=gpu.go:109 msg="error looking up CUDA GPU memory: device memory info lookup failure 0: 4"
time=2024-03-27T16:23:30.125Z level=INFO source=routes.go:1133 msg="no GPU detected"
@Yaffa16 "error looking up CUDA GPU memory: device memory info lookup failure 0: 4" -- error code 4 from CUDA relates to drivers being unloaded. I'd suggest trying to get nvidia-smi to work inside a container to confirm you have your container runtime set up correctly, and if that works and Ollama is still unable to discover the GPU with the latest version, please open a new issue with your server logs so we can investigate.
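A common way to run that check, assuming the NVIDIA Container Toolkit is installed and configured on the host (the image is just an example; the toolkit mounts nvidia-smi into the container when --gpus is passed):

docker run --rm --gpus all ubuntu:22.04 nvidia-smi

If that prints the GPU and driver version, the container runtime is set up correctly and the problem lies in Ollama's GPU discovery.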
Hi, I have opened an issue here: https://github.com/ollama/ollama/issues/3647