Combined Docker image with Ollama installed runs very slowly
Hi, my system has an NVIDIA 1070 8 GB GPU and runs Linux Mint.
I used the latest Docker images, installed on 15 May.
- I installed the combined Docker image (Open WebUI with Ollama bundled) and it runs very slowly.
- I then installed Ollama and Open WebUI as separate Docker images, and that setup runs the fastest.
- I then ran the same prompt directly in the Ollama terminal and got similar (fast) speed.
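For reference, the two setups above can be reproduced roughly like this. This is a sketch based on the commonly documented Open WebUI run commands; container names, ports, and volume names are my assumptions, and `--gpus=all` assumes the NVIDIA Container Toolkit is installed on the host.

```shell
# Setup 1 (slow for me): combined image with Ollama bundled
docker run -d --gpus=all \
  -v open-webui:/app/backend/data \
  -p 3000:8080 \
  --name open-webui \
  ghcr.io/open-webui/open-webui:ollama

# Setup 2 (fast for me): Ollama and Open WebUI as separate containers
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama

docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

In both cases the UI is then reachable at http://localhost:3000.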
The prompt was: "Write me Python code for the game Centipede." Model: llama3:latest.
- The code generated by Ollama directly and by the separate Ollama/Open WebUI Docker images was the same and used the pygame library. I did not run the code, but it used classes and more advanced constructs.
- The code generated by the combined Ollama/Open WebUI Docker image was very short, used no libraries, and I doubt it would have run.
The bug is that the combined Ollama/Open WebUI Docker image generates garbage and is a lot slower.
Error with Modelfiles: I also could not import the modelfiles from the website; the import just does nothing. In addition, the import dialog looks for ".json" files, but the downloaded modelfiles have a ".txt" extension, and I presume this mismatch is significant. All the modelfiles are on the recommended home page of the website and should be compatible.
Here is the code generated (attachments):
- Ollama direct Inference Lama3.txt
- Open WEBUI and Ollama Seperated Docker images.txt
- OpenWebUI with Ollama InStalled Inference Lama3.txt
GPU memory usage in the combined version was lower than with the separate Ollama Docker image, and it also used CPU cores, which suggests part (or all) of the inference was running on the CPU instead of the GPU.
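A possible way to confirm the CPU-fallback suspicion (a sketch, assuming the combined container is named `open-webui` as in the usual run command, and that your Ollama version supports `ollama ps`):

```shell
# Check whether the container can see the GPU at all
docker exec open-webui nvidia-smi

# Ask Ollama where the loaded model is running;
# the PROCESSOR column should say "100% GPU" rather than "100% CPU"
docker exec open-webui ollama ps
```

If `nvidia-smi` fails inside the container, the image was likely started without `--gpus=all` or the NVIDIA Container Toolkit is not set up, which would explain both the slowdown and the different output quality (smaller fallback behavior under CPU-only inference is my assumption, not confirmed).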