PietFourie
Here are the generated outputs: [Ollama direct Inference Lama3.txt](https://github.com/open-webui/open-webui/files/15331931/Ollama.direct.Inference.Lama3.txt) [Open WEBUI and Ollama Seperated Docker images.txt](https://github.com/open-webui/open-webui/files/15331932/Open.WEBUI.and.Ollama.Seperated.Docker.images.txt) [OpenWebUI with Ollama InStalled Inference Lama3.txt](https://github.com/open-webui/open-webui/files/15331933/OpenWebUI.with.Ollama.InStalled.Inference.Lama3.txt)
GPU memory usage in the combined version was lower than with the separate Ollama Docker image, and the combined version also made use of CPU cores.
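For reference, a minimal sketch of the two setups being compared, based on the standard commands from the Open WebUI README; the container names and the final measurement step are illustrative, not taken from the report:

```bash
# Setup A: combined image, Open WebUI with Ollama bundled (GPU enabled)
docker run -d -p 3000:8080 --gpus=all \
  -v ollama:/root/.ollama -v open-webui:/app/backend/data \
  --name open-webui ghcr.io/open-webui/open-webui:ollama

# Setup B: separate images, Ollama in its own container,
# Open WebUI pointed at it via OLLAMA_BASE_URL
docker run -d -p 11434:11434 --gpus=all \
  -v ollama:/root/.ollama --name ollama ollama/ollama
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui ghcr.io/open-webui/open-webui:main

# Compare resource usage while running the same llama3 prompt in each setup
nvidia-smi
docker stats --no-stream
```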
I do not know if I had the same problem, but ..... I was also following the basic tutorial and created a custom component from a fileExplorer component. It did...
Got another error on Linux during the build: I ran into "permission" problems and had to install the following first: `sudo pip install trove-classifiers tomli pluggy pathspec packaging hatchling hatch-requirements-txt hatch-fancy-pypi-readme`
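As an aside, a common way to avoid these permission errors without reaching for `sudo pip` is to build inside a virtual environment; this is a general workaround, not something from the original report:

```bash
# Create an isolated environment so the build dependencies
# install without root permissions (general workaround, assumption)
python3 -m venv .venv
source .venv/bin/activate
pip install trove-classifiers tomli pluggy pathspec packaging \
  hatchling hatch-requirements-txt hatch-fancy-pypi-readme
```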