Schwenn2002

Results: 20 comments by Schwenn2002

Another Link: https://rocm.blogs.amd.com/artificial-intelligence/sentence_transformers_amd/README.html

If I rebuild the Docker image with the attached Dockerfile (`docker-compose up --build open-webui-rocm`) and then call the embedding model via the console, it is loaded on the GPU, evidently with ROCm in...
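Since the attached Dockerfile and compose file are not reproduced here, this is only a minimal sketch of what such a ROCm service definition commonly looks like; the service name, build argument, and structure are assumptions, but `/dev/kfd` and `/dev/dri` are the standard device passthroughs for ROCm containers:

```yaml
# Hypothetical docker-compose excerpt for a ROCm-enabled open-webui build.
# The GPU is exposed by passing the kernel driver device files into the container.
services:
  open-webui-rocm:
    build:
      context: .
      args:
        - USE_ROCM=true        # assumed build arg; the actual Dockerfile may differ
    devices:
      - /dev/kfd               # ROCm compute interface
      - /dev/dri               # GPU render nodes
    group_add:
      - video                  # grants the container access to the GPU device files
```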

**Additional information:** For Python 3.11 you need the installation package for ROCm 6.3. For torch 2.5.1 there are only packages for ROCm 6.2. According to my research, ROCm is compatible across these versions. This is also...

Thank you very much; the open-webui Docker image now actually runs with ROCm. **Perhaps an open-webui:rocm image could be built?** The adjustments are specified in the Dockerfile above (usecase=rocm should be...

I have already customized the Docker image and integrated ROCm using the Dockerfile mentioned above. For updates, it would just be good if I didn't have to do a rebuild every time...

Attached are my updated files; the container must then be started with `docker-compose up -d --build`! **Testing ROCm in a container:**

```
docker exec -it open-webui-rocm /bin/bash
root@3ac111a1e730:/app/backend# python
Python...
```
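The in-container check above can be sketched as a small Python snippet; it assumes a ROCm build of PyTorch is installed (under ROCm, the `torch.cuda` APIs map to HIP devices, and `torch.version.hip` is non-None only on ROCm builds):

```python
# Report whether this PyTorch build was compiled for ROCm (HIP)
# and whether it can currently see a GPU.
def rocm_status():
    try:
        import torch
    except ImportError:
        # torch not installed in this environment
        return {"torch": None, "hip": None, "gpu": False}
    return {
        "torch": torch.__version__,        # e.g. "2.5.1+rocm6.2" on a ROCm wheel
        "hip": torch.version.hip,          # HIP version string on ROCm builds, else None
        "gpu": torch.cuda.is_available(),  # True when a HIP device is visible
    }

print(rocm_status())
```

Run inside the container (`docker exec -it open-webui-rocm python`) to confirm the embedding model really lands on the GPU.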

I am currently using gfx1100 (a Radeon RX 7900 XTX and a Radeon Pro W7900 in a multi-GPU setup) and the above configuration is working.

The host system must also have ROCm installed (test with `rocm-smi`); then, since there is only one GPU in the system, change the following line for Docker: `- 'ROCR_VISIBLE_DEVICES=0'`...
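For context, that line belongs in the compose file's `environment` section; a hedged excerpt follows (the service name is an assumption, and `ROCR_VISIBLE_DEVICES` takes comma-separated device indices):

```yaml
# Hypothetical compose excerpt: ROCR_VISIBLE_DEVICES limits which GPUs the
# ROCm runtime exposes inside the container.
services:
  open-webui-rocm:
    environment:
      - 'ROCR_VISIBLE_DEVICES=0'      # single-GPU host: expose only device 0
      # - 'ROCR_VISIBLE_DEVICES=0,1'  # multi-GPU host: expose both GPUs
```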

My host is running Ubuntu 24.04 LTS (Noble Numbat) with ROCm 6.3.1; the Docker image is Debian 12 (hence jammy in the open-webui Docker).

Hi! Yes, I tried that; ollama is significantly slower when it comes to embeddings or searching in RAG. ROCm in Docker is the choice for best performance.