WW1983

Results: 21 comments by WW1983

Thank you, things are going a little better with your tips, but it's still not good. I think I'll wait a little longer. With version 2024.7, Ollama may be better integrated.

> llama3-8b-function-call-v0.2

Thank you. I use Ollama on a Minisforum MS01 with an Nvidia Tesla P4 GPU, so it should work. Is there also a model for Ollama?

> Have not really played with Ollama, but if it supports GGUF models, my guess would be that you can use this one (literally the first link on Google) - https://huggingface.co/mudler/LocalAI-Llama3-8b-Function-Call-v0.2...
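
(A minimal sketch of fetching that GGUF file before importing it into Ollama or LocalAI; the repo id is taken from the truncated link above and the exact filename is an assumption, so both need to be checked against the actual repository.)

```python
# Hedged sketch: download the GGUF referenced above so it can be imported
# into Ollama (via a Modelfile) or placed in LocalAI's models directory.
# repo_id and filename are assumptions - verify them on Hugging Face first.
from huggingface_hub import hf_hub_download

repo_id = "mudler/LocalAI-Llama3-8b-Function-Call-v0.2"        # assumed from the link above
filename = "LocalAI-Llama3-8b-Function-Call-v0.2-Q4_K_M.gguf"  # assumed quantization/filename

local_path = hf_hub_download(repo_id=repo_id, filename=filename)
print(f"GGUF downloaded to: {local_path}")
```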

Thank you. I have now tried it with Ubuntu 24.04 and it works. How do I connect it with Home Assistant? What selection do I have to make? "Ollama AI"?

> **Backend:** Generic OpenAI
> **Host:** IP of the machine LocalAI is running on
> **Port:** the port you exposed in the Docker Compose file
> **Model:** LocalAI-llama3-8b-function-call-v0.2

Thank you. It works. Where can I...
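
(As a quick sanity check of those settings before entering them in Home Assistant, a sketch like the one below could confirm that LocalAI's OpenAI-compatible endpoint answers under that model name; host, port, and model string are placeholders for whatever your own Docker Compose setup actually exposes.)

```python
# Hedged sketch: probe the LocalAI endpoint that Home Assistant's
# "Generic OpenAI" backend will use. HOST, PORT and MODEL are assumptions -
# replace them with the values from your own setup.
from openai import OpenAI

HOST = "192.168.1.50"   # IP of the machine LocalAI runs on (assumed)
PORT = 8080             # port exposed in the Docker Compose file (assumed)
MODEL = "LocalAI-llama3-8b-function-call-v0.2"

client = OpenAI(base_url=f"http://{HOST}:{PORT}/v1", api_key="sk-local")  # LocalAI ignores the key

# List the models the server reports, then send a tiny test prompt.
print([m.id for m in client.models.list().data])

reply = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(reply.choices[0].message.content)
```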

> You can do that in: Settings -> Voice Assistants -> _YOUR_PIPELINE_ -> 3 dots -> Debug

Got it. But I think my system is a little bit slow for...

Thank you for all your tips. But I think that's a bit too much for me. I just wanted to build something small to experiment with. That should be enough at...

I still have the problem, despite updates. Has anyone had the same problem and found a solution?

> Everything that has a feature request is planned (as long as it's not totally far-fetched) :-)

Maybe I overlooked it :) Is there a request for background images? Otherwise I'll make...

I meant both. Basically you're right, it works via CSS. I thought a "more user-friendly option" could be created, but it was just an idea. CSS is completely sufficient.