Nick Bento
It's usually served by the HA server on port 8123, like http://homeassistant.local:8123/lovelace
So I think it is specific to the Docker container it is running in (I think the Frigate addon can do it, for example, as well as the go2rtc addon used for camera transcoding...
One other thing I realized: I wouldn't be able to try out other models anyway, as I don't think they are part of the Docker image 😆 Guess the same...
If you are running the latest WIS server, the chatbot was removed, so these are likely pointing to that fact. 🙂
Adding here: I realized I was thinking of the split_arch branch; it appears main still has support for the chatbot (but this will go away soon in favor of using some...
Discussed in #78 but also casting my vote here for Coqui 😄
Bark was looked at, but the performance unfortunately isn't where it needs to be for a good user experience with a voice assistant (TTS generation on an enterprise GPU...
Yes, we have some new engines in process. They aren't in the main branch yet, but you can experiment with them in the feature/split_arch branch.
In terms of compute it would probably be fine; just note that 6GB of VRAM could get a little tight if you want to use the large model, for instance, as...
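To give a feel for why 6GB can get tight, here is a rough back-of-the-envelope sketch. It assumes a Whisper-large-class model at roughly 1.55B parameters loaded in fp16; the exact parameter count and precision are assumptions, not figures from this thread:

```python
# Rough VRAM estimate for model weights alone.
# Assumptions (not from the thread): ~1.55B parameters (Whisper large
# class), fp16 weights = 2 bytes per parameter.
PARAMS = 1.55e9        # assumed parameter count
BYTES_PER_PARAM = 2    # fp16

weights_gib = PARAMS * BYTES_PER_PARAM / 1024**3
print(f"Weights alone: ~{weights_gib:.1f} GiB")  # ~2.9 GiB
```

That ~2.9 GiB is just the weights; activations, decoding state, and CUDA context overhead come on top, so on a 6GB card there isn't much headroom left.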
The system linked may do fine; like I said, in terms of compute it would outperform a 1070, it just comes down to VRAM. Given your goals I am not...