Mayank Chhabra
Unfortunately, as @highghlow noted, we've decided not to support it, in favor of increased system reliability and stability. Here's the full reasoning from our [community forum post](https://community.umbrel.com/t/how-to-update-from-umbrelos-0-5-to-umbrelos-1-1-on-linux-devices/16704) so...
That's interesting! @LiamKarlMitchell, can you please run this command on your Mac and share the output?

```
sysctl -n machdep.cpu && uname -m
```
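For anyone else debugging similar CPU-detection issues, here's a rough sketch of the kind of information that command surfaces (key names and availability differ between Intel and Apple silicon Macs, and between macOS and Linux — the fallback below is an assumption for Linux hosts):

```shell
# Reported machine architecture, e.g. arm64 or x86_64
uname -m

# Logical CPU count: sysctl on macOS, nproc as a Linux fallback
sysctl -n hw.ncpu 2>/dev/null || nproc
```

If `uname -m` reports `x86_64` on an Apple silicon Mac, the shell is likely running under Rosetta, which can explain unexpected CPU detection results.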
Currently the replies are already streamed one word at a time. I wonder if the first word is taking a long time to appear for you? In that case, consider running the...
@AndreiSva and @WEBELSYS can you please share which model you're trying, and the specs of your hardware (OS, CPU, RAM)?
That's a great observation, @Aincvy. For anyone facing this issue, can you please confirm whether you're running LlamaGPT behind a reverse proxy, like nginx? If so, it would be great if...
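For context, a common culprit when streaming breaks behind nginx is response buffering, which holds the whole reply until generation finishes. A minimal sketch of the directives involved (the location path and upstream port here are assumptions — adjust them to your setup):

```nginx
location / {
    proxy_pass http://localhost:3000;   # assumed LlamaGPT upstream; use your actual port
    proxy_buffering off;                # stream the response instead of buffering it
    proxy_cache off;
    proxy_http_version 1.1;             # keep the connection open for streamed chunks
}
```

This is only a sketch of the relevant knobs, not a drop-in config; other proxies (Caddy, Traefik) have equivalent buffering settings.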
Hey folks, since the UI is forked from Chatbot UI, enabling light mode would be relatively straightforward. We can get this out in the next release!
Thanks, guys! Noted for a future release. We'll make it easy to change the model within the app, with recommendations based on the underlying CPU, GPU, and RAM.
Totally understand the reasoning. Unfortunately, this will be a lower priority for us for now, since most LlamaGPT users run it on umbrelOS, which automatically provides an...
That generation speed is indeed very slow for your hardware. Can you confirm whether you're running this with Docker directly on the host, and not inside a VM? Can you also...
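For anyone unsure whether their Docker host is virtualized, here's a rough heuristic on Linux (it's not definitive — some hypervisors don't expose the flag, and containers share the host's `/proc/cpuinfo`):

```shell
# Rough VM check on Linux: the "hypervisor" CPU flag is usually set
# inside a VM and absent on bare metal.
if grep -q hypervisor /proc/cpuinfo; then
  echo "likely inside a VM"
else
  echo "likely bare metal"
fi
# On systemd-based hosts you can also try: systemd-detect-virt
```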
That is weird. I haven't yet investigated why it's only detecting 2 of your 12 cores. As a quick workaround, you might be able to speed it up by...
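As a rough sketch for narrowing this down, compare what the host sees against what Docker will let containers use (the final workaround line is hypothetical — the actual thread-count setting and service name are in LlamaGPT's compose file, so check there for the real names):

```shell
# Logical cores visible to the host:
nproc

# Cores Docker reports as available to containers (if Docker is installed):
docker info 2>/dev/null | grep -i cpus || echo "docker not available"

# Hypothetical workaround sketch: pin the thread count explicitly, e.g.
#   docker compose run -e N_THREADS=12 llama-gpt-api
# (env var and service name are assumptions; verify against the compose file)
```

If `nproc` says 12 but Docker reports 2 CPUs, the limit is being applied at the Docker or VM layer rather than inside the app.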