llama-gpt
No way to get 70b model from Umbrel app store?
I've got a machine with 80GB of RAM that I'd like to try the 70B param model on, using the UmbrelOS app if possible. But I don't see a way to get that model.
Echoing that. Hopefully the next release on the Umbrel store will provide that option. Thanks!
Thanks, guys! Noted for a future release. We'll make it easy to change the model within the app, with recommendations based on the underlying CPU, GPU and RAM size.
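
For context, a recommendation along those lines could be as simple as picking the largest quantized model that fits in physical memory with some headroom. The sketch below is purely illustrative and not the app's actual logic; the model names and approximate RAM figures are assumptions.

```python
# Minimal sketch (assumed, not the app's implementation) of a RAM-based
# model recommendation. Footprints are rough ballpark values for 4-bit
# quantized models, not official requirements.

import os

MODELS = [
    ("llama-2-7b",  7.0),   # approx. GB of RAM needed (assumed)
    ("llama-2-13b", 11.0),
    ("llama-2-70b", 45.0),
]

def total_ram_gb() -> float:
    """Total physical RAM in GB (Linux-only, via sysconf)."""
    return os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1e9

def recommend_model(ram_gb: float, headroom_gb: float = 4.0) -> str:
    """Return the largest model whose footprint fits with some headroom."""
    best = MODELS[0][0]
    for name, need_gb in MODELS:
        if need_gb + headroom_gb <= ram_gb:
            best = name
    return best

if __name__ == "__main__":
    ram = total_ram_gb()
    print(f"{ram:.1f} GB RAM -> recommended model: {recommend_model(ram)}")
```

On the 80GB machine mentioned above, a heuristic like this would land on the 70B model; a real implementation would also need to account for GPU memory and CPU capability, as noted.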