grok-1
What's the lowest cost way for enthusiasts to run this model?
Pay-as-you-go large GPU server
like Colab?
Pay-as-you-go wouldn't be hosted locally and is costly. You have better options:
- You might as well use an API if going for a non-local install.
- Rent GPU hardware from local providers or through specialized online services that offer physical hardware for short-term projects.
- Shared computing platforms allow you to use shared GPU resources for computing tasks. This can be more affordable than dedicated servers.
Just buy 2x A100 LMAO
Please close this issue and move to:
https://github.com/xai-org/grok-1/discussions
Reason: #69 #108
To answer this question, we first need an official answer to https://github.com/xai-org/grok-1/issues/62.
Just buy 2x A100 LMAO
My nodes have 4 and it's CLEARLY not enough for this thing.
@surak What GPUs did you use, and what sizes? I'm going to look at trying to run it on 8 x 32GB V100 or 4 x 64GB Xilinx VU9P. But I'm wondering if that will even be enough.
Most of my compute nodes have 4x A100 40GB. I also had trouble running it on the Grace Hopper GH200 with 480GB, but there the problem was different.
I have got my hands on 8x A100 80GB GPUs. I'll have a look at trying it out this evening and let you know if it works.
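For anyone sizing hardware for this, a rough weights-only estimate helps explain why 4x A100 40GB fails while 8x 80GB might work. This is just back-of-envelope arithmetic (grok-1 is 314B parameters per the release); it ignores activations, KV cache, and framework overhead, which all add on top:

```python
# Back-of-envelope VRAM estimate for grok-1 (314B parameters, per the release).
# Weights only -- activations, KV cache, and runtime overhead are extra.

PARAMS = 314e9  # grok-1 parameter count


def weight_memory_gb(params: float, bytes_per_param: float) -> float:
    """Memory needed just to hold the weights, in GB."""
    return params * bytes_per_param / 1e9


for dtype, nbytes in [("fp16/bf16", 2), ("int8", 1)]:
    need = weight_memory_gb(PARAMS, nbytes)
    print(f"{dtype}: ~{need:.0f} GB of weights")
    print(f"  4x A100 40GB = 160 GB -> {'fits' if need <= 160 else 'does not fit'}")
    print(f"  8x A100 80GB = 640 GB -> {'fits' if need <= 640 else 'does not fit'}")
```

By this estimate, even int8 weights (~314 GB) overflow a 4x A100 40GB node, which matches the experience above; 8x A100 80GB leaves headroom at int8 but is tight at bf16 once activations are counted.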
*Stable Horde is a free, crowdsourced distributed cluster for Stable Diffusion: https://github.com/Haidra-Org/AI-Horde https://grafana.aihorde.net/d/decfb2fc-3165-4625-b8cd-c0e94220d5ad/landing-page?orgId=1
*https://io.net/