gpt-fast
Apple Silicon support?
Any plans to support Apple chips?
The code works on an M1 with a few simple changes from CUDA calls to MPS calls, but there is no Inductor support for MPS yet, so the benefits from torch.compile are not as pronounced.
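For anyone trying this locally, a minimal sketch of the kind of device swap described above. The `pick_device` helper is hypothetical (not part of gpt-fast); it just prefers CUDA, falls back to MPS on Apple Silicon, and otherwise uses CPU:

```python
# Hypothetical helper illustrating the CUDA -> MPS swap; not gpt-fast code.
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Prefer CUDA, fall back to MPS on Apple Silicon, else CPU."""
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"

# With PyTorch installed, you would feed it the real availability checks:
#   import torch
#   device = pick_device(torch.cuda.is_available(),
#                        torch.backends.mps.is_available())
#   model = model.to(device)
```

On an M1 this resolves to `"mps"`, so tensors and the model get moved to the Metal backend instead of CUDA.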
Any updates on MPS / Apple Silicon support?
Anything new?