Add support for Apple Silicon (torch.mps)
This PR adds support for Apple Silicon backend (torch.mps).
- Added support for torch.mps, enabling execution on Apple Silicon devices.
- Updated the device detection order from (cuda -> cpu) to (cuda -> mps -> cpu).
This PR is expected to resolve #187.
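The updated detection order (cuda -> mps -> cpu) can be sketched as a small helper. This is a minimal illustration, not the PR's actual diff; `select_device` is a hypothetical name, and availability is passed in as flags so the sketch stands alone without a `torch` install:

```python
def select_device(cuda_available: bool, mps_available: bool) -> str:
    """Pick the best available backend: CUDA first, then MPS, then CPU."""
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"

# In practice the flags would come from torch, e.g.:
# device = select_device(torch.cuda.is_available(),
#                        torch.backends.mps.is_available())
```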
@microsoft-github-policy-service agree
Has anyone successfully used the GPU of the Apple M1 chip in a MacBook? I still can't get GPU acceleration to work here.
@zhenhuaplan
Could you run the following Python script and let me know the output?
```python
import torch
print("torch.backends.mps:", torch.backends.mps.is_available())
print("torch.mps:", torch.mps.is_available())
```
In my environment, the output is as follows:
```python
>>> print("torch.backends.mps:", torch.backends.mps.is_available())
torch.backends.mps: True
>>> print("torch.mps:", torch.mps.is_available())
torch.mps: True
```
@kiyokiku I am now using torch==2.4.0 instead of torch==2.6.0, and MPS works: `torch.backends.mps.is_available()` returns `True`, while `torch.mps.is_available()` raises an error because `torch` has no `mps` attribute in that version.
All in all, I can now use the GPU for acceleration. Thanks!
For compatibility, I have replaced `torch.mps` with `torch.backends.mps`.
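A version-tolerant check along these lines could guard against torch builds where the top-level `torch.mps` attribute is missing. This is a sketch, not the PR's code; `mps_is_available` is a hypothetical helper, and the `torch_module` argument is duck-typed so the example runs without torch installed:

```python
def mps_is_available(torch_module) -> bool:
    """Probe torch.backends.mps.is_available() defensively, since it is
    present on more torch releases than the top-level torch.mps module."""
    backends = getattr(torch_module, "backends", None)
    mps = getattr(backends, "mps", None)
    return bool(mps is not None and mps.is_available())

# Usage (assuming a real torch install):
# device = "mps" if mps_is_available(torch) else "cpu"
```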