torch
Non-CUDA GPU support
Hello,
Is there a chance that lantern could be compiled to support running on non-NVIDIA GPU cards, for example through PlaidML & OpenCL 1.2 or any alternative?
This is not possible because LibTorch does not support other backends, although they seem to be working on an AMD ROCm backend and support for the new M1 Macs.
I think this comment is a good reference in this subject: https://github.com/pytorch/pytorch/issues/47702#issuecomment-737682371
OK, thanks for the clarification.
This is a reality now, so hopefully pulling in version 1.8 will open new doors.
Thanks @jaredlander for circling back on this.
I am starting to update to LibTorch 1.8 here: https://github.com/mlverse/torch/pull/513, and we will probably be able to use ROCm. However, note that there are currently no LibTorch binaries with ROCm support available for download on PyTorch's website, so users will probably need to build LibTorch from source.
What if we set up GitHub Actions to build a bunch of different binaries?
Yeah, we could do that! I have a workflow here that builds LibTorch: https://github.com/dfalbel/declarations/blob/master/.github/workflows/main.yaml
We would need to figure out how to build it with ROCm support. Probably following something like: https://lernapparat.de/pytorch-rocm/
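For reference, here is a minimal sketch of what using an AMD GPU from R might look like once such a build exists. This assumes a ROCm-enabled LibTorch/lantern build exposes the GPU through the CUDA device API, the way upstream PyTorch's ROCm builds do; none of this is wired up in {torch} yet.

```r
library(torch)

# Assumption: a ROCm-enabled LibTorch/lantern build reports the AMD GPU
# through the CUDA device API, as upstream PyTorch's ROCm builds do.
if (cuda_is_available()) {
  device <- torch_device("cuda")

  x <- torch_randn(1000, 1000, device = device)
  y <- torch_matmul(x, x)   # would run on the AMD GPU under ROCm

  print(y$device)
} else {
  message("No ROCm/CUDA-capable device detected by this LibTorch build.")
}
```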
We’re excited to announce support for GPU-accelerated PyTorch training on Mac! Now you can take advantage of Apple silicon GPUs to perform ML workflows like prototyping and fine-tuning. Learn more: https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/
Some very impressive speedups shared in that post! I wonder if {torch} will benefit.
Support for M1 GPUs is now done :)
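For anyone who wants to try it, here is a minimal sketch of using the new backend from R, assuming the MPS device is exposed the same way as in upstream PyTorch (via an availability check and the "mps" device string):

```r
library(torch)

# Assumption: the MPS backend is exposed through backends_mps_is_available()
# and the "mps" device string, mirroring upstream PyTorch.
if (backends_mps_is_available()) {
  device <- torch_device("mps")

  x <- torch_randn(64, 128, device = device)
  w <- torch_randn(128, 10, device = device)
  out <- torch_matmul(x, w)   # computed on the Apple silicon GPU

  print(out$device)
} else {
  message("MPS backend not available; falling back to CPU.")
}
```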
I don't plan on adding support for AMD GPUs as I don't have the hardware to test it on. Happy to help if someone is interested in adding it, though.
I'm your man for AMD GPUs! I'll open a dedicated issue for that!