
macOS MPS device support in benchmarks

Open donaldkuck opened this issue 1 month ago • 1 comment

🌟 Feature Description

macOS MPS device support in benchmarks

Motivation

It would be great to add support for PyTorch's MPS backend on macOS. By default, the PyTorch models in the benchmarks use CUDA (if available) or fall back to CPU, which is not ideal on macOS machines that support MPS acceleration. The current device configuration code is:

self.device = "cuda:%s" % (GPU) if torch.cuda.is_available() and GPU >= 0 else "cpu"

To enable MPS support, we could modify the code as follows:

USE_CUDA = torch.cuda.is_available() and GPU >= 0
USE_MPS = torch.backends.mps.is_available()
self.device = torch.device(f'cuda:{GPU}' if USE_CUDA else ('mps' if USE_MPS else 'cpu'))
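
For reference, a slightly more defensive version of this selection logic could look like the sketch below. The helper name get_torch_device is hypothetical (it does not exist in qlib), and treating a negative GPU value as an explicit request for CPU is an assumption about the intended semantics:

import torch

def get_torch_device(GPU: int = 0) -> torch.device:
    # Hypothetical helper, not existing qlib code: prefer CUDA, then MPS, then CPU.
    if GPU >= 0 and torch.cuda.is_available():
        return torch.device(f"cuda:{GPU}")
    # Assumption: a negative GPU value means "use CPU", so MPS is only chosen
    # when an accelerator was requested but CUDA is unavailable.
    if GPU >= 0 and torch.backends.mps.is_available() and torch.backends.mps.is_built():
        return torch.device("mps")
    return torch.device("cpu")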

Alternatively, should we let users configure the device directly via a YAML parameter for more flexibility?
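
If the YAML route is preferred, a minimal sketch could expose an optional device parameter on the model and fall back to automatic selection when it is not set. The class name ExampleModel and the parameter name device are assumptions for illustration, not existing qlib code; the value would be passed through the model's kwargs in the workflow config:

from typing import Optional

import torch

class ExampleModel:
    # Hypothetical sketch: "device" is an assumed optional parameter that could be
    # set via the model's kwargs in the workflow YAML, e.g. kwargs: {GPU: 0, device: "mps"}.
    def __init__(self, GPU: int = 0, device: Optional[str] = None):
        if device is not None:
            # An explicit choice from the config wins.
            self.device = torch.device(device)
        elif torch.cuda.is_available() and GPU >= 0:
            self.device = torch.device(f"cuda:{GPU}")
        elif torch.backends.mps.is_available():
            self.device = torch.device("mps")
        else:
            self.device = torch.device("cpu")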

donaldkuck · Nov 13 '25 08:11

Hi, @donaldkuck

Thanks a lot for raising this — you’re absolutely right.

Adding proper MPS support would significantly improve the experience for macOS users, and your suggested change to the device selection logic makes perfect sense.

We fully agree that this is a worthwhile improvement. If you’re interested, we’d be very happy to see a pull request from you to introduce MPS support.

Thanks again for the great suggestion!

SunsetWolf · Nov 14 '25 08:11