
M1 GPU acceleration support

Open ThatStella7922 opened this issue 2 years ago • 1 comments

This is available in PyTorch now; you can check for it via `torch.backends.mps.is_available()`.

https://pytorch.org/docs/stable/notes/mps.html

You just use `torch.device("mps")` where you would use `torch.device("cuda")` on an Nvidia GPU.
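The suggestion above can be sketched as a small device-selection helper. This is not happy-transformer's own code, just a minimal illustration of the pattern; `pick_device` is a hypothetical name:

```python
import torch

def pick_device() -> torch.device:
    """Pick the best available device: CUDA on Nvidia, MPS on Apple Silicon, else CPU."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
```

A model can then be moved with `model.to(device)` regardless of which backend was found.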

ThatStella7922 avatar Jul 11 '22 01:07 ThatStella7922

Thanks for the suggestion!

EricFillion avatar Jul 11 '22 20:07 EricFillion

I tried to work around it by adding code like this:

        if torch.has_mps:
            self._device = 'mps'
            self.model.to(self._device)

But when I installed and ran it, Python told me the model was on MPS while `input_ids` was still on the CPU. So I also changed `self._pipeline = TextGenerationPipeline(model=self.model, tokenizer=self.tokenizer, device=device_number)` to `self._pipeline = TextGenerationPipeline(model=self.model, tokenizer=self.tokenizer, device='mps')`.
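The underlying issue here is that the model and its inputs must live on the same device, or PyTorch raises a device-mismatch error. A minimal sketch with a toy `torch.nn.Linear` model (not happy-transformer's actual model) showing both sides being moved:

```python
import torch

# Fall back to CPU when MPS is unavailable, so the sketch runs anywhere.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

model = torch.nn.Linear(4, 2).to(device)   # model weights on the chosen device
inputs = torch.randn(1, 4).to(device)      # inputs must be moved too

out = model(inputs)
assert out.device.type == device.type
```

Passing `device='mps'` to the Hugging Face pipeline has the same effect: the pipeline moves the tokenized inputs to the model's device for you.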

Then Python told me:

NotImplementedError: The operator 'aten::cumsum.out' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764.
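For operators not yet implemented on MPS, the PyTorch MPS notes document a CPU-fallback environment variable that avoids this error at a performance cost (the script name below is hypothetical):

```shell
# Fall back to the CPU for ops missing on the MPS backend (slower, but runs).
export PYTORCH_ENABLE_MPS_FALLBACK=1
# python generate.py
```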

tinyfool avatar Dec 12 '22 14:12 tinyfool

Now supported with version 3.0.0. MPS is automatically detected and used.

EricFillion avatar Aug 08 '23 03:08 EricFillion