
Mac GPU Utilization Support

Open josebarross opened this issue 2 years ago • 2 comments

Torch has just released Mac M1 support via the mps device. I want to know if flair will support it. I tried setting flair.device manually to mps, but it failed at runtime. Thank you in advance.

josebarross avatar May 24 '22 19:05 josebarross

Thanks for raising this issue @josebarross!

@whoisjones can you check if this works?

alanakbik avatar May 26 '22 07:05 alanakbik

Hi everyone,

Not sure if this is (still) relevant. I tried setting the device via .to(), unsuccessfully:

import torch
from flair.data import Sentence
from flair.models import SequenceTagger

# pick the best available device: CUDA, then Apple MPS, then CPU
def get_torch_device():
    if torch.cuda.is_available():
        device = torch.device("cuda:0")
    elif torch.backends.mps.is_available():
        device = torch.device("mps")
    else:
        device = torch.device("cpu")
    return device


device = get_torch_device()
nlp = SequenceTagger.load("flair/ner-german-legal").to(device)
sentence = Sentence(text, use_tokenizer=False)  # `text` is defined elsewhere
sentence.to(device=device)
nlp.predict(sentence)

Find the traceback below:

Traceback (most recent call last):
  File "/Users/jurica/Projects/data-processing-pipeline/models/annotators/legal_ner.py", line 71, in <module>
    spans = model("Arbeitsbericht Nr. 188 des Büros für Technikfolgen-Abschätzung beim Deutschen Bundestag (TAB): Strukturwandel und Nachhaltigkeit in der Landwirtschaft, Nachhaltigkeitsbewertung vom landwirtschaftlichen Betrieb bis zum Agrarsektor &ndash; Stand und Perspektiven, Vergleich von konventioneller und ökologischer Landwirtschaft, Handlungsoptionen")
  File "/Users/jurica/Projects/data-processing-pipeline/models/annotators/legal_ner.py", line 53, in __call__
    self.nlp.predict(sentence)
  File "/Users/jurica/miniconda3/envs/dpp/lib/python3.9/site-packages/flair/models/sequence_tagger_model.py", line 479, in predict
    features, gold_labels = self.forward(batch)
  File "/Users/jurica/miniconda3/envs/dpp/lib/python3.9/site-packages/flair/models/sequence_tagger_model.py", line 282, in forward
    self.embeddings.embed(sentences)
  File "/Users/jurica/miniconda3/envs/dpp/lib/python3.9/site-packages/flair/embeddings/token.py", line 68, in embed
    embedding.embed(sentences)
  File "/Users/jurica/miniconda3/envs/dpp/lib/python3.9/site-packages/flair/embeddings/base.py", line 62, in embed
    self._add_embeddings_internal(data_points)
  File "/Users/jurica/miniconda3/envs/dpp/lib/python3.9/site-packages/flair/embeddings/token.py", line 728, in _add_embeddings_internal
    all_hidden_states_in_lm = self.lm.get_representation(
  File "/Users/jurica/miniconda3/envs/dpp/lib/python3.9/site-packages/flair/models/language_model.py", line 155, in get_representation
    _, rnn_output, hidden = self.forward(batch, hidden)
  File "/Users/jurica/miniconda3/envs/dpp/lib/python3.9/site-packages/flair/models/language_model.py", line 75, in forward
    encoded = self.encoder(input)
  File "/Users/jurica/miniconda3/envs/dpp/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1186, in _call_impl
    return forward_call(*input, **kwargs)
  File "/Users/jurica/miniconda3/envs/dpp/lib/python3.9/site-packages/torch/nn/modules/sparse.py", line 159, in forward
    return F.embedding(
  File "/Users/jurica/miniconda3/envs/dpp/lib/python3.9/site-packages/torch/nn/functional.py", line 2197, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Placeholder storage has not been allocated on MPS device!

This seems to have already been fixed though
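The `Placeholder storage has not been allocated on MPS device!` error is torch's way of saying that an input tensor and the module's weights live on different devices. A minimal illustration with plain torch (not flair code; it falls back to CPU where MPS is unavailable):

```python
import torch
import torch.nn as nn

# Pick MPS when available, otherwise CPU (guarded for torch < 1.12,
# where torch.backends.mps does not exist).
mps = getattr(torch.backends, "mps", None)
device = torch.device("mps") if (mps is not None and mps.is_available()) else torch.device("cpu")

# Weights go to `device`; inputs are created on the CPU by default.
# That is the same kind of mismatch that triggered the traceback above
# inside flair's embedding layer.
embedding = nn.Embedding(num_embeddings=100, embedding_dim=8).to(device)
indices = torch.tensor([1, 2, 3])

# The fix: move inputs to the module's device before the forward pass.
indices = indices.to(embedding.weight.device)
output = embedding(indices)
```

This is presumably why calling `.to()` on the model alone was not enough: flair constructs some input tensors internally on `flair.device`, so the global device has to match as well.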

Relevant env info:

torch==1.13.0.dev20220630  # (but should work with 1.12 as well)
flair==0.11.3

Hope it helps!

deakkon avatar Jul 01 '22 13:07 deakkon

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale[bot] avatar Nov 01 '22 15:11 stale[bot]

I've run into the same issue as @deakkon. This is still a problem.

gojefferson avatar Mar 27 '23 20:03 gojefferson

@gojefferson I just tried to figure this out. With the merge of #3350, I am able to use torch with the M1 by setting

import flair
import torch

# route all of flair's tensors to the Apple GPU
flair.device = torch.device("mps:0")

Then run your code as normal:

from flair.data import Sentence
from flair.nn import Classifier

# uses apple GPU
tagger = Classifier.load('ner')

sentence = Sentence("Hello there!")

tagger.predict(sentence)

It works as-is!
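For portability, the backend string assigned to flair.device above can be picked with a small fallback helper before loading any model. This is a sketch in plain torch; `pick_flair_device` is a hypothetical name, and the flair assignment is shown as a comment under the assumption (as in the snippet above) that setting `flair.device` before loading is sufficient:

```python
import torch

def pick_flair_device() -> str:
    """Return the best backend string to assign to flair.device:
    CUDA first, then Apple MPS, then CPU."""
    if torch.cuda.is_available():
        return "cuda:0"
    mps = getattr(torch.backends, "mps", None)  # absent on torch < 1.12
    if mps is not None and mps.is_available():
        return "mps:0"
    return "cpu"

# import flair
# flair.device = pick_flair_device()  # set before Classifier.load(...)
print(pick_flair_device())
```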

mileszim avatar Nov 03 '23 18:11 mileszim