Transformers backend, device and dtype
Hello,
I cannot find how to specify the device to run on and the dtype of the model to use.
Is there a way to configure this?
Thanks
Presidio uses spacy-huggingface-pipelines, which in turn wraps a call to Hugging Face transformers.
There currently isn't a simple way (that I can think of) to change the device without changing the code of either Presidio or spacy-huggingface-pipelines. If you have any suggestions for improvement, we'd be happy to review and/or discuss. Another option would be to use the transformers sample instead of the TransformersNlpEngine, as the sample lets you change any call to transformers you'd like, including updating device and torch_dtype.
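If you do call transformers directly, device and dtype can be passed straight to the pipeline call. A minimal sketch, assuming dslim/bert-base-NER as an example model and that CUDA device 0 is available; constructing the pipeline downloads the model weights, so that step is guarded behind a main block:

```python
import torch
from transformers import pipeline

# Keyword arguments that transformers.pipeline accepts:
# "device" selects the accelerator (-1 = CPU, 0 = first CUDA device),
# "torch_dtype" sets the precision the weights are loaded in.
PIPELINE_KWARGS = {
    "task": "token-classification",
    "model": "dslim/bert-base-NER",  # example NER model, swap in your own
    "device": 0,                     # assumes a CUDA device is present
    "torch_dtype": torch.float16,
}

if __name__ == "__main__":
    # Model weights are downloaded on first use.
    ner = pipeline(**PIPELINE_KWARGS)
    print(ner("My name is John and I live in Seattle."))
```

With this approach the Presidio NLP engine is bypassed entirely, so you control every argument transformers sees.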
I made a PR to fix the incorrect use of devices: https://github.com/explosion/spacy-huggingface-pipelines/pull/23
Then, setting the device is done via spaCy:
import spacy
spacy.require_gpu()  # raises if no GPU is available; spacy.prefer_gpu() falls back to CPU instead
As for dtype, I used torch.set_default_dtype(torch.float16)
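Note that torch.set_default_dtype changes the default for every floating-point tensor created afterwards in the process, not only the model's weights, so it's worth restoring the previous default once loading is done. A small sketch:

```python
import torch

# Switch the default floating-point dtype before the model is loaded.
torch.set_default_dtype(torch.float16)
half = torch.zeros(3)  # new float tensors now default to float16

# Restore the usual default so unrelated code keeps using float32.
torch.set_default_dtype(torch.float32)
full = torch.zeros(3)

print(half.dtype, full.dtype)
```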