
Unable to load on multiple GPUs using HuggingFace Transformers

Open mohammad-yousuf opened this issue 1 year ago • 1 comment

When I try to load the model on multiple GPUs, I get the following error:

from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-base')
model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-base', device_map='auto')

error: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)

mohammad-yousuf avatar May 12 '24 21:05 mohammad-yousuf

Hi @mohammad-yousuf, you need to move the input data to the GPU device before passing it to the model:

import torch

device = torch.device('cuda')
encoded_input = encoded_input.to(device)
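The error means the model's weights (dispatched to the GPUs by `device_map='auto'`) and the input tensors (still on the CPU) end up on different devices during the forward pass. A minimal sketch of the fix, using a small stand-in `nn.Linear` module rather than the reranker, with a CPU fallback so it runs without a GPU:

```python
import torch
import torch.nn as nn

# Pick the GPU when available; fall back to CPU so this sketch runs anywhere.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = nn.Linear(4, 2).to(device)  # model weights live on `device`
x = torch.randn(3, 4)               # input tensor starts on the CPU
x = x.to(device)                    # move it before the forward pass

out = model(x)
# Input and weights now share a device, so no RuntimeError is raised.
assert out.device == next(model.parameters()).device
```

With the actual reranker, the same idea is `encoded_input = encoded_input.to(model.device)`; when the model is loaded with `device_map='auto'`, `model.device` typically resolves to the device holding the first weight shard, which is where the inputs need to be.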

staoxiao avatar May 13 '24 02:05 staoxiao