BLINK
Biencoder with GPU RuntimeError: Expected object of device type cuda but got device type cpu for argument #3 'index' in call to _th_index_select
Hi, thanks for your code! When I set no_cuda to true in biencoder_wiki_large.json and then run python blink/run_benchmark.py, it returns the error below. Is there anything I missed? Best regards, A.
Traceback (most recent call last):
  File "blink/run_benchmark.py", line 81, in <module>
    ) = main_dense.run(args, logger, *models)
  File "/home/username/BLINK/blink/main_dense.py", line 429, in run
    biencoder, dataloader, candidate_encoding, top_k, faiss_indexer
  File "/home/username/BLINK/blink/main_dense.py", line 251, in _run_biencoder
    context_input, None, cand_encs=candidate_encoding  # .to(device)
  File "/home/username/BLINK/blink/biencoder/biencoder.py", line 160, in score_candidate
    token_idx_ctxt, segment_idx_ctxt, mask_ctxt, None, None, None
  File "/home/username/anaconda3/envs/blink37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/username/BLINK/blink/biencoder/biencoder.py", line 63, in forward
    token_idx_ctxt, segment_idx_ctxt, mask_ctxt
  File "/home/username/anaconda3/envs/blink37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/username/BLINK/blink/common/ranker_base.py", line 30, in forward
    token_ids, segment_ids, attention_mask
  File "/home/username/anaconda3/envs/blink37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/username/anaconda3/envs/blink37/lib/python3.7/site-packages/pytorch_transformers/modeling_bert.py", line 707, in forward
    embedding_output = self.embeddings(input_ids, position_ids=position_ids, token_type_ids=token_type_ids)
  File "/home/username/anaconda3/envs/blink37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/username/anaconda3/envs/blink37/lib/python3.7/site-packages/pytorch_transformers/modeling_bert.py", line 251, in forward
    words_embeddings = self.word_embeddings(input_ids)
  File "/home/username/anaconda3/envs/blink37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/username/anaconda3/envs/blink37/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 126, in forward
    self.norm_type, self.scale_grad_by_freq, self.sparse)
  File "/home/username/anaconda3/envs/blink37/lib/python3.7/site-packages/torch/nn/functional.py", line 1814, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected object of device type cuda but got device type cpu for argument #3 'index' in call to _th_index_select
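For context, the error itself is a generic PyTorch device mismatch: the bottom frame shows an embedding layer whose weights live on the GPU being fed an input_ids tensor that is still on the CPU. A minimal sketch, independent of BLINK, that reproduces the situation and the usual fix:

```python
import torch
import torch.nn as nn

emb = nn.Embedding(10, 4)          # embedding table, initially on CPU
ids = torch.tensor([1, 2, 3])      # index tensor, on CPU

if torch.cuda.is_available():
    emb = emb.cuda()
    # emb(ids) here would raise the RuntimeError from the traceback,
    # because the weights are on cuda and the indices are on cpu.
    # Fix: move the inputs to whatever device the model lives on.
    ids = ids.to(next(emb.parameters()).device)

out = emb(ids)
print(out.shape)  # torch.Size([3, 4])
```

So whichever tensor in the BLINK pipeline stays on the CPU (here the token indices, or the candidate encodings upstream) must be moved to the model's device before the forward pass.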
Originally posted by @acadTags in https://github.com/facebookresearch/BLINK/issues/83#issuecomment-1043093241
I have the same problem. Have you found the solution?
Not solved yet. Maybe inference with the biencoder was just designed to run on the CPU; with the pre-computed entity embeddings it took around an hour.
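As a workaround while running on GPU, one pattern is to normalize all inputs onto the model's device just before scoring. This is a hedged sketch, not the fix from the BLINK repository; the helper name to_model_device is hypothetical:

```python
import torch

def to_model_device(model: torch.nn.Module, *tensors):
    """Move every tensor to the device the model's parameters live on.

    Useful when some inputs (e.g. pre-computed candidate encodings loaded
    from disk) are still on the CPU while the model has been moved to CUDA.
    """
    device = next(model.parameters()).device
    return tuple(t.to(device) for t in tensors)

# Example with a toy model on CPU (the same call works if it were on CUDA):
model = torch.nn.Linear(8, 2)
context_input = torch.zeros(4, 8)
candidate_encoding = torch.ones(100, 8)
context_input, candidate_encoding = to_model_device(
    model, context_input, candidate_encoding
)
```

In the traceback above, the commented-out .to(device) next to cand_encs=candidate_encoding in main_dense.py suggests this is exactly the kind of move that was left out.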
I found the answer in this pull request.