
Inference on multiple GPUs

**Open** · fa1c4 opened this issue on Jul 26, 2024 · 0 comments

Hi, authors. I want to run inference for text embeddings on multiple GPUs. My code follows the demo:

```python
import torch
from llm2vec import LLM2Vec

l2v = LLM2Vec.from_pretrained(
    "McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp",
    peft_model_name_or_path="McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp-supervised",
    device_map="cuda" if torch.cuda.is_available() else "cpu",
    torch_dtype=torch.bfloat16,
)

instruction = (
    "Given a web search query, retrieve relevant passages that answer the query:"
)
queries = [
    [instruction, "how much protein should a female eat"],
    [instruction, "summit define"],
]
q_reps = l2v.encode(queries)

documents = [
    "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.",
]
d_reps = l2v.encode(documents)

q_reps_norm = torch.nn.functional.normalize(q_reps, p=2, dim=1)
d_reps_norm = torch.nn.functional.normalize(d_reps, p=2, dim=1)
cos_sim = torch.mm(q_reps_norm, d_reps_norm.transpose(0, 1))

print(cos_sim)
```

I run `CUDA_VISIBLE_DEVICES=0,1,2,3 python inference.py`, but inference still uses only one GPU. How can I run the inference script on multiple GPUs?
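
One likely route, sketched below under the assumption that `LLM2Vec.from_pretrained` forwards `device_map` to the underlying `transformers` loader (the demo above already passes `device_map`, which suggests it does): `device_map="cuda"` places the entire model on a single device, whereas `device_map="auto"` lets `accelerate` shard the layers across all visible GPUs. This is a sketch of that idea, not a confirmed fix:

```python
# Minimal sketch, assuming device_map is forwarded to transformers/accelerate:
# "auto" shards the model's layers across every GPU that CUDA_VISIBLE_DEVICES
# exposes, instead of loading the whole model onto one device.
import torch
from llm2vec import LLM2Vec

l2v = LLM2Vec.from_pretrained(
    "McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp",
    peft_model_name_or_path="McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp-supervised",
    device_map="auto",  # let accelerate distribute layers over GPUs 0-3
    torch_dtype=torch.bfloat16,
)

# Encoding is then called exactly as in the demo.
d_reps = l2v.encode(["example document to embed"])
```

Note that `device_map="auto"` gives model parallelism, which mainly helps when the model does not fit on one GPU; it does not by itself speed up encoding. If the goal is throughput over a large corpus, an alternative is plain data parallelism: launch one process per GPU, each pinned with its own `CUDA_VISIBLE_DEVICES`, and have each process encode its own shard of `documents`.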
