
OOM Error when evaluating on ms-marco

Open Arist12 opened this issue 9 months ago • 2 comments

The following is my code that tries to run one of the models on the ms-marco retrieval task:

from mteb import MTEB
from sentence_transformers import SentenceTransformer

model_name = 'intfloat/e5-base'
model = SentenceTransformer(model_name)
evaluation = MTEB(tasks=["MSMARCO"])
results = evaluation.run(model, output_folder=f"results/{model_name}", batch_size=16)

I followed the evaluation pipeline in the README exactly and additionally set the batch_size to avoid a potential OOM error, but I still encounter one.

Batches: 100%|██████████| 31434/31434 [05:18<00:00, 98.73it/s] 
Batches: 100%|██████████| 3125/3125 [03:27<00:00, 15.07it/s]
Error while evaluating MSMARCO: CUDA out of memory. Tried to allocate 93.68 GiB. ...

Arist12 avatar May 15 '24 07:05 Arist12

Hello,

I don't have many solutions here; the dataset is quite big, I think:

  • Either use a smaller batch size
  • Or try specifying the languages you'd like to evaluate on. There's currently an issue related to this language selection, but we'll fix it soon.

imenelydiaker avatar May 15 '24 10:05 imenelydiaker

A solution might be to ensure that the embeddings are offloaded to the CPU before moving on to the next batch:

from mteb import MTEB
from sentence_transformers import SentenceTransformer
import torch

class SentenceTransformerWithCPUOffloading(SentenceTransformer):
    def encode(self, sentences, batch_size=16, **kwargs):
        emb = []
        # split sentences into batches of the desired size
        for i in range(0, len(sentences), batch_size):
            sents = sentences[i:i + batch_size]
            res = super().encode(sents, **kwargs)
            # offload each batch of embeddings to the CPU before the next one
            emb.append(res.cpu().detach())
        # concatenate the per-batch embeddings into a single tensor
        return torch.cat(emb)
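The batch-then-collect pattern can be sanity-checked in pure Python with a toy encoder (the helper and encoder names below are illustrative, not part of mteb or sentence-transformers):

```python
def encode_in_batches(encode_fn, sentences, batch_size=16):
    """Split sentences into fixed-size batches, encode each, and collect.

    In the GPU version, each batch result would additionally be moved
    off the GPU with res.cpu().detach() before being collected.
    """
    emb = []
    for i in range(0, len(sentences), batch_size):
        emb.extend(encode_fn(sentences[i:i + batch_size]))
    return emb

# toy encoder: "embeds" each sentence as its character count
toy_encode = lambda batch: [len(s) for s in batch]
print(encode_in_batches(toy_encode, ["a", "bb", "ccc"], batch_size=2))  # [1, 2, 3]
```

The key point is that only one batch lives on the GPU at a time; everything collected so far sits in host memory.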

Another solution might be to use the convert_to_numpy argument:

import functools

from mteb import MTEB
from sentence_transformers import SentenceTransformer

model_name = 'intfloat/e5-base'
model = SentenceTransformer(model_name)

model.encode = functools.partial(model.encode, convert_to_numpy=True)
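As a sanity check of what functools.partial does here, with a toy encode standing in for the real method (the names are illustrative, not the sentence-transformers API):

```python
import functools

# toy stand-in for SentenceTransformer.encode (illustrative only)
def encode(sentences, convert_to_numpy=False):
    kind = "numpy" if convert_to_numpy else "tensor"
    return [(s, kind) for s in sentences]

# pin convert_to_numpy=True for every future call, as in the snippet above
encode = functools.partial(encode, convert_to_numpy=True)

print(encode(["hello"]))  # [('hello', 'numpy')]
```

Every caller (including MTEB's evaluator) now gets numpy output by default, without that caller having to pass the flag itself.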

KennethEnevoldsen avatar May 15 '24 11:05 KennethEnevoldsen

import functools

from mteb import MTEB
from sentence_transformers import SentenceTransformer

model_name = 'intfloat/e5-base'
model = SentenceTransformer(model_name)

model.encode = functools.partial(model.encode, convert_to_numpy=True)

Thank you for this advice.

I find that convert_to_numpy defaults to True while convert_to_tensor defaults to False, and when convert_to_tensor is set to True, convert_to_numpy is forced to False.

Therefore, it seems explicitly passing convert_to_tensor=False will work as desired 🥰

Arist12 avatar May 20 '24 08:05 Arist12