beir
Support multi-node evaluation
You can try this PR using:

```shell
torchrun --nproc_per_node=2 examples/retrieval/evaluation/dense/evaluate_sbert_multi_gpu.py
```
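For an actual multi-node run, torchrun's standard rendezvous flags apply; the node count, hostname, and port below are placeholders, not values from this PR:

```shell
# Hypothetical two-node launch (8 GPUs per node = 16 total).
# Run this on node 0; the endpoint host/port are placeholders.
torchrun --nnodes=2 --node_rank=0 --nproc_per_node=8 \
  --rdzv_backend=c10d --rdzv_endpoint=node0.example.com:29500 \
  examples/retrieval/evaluation/dense/evaluate_sbert_multi_gpu.py
# On node 1, repeat with --node_rank=1 and the same rendezvous endpoint.
```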
Using the e5-large model, I got the following timings:
- MSMARCO (8.84M documents) took 1h03min to encode (on 16 GPUs) -> evaluation took 1h04min
- NQ (2.68M documents) took 22min to encode (on 16 GPUs) -> evaluation took 25min
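Multi-GPU encoding of this kind typically shards the corpus so that each rank encodes a disjoint slice before results are gathered. A minimal, GPU-free sketch of one such partitioning scheme (illustrative only; the exact split used by the script may differ):

```python
# Illustrative corpus sharding across ranks. In the real script, rank and
# world_size would come from torch.distributed; here they are simulated so
# the sketch runs without GPUs.
def shard_for_rank(doc_ids, rank, world_size):
    # Round-robin split keeps shard sizes within one document of each other.
    return doc_ids[rank::world_size]

doc_ids = list(range(10))  # stand-in for millions of document ids
world_size = 4

shards = [shard_for_rank(doc_ids, r, world_size) for r in range(world_size)]

# Shards are disjoint and together cover the whole corpus.
assert sorted(x for shard in shards for x in shard) == doc_ids
```

Each rank then encodes only its shard, which is why wall-clock encoding time drops roughly linearly with the number of GPUs.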
cc @thakur-nandan
Fixes https://github.com/beir-cellar/beir/issues/134