
Tensor parallelism for multi-GPU support

Open • SalomonKisters opened this issue 1 year ago • 3 comments

Feature request

Being able to split a model across multiple GPUs, as the vLLM/Aphrodite Engine does for LLMs.
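For reference, this is roughly what requesting tensor parallelism looks like in vLLM (a sketch; the model name is only an example):

```python
from vllm import LLM

# Tensor parallelism: each layer's weights are sharded across the GPUs,
# so a model too large for one card can still be served.
llm = LLM(
    model="meta-llama/Llama-2-13b-hf",  # example model; any supported HF model works
    tensor_parallel_size=2,             # shard across 2 GPUs
)

outputs = llm.generate(["Tensor parallelism splits each layer across GPUs."])
print(outputs[0].outputs[0].text)
```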

Motivation

It would be extremely helpful to be able to split larger models across multiple GPUs. Also, without TP, the model consumes a lot of VRAM on one GPU while the other stays free, which makes it impossible to run a tensor-parallel workload from another program at the same time without giving up just as much VRAM on the otherwise unused GPU.

Your contribution

Communicating the feature.

SalomonKisters • Apr 29, 2024, 21:04

You typically do data-parallel-style inference with sentence-transformers. TP is used when one GPU can't handle the desired batch size, or can't fit the model at all. Unless there are compelling benchmarks for bert-base, there is no need for tensor parallelism.
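For example, a minimal data-parallel sketch using sentence-transformers' multi-process pool (assuming two GPUs, `cuda:0` and `cuda:1`; the model name is only an example):

```python
from sentence_transformers import SentenceTransformer

if __name__ == "__main__":  # required because the pool spawns worker processes
    model = SentenceTransformer("BAAI/bge-base-en-v1.5")  # example model

    # Data parallelism: a full copy of the model runs on each GPU,
    # and the input sentences are split between the workers.
    pool = model.start_multi_process_pool(target_devices=["cuda:0", "cuda:1"])

    sentences = ["some text to embed"] * 10_000
    embeddings = model.encode_multi_process(sentences, pool, batch_size=64)
    print(embeddings.shape)  # (10000, 768) for this model

    model.stop_multi_process_pool(pool)
```

Since each worker holds the whole model, this only helps throughput; it does not reduce per-GPU memory the way TP would.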

michaelfeil • May 14, 2024, 16:05