Comments of mi1ebsco (2 results)
Try lowering the batch size. Setting batch_size_gpl=16 might work, though training will take longer to run. The reason is the amount of GPU RAM consumed by each batch. 32...
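As a sketch of the idea (the `train_fn` and `fake_train` below are hypothetical stand-ins for the real GPL training call, and `MemoryError` stands in for a CUDA out-of-memory error), halving the batch size until a run succeeds can be automated:

```python
def run_with_fallback(train_fn, start_batch_size=32, min_batch_size=4):
    """Retry training with a halved batch size after an out-of-memory error."""
    batch_size = start_batch_size
    while batch_size >= min_batch_size:
        try:
            return train_fn(batch_size)
        except MemoryError:  # stand-in for torch.cuda.OutOfMemoryError
            batch_size //= 2  # halve and retry
    raise RuntimeError("Batch size fell below the minimum; training aborted.")

# Dummy trainer: pretend any batch larger than 16 exhausts GPU RAM.
def fake_train(batch_size):
    if batch_size > 16:
        raise MemoryError
    return batch_size

print(run_with_fallback(fake_train))  # falls back from 32 to 16
```

In practice you would pass the real training entry point (with batch_size_gpl wired to the retried value) instead of `fake_train`, and catch the framework's actual OOM exception.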
Cross-encoder and sentence-transformer models do not officially support multi-GPU training, though there is a (very old) pull request experimenting with this feature: https://github.com/UKPLab/sentence-transformers/pull/1215