Duplicated memory across processes when using all_gather for the evaluation step
System Info
All accelerate versions
Information
- [ ] The official example scripts
- [X] My own modified scripts
Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [X] My own task or dataset (give details below)
Reproduction
Related issue https://github.com/huggingface/transformers/issues/15466 https://github.com/huggingface/transformers/pull/28769/files
Expected behavior
https://github.com/huggingface/accelerate/blob/55136b8dc4a1f5bf8a33f38f25b279debdabcc00/src/accelerate/utils/operations.py#L353
All of Accelerate's gather functions are restricted to all_gather. However, for evaluation it is also possible to gather only onto the main process. If we use all_gather for evaluation and move the result to CPU, it costs n times the memory (where n is the number of processes), even though we only need to collect the distributed tensors in one place to compute the metric.
What do you think about this?
https://github.com/facebookresearch/detectron2/blob/ebe8b45437f86395352ab13402ba45b75b4d1ddb/detectron2/utils/comm.py#L188
https://github.com/huggingface/accelerate/issues/2898
@SangbumChoi definitely open to trying out something more efficient! Best case scenario we have a flag to use all_gather instead, and default to this new method as part of the func. Would you like to take a stab at a PR?
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.