
Slower Training Than Ultralytics

Open davidhuangal opened this issue 1 year ago • 2 comments

I have noticed that training is significantly slower with MMYOLO than with Ultralytics using the same parameters and environment, i.e., the same set of GPUs, the same batch size, AMP enabled in both, and distributed training in both.

By significantly, I mean in the range of 3x-4x. Has anyone else run into this issue or figured out how to fix it? I have already tried the cached mosaic augmentation, and even removed mosaic entirely, since the FAQ mentions it can be a bottleneck, but saw no significant increase in training speed.
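For reference, the cached variant of mosaic can be enabled through the train pipeline. A minimal sketch is below; `img_scale`, `max_cached_images`, and the surrounding transforms are placeholders and should match whatever base config is being extended:

```python
# Sketch of a train_pipeline fragment using MMYOLO's cached Mosaic transform.
img_scale = (640, 640)  # assumed input size

train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(
        type='Mosaic',
        img_scale=img_scale,
        use_cached=True,        # keep recently decoded images in an in-memory cache
        max_cached_images=40,   # cache size; larger values trade RAM for fewer decodes
        pad_val=114.0),
    dict(type='mmdet.PackDetInputs')
]
```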

davidhuangal avatar Jan 16 '24 18:01 davidhuangal

Yes, I am seeing the same problem.

lianxintao avatar Feb 05 '24 06:02 lianxintao

@davidhuangal I think it is a trade-off. The Ultralytics implementation is very tightly written; the code is hard to read, and if you want to modify it, good luck. The point is that it is not very modular or readable, and as a result it is not very customization-friendly. MMYOLO is much more customizable: I have used YOLOv5 as the RPN in Mask R-CNN from mmdet, and also swapped in a ResNet backbone. All of that is possible here, and we lose speed for that flexibility; it is a trade-off.
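As an illustration of that flexibility, swapping the backbone is a config-level change. A rough sketch, assuming a YOLOv5-S base config and torchvision ResNet-50 weights (the base config name and channel widths are assumptions and may need adjusting for your version):

```python
_base_ = './yolov5_s-v61_syncbn_8xb16-300e_coco.py'  # assumed base config

model = dict(
    backbone=dict(
        _delete_=True,                  # drop the YOLOv5 CSPDarknet settings from the base
        type='mmdet.ResNet',            # pull ResNet from the mmdet registry
        depth=50,
        out_indices=(1, 2, 3),          # feed C3, C4, C5 to the neck
        norm_cfg=dict(type='BN', requires_grad=True),
        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
    neck=dict(
        widen_factor=1.0,
        in_channels=[512, 1024, 2048],  # ResNet-50 stage output channels
        out_channels=[512, 1024, 2048]),
    bbox_head=dict(
        head_module=dict(
            widen_factor=1.0,
            in_channels=[512, 1024, 2048])))
```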

It could be improved, but the project seems abandoned: there have been no commits on the main branch for 9 months.

PushpakBhoge avatar May 08 '24 17:05 PushpakBhoge