
Where is the constraint on FLOPs?

Open qtw1998 opened this issue 5 years ago • 10 comments

EfficientNet can be constrained by a FLOPs target, you know.

qtw1998 avatar Dec 30 '19 06:12 qtw1998

[screenshot of error traceback] Maybe I need pretrained weights?

qtw1998 avatar Dec 30 '19 08:12 qtw1998

@qtw1998 No, you need to create a 'weights' folder, or add `os.makedirs(directory)` so the code creates it for you. You do not need to pretrain the model first. Sorry, I have not added this to the code yet. By the way, how did you fix the out-of-memory error on CentOS? Please let me know.
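As a sketch of that fix (the `./weights` path here is an assumption; use whatever directory train.py actually saves checkpoints to):

```python
import os

# Hypothetical checkpoint directory; adjust to match train.py's save path.
save_dir = "./weights"

# exist_ok=True makes this safe to call on every run,
# so the script no longer crashes when the folder already exists.
os.makedirs(save_dir, exist_ok=True)
```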

toandaominh1997 avatar Dec 30 '19 08:12 toandaominh1997

> @qtw1998 No, you need to create a 'weights' folder, or add `os.makedirs(directory)` so the code creates it for you. You do not need to pretrain the model first. Sorry, I have not added this to the code yet. By the way, how did you fix the out-of-memory error on CentOS? Please let me know.

[screenshot of GPU memory usage]

I'm using 8 x 2080 Ti to train the model now. Maybe there are some bugs in the code, like memory leaks?

qtw1998 avatar Dec 30 '19 09:12 qtw1998

@qtw1998 I see it only takes ~3000 MB per GPU; can you try increasing the batch_size?
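As a rough back-of-the-envelope check, assuming memory grows roughly linearly with batch size (only the ~3000 MB reading comes from the thread; the 11 GB 2080 Ti capacity and the current per-GPU batch size of 32 are assumptions for illustration):

```python
# Observed usage from the screenshot above.
USED_MB = 3000
# Assumed values: 2080 Ti capacity and current per-GPU batch size.
GPU_TOTAL_MB = 11000
CURRENT_BATCH = 32

# Linear estimate; ignores fixed model/optimizer overhead,
# so treat the result as an optimistic upper bound.
mb_per_sample = USED_MB / CURRENT_BATCH
max_batch = int(GPU_TOTAL_MB // mb_per_sample)
print(max_batch)  # largest per-GPU batch size this estimate allows
```

In practice you would increase in small steps (+4 or +8) rather than jumping straight to the estimate, since activation memory does not scale perfectly linearly.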

toandaominh1997 avatar Dec 30 '19 09:12 toandaominh1997

@qtw1998 Can you train on COCO2017 and let me know the mAP? I haven't had time to train on this dataset yet.

toandaominh1997 avatar Dec 30 '19 09:12 toandaominh1997

> @qtw1998 Can you train on COCO2017 and let me know the mAP? I haven't had time to train on this dataset yet.

Would you please upload the bash scripts (the COCO2014 ones)? Thanks! Also, the first time I ran train.py after the COCO2014 download script finished, there were some bugs, because as you can see, cocov2.py only handles COCO2017.

qtw1998 avatar Dec 30 '19 10:12 qtw1998

> @qtw1998 I see it only takes ~3000 MB per GPU; can you try increasing the batch_size?

No, and I don't know why. Even if I use one fewer, I hit the same problem I posted yesterday: `RuntimeError: CUDA error: out of memory`.

qtw1998 avatar Dec 30 '19 10:12 qtw1998

I updated the shell script for downloading COCO2017 at https://github.com/toandaominh1997/EfficientDet.Pytorch/blob/master/datasets/scripts/COCO2017.sh. Please check out the last commit and follow the README for training on COCO2017. Thanks @qtw1998

toandaominh1997 avatar Dec 30 '19 10:12 toandaominh1997

Can you increase the batch size (+4 or +8, ...) to test how the model's GPU memory usage scales? On Mon, Dec 30, 2019 at 5:04 PM Qiao Tianwei wrote:

> @qtw1998 I see it only takes ~3000 MB per GPU; can you try increasing the batch_size?

> No, and I don't know why. Even if I use one fewer, I hit the same problem I posted yesterday: `RuntimeError: CUDA error: out of memory`.


toandaominh1997 avatar Dec 30 '19 10:12 toandaominh1997

> I updated the shell script for downloading COCO2017 at https://github.com/toandaominh1997/EfficientDet.Pytorch/blob/master/datasets/scripts/COCO2017.sh. Please check out the last commit and follow the README for training on COCO2017. Thanks @qtw1998

Thanks bro! Do you use any other instant messaging or social media? We could have a better conversation if you'd like. You can send it to my email: [email protected]. Then I can add you on Facebook, IG or even WeChat, etc., and we can have a comprehensive talk, or maybe I'll need your help! Thanks again!

qtw1998 avatar Dec 30 '19 10:12 qtw1998