RelTR

Model not getting trained on single GPU

Open aryanmangal769 opened this issue 2 years ago • 16 comments

When I try to train on a single GPU, the error keeps increasing and I cannot see any good results even by the 38th epoch.

train_class_error starts at 97.88, and from the 19th to the 37th epoch it is consistently 100. Can you help debug this?

Please let me know if you need any more information.

aryanmangal769 avatar Aug 28 '23 19:08 aryanmangal769

We train the model for 150 epochs; the 38th epoch might still be in warm-up. Maybe you can try loading some pretrained weights to accelerate the training?
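For reference, a minimal sketch of warm-starting from a checkpoint, assuming a DETR-style checkpoint that stores its weights under a 'model' key (the path and helper name here are placeholders, not part of RelTR):

```python
import torch
from torch import nn

def load_pretrained(model: nn.Module, ckpt_path: str) -> None:
    # Load on CPU first to avoid GPU memory spikes.
    checkpoint = torch.load(ckpt_path, map_location='cpu')
    # DETR-style checkpoints nest the weights under 'model'; fall back to the raw dict.
    state_dict = checkpoint.get('model', checkpoint)
    # Drop tensors whose shapes do not match the current model
    # (e.g. heads trained for a different number of classes).
    model_state = model.state_dict()
    filtered = {k: v for k, v in state_dict.items()
                if k in model_state and v.shape == model_state[k].shape}
    missing, unexpected = model.load_state_dict(filtered, strict=False)
    print(f'loaded {len(filtered)}/{len(state_dict)} tensors from {ckpt_path}')
    print('still missing:', missing)
```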

yrcong avatar Oct 20 '23 05:10 yrcong

@aryanmangal769 bro, how did you train the model on one GPU?

qqxqqbot avatar Apr 22 '24 11:04 qqxqqbot

Answering my own question: I added os.environ['MASTER_PORT'] = '8889' in main.py.
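For anyone hitting the same point, this is roughly what that change looks like (a sketch; the port value is arbitrary, and it must be set before the distributed process group is initialized in main.py):

```python
import os

# torch.distributed uses MASTER_PORT for rendezvous with the env:// init method;
# any free port works, 8889 is arbitrary.
os.environ['MASTER_PORT'] = '8889'
# MASTER_ADDR is also required for env:// init if the launcher does not set it.
os.environ.setdefault('MASTER_ADDR', 'localhost')
```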

qqxqqbot avatar Apr 22 '24 11:04 qqxqqbot

It is not related to the port. Please set --nproc_per_node=1.
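For reference, a single-GPU launch would then look roughly like this (assuming RelTR keeps the DETR-style launcher; add your own dataset and output arguments):

```bash
python -m torch.distributed.launch --nproc_per_node=1 --use_env main.py
```

On newer PyTorch versions, torch.distributed.launch is deprecated and `torchrun --nproc_per_node=1 main.py` is the equivalent invocation.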

yrcong avatar Apr 22 '24 18:04 yrcong

I trained for 70 epochs but the results are still bad, including the errors and the loss. The loss stays around 33-34; is that normal or has something gone wrong? [screenshot of training log]

AlphaGoooo avatar Jul 12 '24 08:07 AlphaGoooo

It is not related to the port. Please set --nproc_per_node=1.

I set --nproc_per_node=1, but I am still getting the error torch.distributed.elastic.multiprocessing.errors.ChildFailedError. How can I resolve this issue? Thanks for your reply.

wuzhiwei2001 avatar Sep 04 '24 09:09 wuzhiwei2001

It is not related to the port. Please set --nproc_per_node=1.

I set --nproc_per_node=1, but I am still getting the error torch.distributed.elastic.multiprocessing.errors.ChildFailedError. How can I resolve this issue? Thanks for your reply.

Maybe you should update the torch version; torch 1.6 + CUDA 10.1 doesn't support the latest graphics cards.
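A quick way to check whether the installed build matches your card (if CUDA shows as unavailable, or PyTorch warns that the GPU's compute capability is unsupported, a newer torch/CUDA pairing is needed):

```python
import torch

# Report the installed PyTorch build and the visible GPU.
print('torch version:      ', torch.__version__)
print('built against CUDA: ', torch.version.cuda)
print('CUDA available:     ', torch.cuda.is_available())
if torch.cuda.is_available():
    print('GPU:                ', torch.cuda.get_device_name(0))
    print('compute capability: ', torch.cuda.get_device_capability(0))
```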

A11en4z avatar Sep 05 '24 01:09 A11en4z

Maybe you should update the torch version; torch 1.6 + CUDA 10.1 doesn't support the latest graphics cards.

Thank you very much for your reply! I can run it now!

wuzhiwei2001 avatar Sep 05 '24 08:09 wuzhiwei2001

I trained for 70 epochs but the results are still bad, including the errors and the loss. The loss stays around 33-34; is that normal or has something gone wrong? [screenshot of training log]

Have you solved this problem yet? I have trained for 124 epochs on a single GPU, but the results are still bad. I'm wondering if it's because the batch size is too small. [screenshot of training log]

A11en4z avatar Sep 06 '24 08:09 A11en4z