Model not getting trained on single GPU
When I try to train on a single GPU, the error keeps increasing and I cannot see any good results even at the 38th epoch.
train_class_error starts at 97.88, and from the 19th to the 37th epoch it is consistently 100. Can you help debug this?
Please let me know if you need any more information.
We train the model for 150 epochs; the 38th epoch may still be warm-up. Maybe you can try loading some pretrained weights to accelerate training?
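If it helps, loading pretrained weights usually looks something like the sketch below. This is not this repo's exact API: TinyNet just stands in for whatever model main.py builds, and the checkpoint path and the {'model': state_dict} layout are assumptions, so adjust the names to your setup.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the model that main.py builds.
class TinyNet(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.backbone = nn.Linear(16, 16)
        self.class_head = nn.Linear(16, num_classes)

    def forward(self, x):
        return self.class_head(self.backbone(x))

# Pretend this file is a released checkpoint (here we just create one on the fly).
torch.save({'model': TinyNet(num_classes=91).state_dict()}, 'pretrained.pth')

model = TinyNet(num_classes=10)          # fine-tuning with a different number of classes
checkpoint = torch.load('pretrained.pth', map_location='cpu')

# Copy only the weights whose names and shapes match, so a mismatched
# classification head stays randomly initialised instead of raising an error.
model_dict = model.state_dict()
filtered = {k: v for k, v in checkpoint['model'].items()
            if k in model_dict and v.shape == model_dict[k].shape}
missing, unexpected = model.load_state_dict(filtered, strict=False)
print('loaded', len(filtered), 'tensors; missing:', missing, 'unexpected:', unexpected)
```

With a matching backbone loaded, the class error usually starts dropping much earlier than when training from scratch.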
@aryanmangal769 How do you train the model on one GPU?
@me I added os.environ['MASTER_PORT'] = '8889' in main.py.
It is not related to the port. Set --nproc_per_node=1, please.
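For reference, the usual single-GPU launch keeps the launcher and just passes --nproc_per_node=1, e.g. python -m torch.distributed.launch --nproc_per_node=1 main.py (or torchrun --nproc_per_node=1 main.py on newer PyTorch). If you instead run main.py directly without a launcher, a sketch like the one below, placed before the distributed setup, provides the variables the env:// init method reads when torch.distributed.init_process_group is called; the values are assumptions, and any free port works for MASTER_PORT.

```python
import os

# Minimal sketch (values are assumptions) for running one process with no launcher.
os.environ.setdefault('MASTER_ADDR', '127.0.0.1')  # single machine, so localhost
os.environ.setdefault('MASTER_PORT', '8889')       # any free port
os.environ.setdefault('RANK', '0')                 # only one process in total
os.environ.setdefault('WORLD_SIZE', '1')
os.environ.setdefault('LOCAL_RANK', '0')           # many training scripts read this too
```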
I trained for 70 epochs but the results are still bad, including the errors and the loss. The loss stays around 33 or 34; is that normal, or has something gone wrong?
I set --nproc_per_node=1, but I am still getting the error torch.distributed.elastic.multiprocessing.errors.ChildFailedError. How can I resolve this issue? Thanks for your reply.
Maybe you should update the torch version; torch 1.6 + CUDA 10.1 doesn't support the latest graphics cards.
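If you want to confirm the mismatch before reinstalling, something like this prints what your current build supports. These are standard PyTorch introspection calls, nothing specific to this repo (get_arch_list needs a reasonably recent torch):

```python
import torch

print('torch version :', torch.__version__)
print('built for CUDA:', torch.version.cuda)            # None means a CPU-only build
print('CUDA available:', torch.cuda.is_available())
if torch.cuda.is_available():
    print('device        :', torch.cuda.get_device_name(0))
    print('capability    :', torch.cuda.get_device_capability(0))
    # Architectures compiled into this wheel; if your GPU's capability is not
    # covered here, kernels fail to launch and the launcher can surface that
    # as a ChildFailedError.
    print('built for arch:', torch.cuda.get_arch_list())
```

Installing a torch build whose CUDA version matches your driver and covers your GPU's architecture is the usual fix.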
Thank you very much for your reply! I can run it now!
Have you solved this problem yet? I have trained for 124 epochs using a single GPU, but the results are still bad. I'm wondering if it's because the batch size is too small.
