Model Convergence Problem

Open khawar-islam opened this issue 2 years ago • 8 comments

I am training on a medium-scale dataset of 100,000 images. The learning rate and weight decay are the same as in your config, but it is still not working. Any opinion?

Regards, Khawar Islam

khawar-islam avatar Jul 17 '21 12:07 khawar-islam

What do you mean by not working: does it completely fail to converge, or is the performance just poor? It would be helpful if you could provide more information.
Two suggestions:

  • It is better to do a data diagnosis first.
  • Train a standard ResNet on your dataset to see whether it is a data issue or a model issue (a sketch is shown below).
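
A minimal ResNet baseline along these lines could serve as that sanity check (this is only a sketch; the dataset path, ImageFolder layout, and hyperparameters are placeholders, not values from this repo's config):

```python
# Minimal ResNet-50 baseline to separate data issues from model issues.
# Assumes an ImageFolder-style layout: data_dir/<class_name>/<image>.jpg (placeholder path).
import timm
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

data_dir = "/path/to/your/dataset"  # placeholder
device = "cuda" if torch.cuda.is_available() else "cpu"

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder(data_dir, transform=tfm)
loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=4)

model = timm.create_model("resnet50", pretrained=False,
                          num_classes=len(train_set.classes)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
criterion = torch.nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # a few epochs are enough to see whether the loss moves at all
    for images, targets in loader:
        images, targets = images.to(device), targets.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), targets)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last-batch loss {loss.item():.4f}")
```

If the ResNet loss decreases normally, the data pipeline is probably fine and the issue is on the model/recipe side.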

zizhaozhang avatar Jul 18 '21 18:07 zizhaozhang

I tried three vision transformers; two of them work on the same dataset and converge easily. Your NesT transformer completely fails to converge from the beginning, and after 100 epochs there is no improvement in accuracy or loss.

When I trained another transformer on the same dataset, it worked fine.

khawar-islam avatar Jul 19 '21 00:07 khawar-islam

Did you train these methods from scratch, or did you fine-tune from their pre-trained checkpoints? That can matter. Our scripts currently only train from scratch, but fine-tuning from our pre-trained models is straightforward.
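
For instance, with the NesT port in timm, fine-tuning could start from something like the snippet below (the model name jx_nest_base and the availability of its pretrained weights are assumptions about your timm version, not something confirmed in this thread):

```python
# Hypothetical fine-tuning starting point using the NesT port in timm.
# Check timm.list_models("*nest*") for the exact model names in your timm version.
import timm
import torch

# num_classes=100 is a placeholder; timm replaces the classifier head when it
# differs from the pretrained checkpoint's head.
model = timm.create_model("jx_nest_base", pretrained=True, num_classes=100)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.05)
# ...then run a standard training loop on your own data.
```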

zizhaozhang avatar Jul 19 '21 06:07 zizhaozhang

I am training from scratch, not fine-tuning from any pre-trained checkpoint.

khawar-islam avatar Jul 19 '21 06:07 khawar-islam

Hi, in our experiments we do not observe convergence issues with our method. It would be great if you could provide more detailed training information so I can help, e.g. which other methods you trained, what the setup (scripts) was, and how many devices you used. Otherwise, it is hard to diagnose.

zizhaozhang avatar Jul 19 '21 17:07 zizhaozhang

The network is kind of "sensitive". I used AdamW with learning rate decay and found that training crashed when the learning rate was adjusted. Note that I used the PyTorch implementation in timm.

Euruson avatar Nov 17 '21 04:11 Euruson

From my training runs, this should be a rare occurrence. I would recommend using gradient clipping.
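
Something like this toy, self-contained sketch shows where the clipping call goes in a standard PyTorch loop (the model and hyperparameters are placeholders):

```python
# Gradient clipping with AdamW: clip the global gradient norm before each optimizer step.
import torch
import torch.nn as nn

model = nn.Linear(128, 10)  # stand-in for the transformer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4, weight_decay=0.05)

x = torch.randn(32, 128)          # dummy batch
y = torch.randint(0, 10, (32,))   # dummy targets

optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
# Clip the global gradient norm before stepping; max_norm=1.0 is a placeholder to tune.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
```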

Freder-chen avatar Jan 27 '22 02:01 Freder-chen

Hi, from my experience with this architecture, it is very sensitive to the warm-up epochs. When I used the timm implementation with PyTorch Lightning's warm-up schedule, it diverged, but when I followed their warm-up implementation it worked fine. The problem also happened to me once when I did not use any augmentation by mistake.
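
For reference, a linear warm-up followed by cosine decay with timm's scheduler looks roughly like this (epoch counts and learning rates are placeholders, not values from this repo's config):

```python
# Per-epoch cosine schedule with an explicit linear warm-up, using timm's scheduler.
import torch
import torch.nn as nn
from timm.scheduler import CosineLRScheduler

model = nn.Linear(128, 10)  # stand-in for NesT
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4, weight_decay=0.05)

scheduler = CosineLRScheduler(
    optimizer,
    t_initial=300,       # total training epochs (placeholder)
    lr_min=1e-5,
    warmup_t=20,         # warm-up epochs: the value the architecture is sensitive to
    warmup_lr_init=1e-6,
)

for epoch in range(300):
    # ... run one training epoch here ...
    scheduler.step(epoch + 1)  # advance the schedule once per epoch
```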

abdohelmy avatar May 19 '22 17:05 abdohelmy