Semantic-Segmentation-Suite
Training small custom dataset
I created a small dataset of 1500 images with 2 classes. These images consist of a moving cloth with 5 different textures and very little background. I want to segment the cloth from the background, so I generated the label images.
Here is the problem: no matter which network I choose to train, after some time of training the loss starts to grow. See the attached figure. I also tried to overfit the data, but when taking the weights at minimal loss the results are not bad, though not good enough.
I want to ask more experienced people: do you think this behavior is normal? What suggestions do you have to improve my training? (I tried different learning rates, decay rates, and also the Adam optimizer, with no luck.)
Thank you.
It's normal behavior when the model is overfitting (https://www.dataquest.io/blog/learning-curves-machine-learning/).
If you split the dataset into training, validation, and test sets, you can apply the early stopping technique: https://page.mi.fu-berlin.de/prechelt/Biblio/stop_tricks1997.pdf
To avoid overfitting, you can increase the amount of data through data augmentation, decrease the learning rate, or try smaller models.
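For reference, here is a minimal early-stopping sketch in Python. It is not part of this repo's training script; `train_one_epoch`, `evaluate`, and `save_checkpoint` are hypothetical placeholders for your own loop, and the patience value is just an example.

```python
# Minimal early-stopping loop (sketch; the helper functions are hypothetical).
best_val_loss = float("inf")
patience = 10              # epochs to wait without improvement before stopping
epochs_without_gain = 0
num_epochs = 300           # example training budget

for epoch in range(num_epochs):
    train_one_epoch(model)                   # hypothetical: one pass over the training set
    val_loss = evaluate(model, val_set)      # hypothetical: loss on the held-out validation set

    if val_loss < best_val_loss:
        best_val_loss = val_loss
        epochs_without_gain = 0
        save_checkpoint(model)               # keep the weights at minimal validation loss
    else:
        epochs_without_gain += 1
        if epochs_without_gain >= patience:
            print("Early stop at epoch %d: no improvement for %d epochs" % (epoch, patience))
            break
```

This way you keep the checkpoint taken at the minimum of the validation curve instead of whatever the last epoch produced.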
Thank you, I will check the links and try your suggestions.
In your case, for data augmentation try: --h_flip True --brightness 0.2. Rotation (--rotation 45) can also be used, but it's tricky sometimes.
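If it helps to see what those flags amount to, below is a rough sketch of equivalent augmentations using OpenCV. This is illustrative only, not the repo's actual implementation, and the function name and defaults are made up:

```python
import random
import cv2
import numpy as np

def augment(image, label, brightness=0.2, max_rotation=45):
    """Illustrative sketch of flip / brightness / rotation augmentation."""
    # Horizontal flip (--h_flip): image and label must be flipped together
    if random.random() < 0.5:
        image = cv2.flip(image, 1)
        label = cv2.flip(label, 1)

    # Brightness jitter (--brightness 0.2): scale pixels by a random factor in [0.8, 1.2]
    factor = 1.0 + random.uniform(-brightness, brightness)
    image = np.clip(image.astype(np.float32) * factor, 0, 255).astype(np.uint8)

    # Rotation (--rotation 45): rotate image and label by the same random angle;
    # use nearest-neighbor for the label so class ids are not interpolated
    angle = random.uniform(-max_rotation, max_rotation)
    h, w = image.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
    image = cv2.warpAffine(image, M, (w, h))
    label = cv2.warpAffine(label, M, (w, h), flags=cv2.INTER_NEAREST)
    return image, label
```

The rotation step is the "tricky" part: warpAffine fills the corners it introduces with zeros by default, so those pixels end up labeled as whatever class id 0 maps to, which can bias training in a 2-class problem.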
Thank you for the suggestion. I tried --h_flip and --brightness 0.2, but it didn't help much; it just delayed the increase in loss by a few epochs. --rotation didn't work in my case. I guess the best options are to increase the dataset with new data and use a smaller model.
@Jordi Hey, I have the same question as you. My dataset only has 966 photos with 2 classes. Have you solved the problem or found a smaller model that works?
As other users said, the network was overfitting. My conclusion was that I had too little data for such large network models.
Same issue here; I have been trying different setups with no luck. I started with a small dataset (187 images) with binary classes and the results seemed OK, then I increased the dataset (390 images) and ended up with an almost identical curve.
Hi, I just learned about semantic segmentation and I am new to this field. You seem to be experienced, so I wonder if it would be possible for me to learn from you?