FCRN_pytorch
Not getting good results after training on my own dataset
Hello @dontLoveBugs. I have prepared my own dataset of indoor scenes from my environment and want to train the model on it. I am freezing all layers except the up-projection blocks, and the results are not good. Even when I trained on a dataset as small as 600 images I reached 82 percent accuracy, but the results were not good visually. I do not know the reason; maybe you can suggest something. The full set I want to train on is approximately 6k images. The weights pretrained on NYU actually perform better. My settings are:

batch_size = 32
learning_rate = 1.0e-3
momentum = 0.9
weight_decay = 0.0005
num_epochs = 70
optimizer = torch.optim.SGD(filter(lambda p: p.requires_grad, model.parameters()), lr=learning_rate, momentum=momentum, weight_decay=weight_decay)

and the learning rate is halved every 10 epochs.
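For concreteness, here is a minimal sketch of the setup described above: freezing everything except the up-projection blocks, SGD with the listed hyperparameters, and halving the learning rate every 10 epochs. The `TinyFCRN` class and its module names (`encoder`, `upproj`) are stand-ins for illustration, not the repo's actual model or attribute names:

```python
import torch.nn as nn
import torch.optim as optim

# Stand-in for the real FCRN network (encoder backbone + up-projection decoder);
# module names here are illustrative and will differ from the repo's actual attributes.
class TinyFCRN(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.upproj = nn.ConvTranspose2d(16, 1, 2, stride=2)

    def forward(self, x):
        return self.upproj(self.encoder(x))

model = TinyFCRN()

# Freeze everything except the up-projection (decoder) parameters.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("upproj")

learning_rate = 1.0e-3
momentum = 0.9
weight_decay = 0.0005
num_epochs = 70

optimizer = optim.SGD(
    (p for p in model.parameters() if p.requires_grad),
    lr=learning_rate, momentum=momentum, weight_decay=weight_decay,
)

# Halve the learning rate every 10 epochs, as described in the post.
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)

for epoch in range(num_epochs):
    # ... one training epoch over the ~6k-image indoor dataset goes here ...
    scheduler.step()
```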
[Image: validation depth prediction]
[Image: input RGB image]
Maybe your dataset is too small. If you want to train the model on a new indoor scene dataset, I think fine-tuning the NYU-pretrained model on your indoor scene dataset is a feasible approach. I don't know what your "accuracy" refers to: "rml", "rmse", or pixel accuracy? Besides, the depth range of your test image does not seem large, which may make the visualization look less distinct.
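As a rough illustration of that suggestion, the sketch below loads a pretrained checkpoint and fine-tunes the whole network with a reduced learning rate, then normalizes a predicted depth map per image before plotting so that a narrow depth range still spans the full colour map. The checkpoint filename and layout, the fine-tuning learning rate, and the dummy input are assumptions, and `TinyFCRN` is the stand-in model from the earlier sketch, not the repo's real class:

```python
import torch
import torch.optim as optim
import matplotlib.pyplot as plt

model = TinyFCRN()  # stand-in from the earlier sketch; use the repo's real model here

# "nyu_pretrained.pth" is a placeholder path for the NYU checkpoint; training scripts
# often wrap the weights, e.g. {"model": state_dict}, hence the fallback below.
state = torch.load("nyu_pretrained.pth", map_location="cpu")
model.load_state_dict(state["model"] if "model" in state else state)

# Fine-tune the whole network (nothing frozen) with a smaller learning rate, so the
# NYU features are adapted to the new scenes rather than training the decoder from scratch.
for param in model.parameters():
    param.requires_grad = True
optimizer = optim.SGD(model.parameters(), lr=1.0e-4,
                      momentum=0.9, weight_decay=0.0005)

# Visualization: normalize each predicted depth map to its own min/max so that a
# small depth range still uses the full colour map.
model.eval()
with torch.no_grad():
    pred = model(torch.randn(1, 3, 228, 304)).squeeze().cpu()  # dummy RGB input
pred = (pred - pred.min()) / (pred.max() - pred.min() + 1e-8)
plt.imshow(pred.numpy(), cmap="plasma")
plt.colorbar()
plt.savefig("pred_depth_vis.png")
```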