Mask_R_CNN_Keypoints

training problem

Open nebuladream opened this issue 7 years ago • 8 comments

I ran your inference phase successfully, but when I try to fine-tune your model on the AI Challenger data, something seems wrong: after a few epochs the keypoints disappear. In your main.py, training the "heads" layers may not include mask_class_loss_graph; could this be causing the issue? Can you explain how to train the net?
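One quick sanity check is to list which loss layers the compiled model actually contains and whether the keypoint/mask head is trainable. A rough sketch, assuming the Matterport-style layout where each loss is a named layer ending in "_loss" (the names in this fork may differ):

```python
def report_loss_and_trainable_layers(model):
    """Rough check: which loss layers exist and whether the keypoint/mask
    head layers are trainable. Assumes a Matterport-style MaskRCNN object
    with model.keras_model; layer names in this fork may differ."""
    keras_model = model.keras_model
    loss_names = [l.name for l in keras_model.layers if l.name.endswith("_loss")]
    print("Loss layers found:", loss_names)
    for l in keras_model.layers:
        if "mask" in l.name or "keypoint" in l.name:
            print(l.name, "trainable:", l.trainable)
```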

nebuladream avatar Jan 11 '18 10:01 nebuladream

@nebuladream Actually I train the network with the published code, using mini_mask. If you use utils.minimize_mask_2 to crop and resize the keypoints, they should not "disappear": when a point is missing because it is not in the picture, the code assigns (0, 0) = 1. You can check; the relevant code is:

```python
image, image_meta, gt_bbox, gt_mask = modellib.load_image_gt(
    dataset_val, config, image_id, use_mini_mask=True, augment=False)
buffer_mask[i] = utils.minimize_mask_2(
    bbox[i, :5].reshape([1, 5]), mask[i], config.MINI_MASK_SHAPE)

if m.sum() == 0:
    mini_mask[0, 0, i] = 1
```
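In rough terms, the idea is: crop each one-hot keypoint map to the instance box, carry the hot pixel into MINI_MASK_SHAPE, and mark a missing keypoint at (0, 0) so every target stays one-hot. A minimal sketch of that logic (illustrative only; the real utils.minimize_mask_2 may differ in details):

```python
import numpy as np

def minimize_keypoint_mask(bbox, kp_mask, mini_shape):
    """Sketch of a minimize_mask_2-style routine (illustrative, not the
    repo's exact code): crop each one-hot keypoint map to the instance box,
    map the hot pixel into mini_shape, and mark a missing keypoint at (0, 0).
    bbox: [y1, x1, y2, x2, ...]; kp_mask: [H, W, num_keypoints]."""
    y1, x1, y2, x2 = [int(v) for v in bbox[:4]]
    num_kp = kp_mask.shape[-1]
    mini = np.zeros(tuple(mini_shape) + (num_kp,), dtype=kp_mask.dtype)
    for i in range(num_kp):
        m = kp_mask[y1:y2, x1:x2, i]
        if m.sum() == 0:
            mini[0, 0, i] = 1  # keypoint not visible: keep the (0, 0) = 1 convention
            continue
        # Map the single hot pixel directly instead of resizing the map,
        # so it is not lost to interpolation.
        ys, xs = np.nonzero(m)
        y = min(int(ys[0] * mini_shape[0] / max(m.shape[0], 1)), mini_shape[0] - 1)
        x = min(int(xs[0] * mini_shape[1] / max(m.shape[1], 1)), mini_shape[1] - 1)
        mini[y, x, i] = 1
    return mini
```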

Actually, I don't know why the performance is not as good as in the paper.

RodrigoGantier avatar Jan 11 '18 11:01 RodrigoGantier

I trained the model from COCO pretrained weights and found that the mrcnn_mask_loss branch may not converge. The loss looks like this; do you know why?

[screenshot of the training loss curves]
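For reference, the Mask R-CNN paper trains the keypoint head with a softmax cross-entropy over the m x m spatial locations, one one-hot target per keypoint, so the loss typically starts near log(m*m) and looks much larger than the other branches. A minimal sketch of that formulation (illustrative; not necessarily what this repo computes):

```python
import tensorflow as tf

def keypoint_softmax_loss(target_idx, logits, visible):
    """Softmax cross-entropy over spatial locations, one softmax per keypoint.
    target_idx: [N, num_kp] int index of the hot pixel in the flattened m*m map.
    logits:     [N, num_kp, m*m] predicted scores.
    visible:    [N, num_kp] 1 for annotated keypoints, 0 otherwise.
    (Sketch of the paper's formulation; names are illustrative.)"""
    ce = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=target_idx, logits=logits)  # [N, num_kp]
    visible = tf.cast(visible, ce.dtype)
    # Average only over keypoints that are actually annotated.
    return tf.reduce_sum(ce * visible) / tf.maximum(tf.reduce_sum(visible), 1.0)
```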

nebuladream avatar Jan 16 '18 04:01 nebuladream

I've encountered the same problem. The keypoint mask branch is really hard to get to converge. Did we miss something important in the config parameters? [screenshot of the training loss curves]

minizon avatar Mar 16 '18 14:03 minizon

@minizon @nebuladream Did you finally fix this issue? My loss doesn't converge either. It confuses me a lot.

Superlee506 avatar Mar 20 '18 04:03 Superlee506

@minizon You can refer to my repository https://github.com/Superlee506/Mask_RCNN. I referred to the original Detectron project and modified the code with detailed comments. The loss converges quickly, but there is still much room for improvement.

Superlee506 avatar Mar 27 '18 21:03 Superlee506

@Superlee506 Thank you for sharing your code. I've realized that I made a mistake in the horizontal flip augmentation: I did not swap the keypoint left/right labels when I mirrored the persons. Since I removed this augmentation, both the training and validation losses converge, though there is still a larger gap to the theoretical value than for the other branches' losses. By the way, your space encoding is more efficient in GPU memory use.
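For anyone hitting the same thing, a minimal sketch of the flip fix, assuming keypoints are stored as [num_keypoints, (x, y, visibility)] and using hypothetical COCO-order left/right pairs:

```python
import numpy as np

# Example left/right pairs in COCO keypoint order (illustrative);
# adjust to whatever ordering the dataset actually uses.
FLIP_PAIRS = [(1, 2), (3, 4), (5, 6), (7, 8), (9, 10),
              (11, 12), (13, 14), (15, 16)]

def flip_keypoints_horizontally(keypoints, image_width, flip_pairs=FLIP_PAIRS):
    """Mirror keypoints for a horizontally flipped image AND swap the
    symmetric left/right labels. keypoints: [num_kp, (x, y, v)] array."""
    kps = keypoints.copy()
    vis = kps[:, 2] > 0
    kps[vis, 0] = image_width - 1 - kps[vis, 0]   # mirror x coordinate
    for a, b in flip_pairs:                        # swap left/right indices
        kps[[a, b]] = kps[[b, a]]
    return kps
```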

minizon avatar Mar 29 '18 01:03 minizon

@nebuladream Yes, I also noticed this problem. Can your model distinguish the symmetric left/right keypoints? My model often predicts these points together.

Superlee506 avatar Mar 29 '18 03:03 Superlee506

@Superlee506 , @minizon this can be an interesting read for this problem: http://blog.dlib.net/2018/01/correctly-mirroring-datasets.html

filipetrocadoferreira avatar Apr 02 '18 09:04 filipetrocadoferreira