Xuan Lin
@Rainy000 Try setting `clip_gradient` when initializing the optimizer in train.py
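In MXNet, `clip_gradient` is a standard optimizer option that clips each gradient element into `[-c, c]` before the update. A minimal NumPy sketch of the effect (the function name here is just for illustration):

```python
import numpy as np

def clip_gradient(grad, clip=5.0):
    # Element-wise clipping into [-clip, clip], mirroring what the
    # optimizer's clip_gradient option does to each gradient array.
    return np.clip(grad, -clip, clip)

grad = np.array([-12.0, 0.3, 7.5])
clipped = clip_gradient(grad)  # large entries are saturated at +/-5
```

In real MXNet code the equivalent would be something like `mx.optimizer.create('sgd', clip_gradient=5.0, ...)` passed to the module.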
@Rainy000 Sorry, I mistook deviation for derivative, so `clip_gradient` is not a solution. Did you follow the ratio the author mentioned in the paper? I think raising the importance...
@Rainy000 You can test the celebA dataset using your RNet model and compute the normalized error of the predicted landmarks, then choose the samples with large deviations as the training set of...
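A sketch of that hard-sample selection step. Normalizing the landmark error by the interocular distance is a common convention; whether the repo uses exactly this normalizer, and the threshold value below, are assumptions:

```python
import numpy as np

def normalized_landmark_error(pred, gt, left_eye=0, right_eye=1):
    # Mean point-to-point distance between predicted and ground-truth
    # landmarks, normalized by the interocular distance of the ground truth.
    per_point = np.linalg.norm(pred - gt, axis=1)          # (n_landmarks,)
    interocular = np.linalg.norm(gt[left_eye] - gt[right_eye])
    return per_point.mean() / interocular

def select_hard_samples(errors, threshold=0.05):
    # Indexes of samples whose error is large enough to recycle
    # into the next training set.
    return np.where(np.asarray(errors) > threshold)[0]
```

Run `normalized_landmark_error` over every celebA image, then feed the images returned by `select_hard_samples` back into training.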
@Rainy000 Exactly.
@hkdqliu That's right, but I transformed it. The scripts are located in prepare_data/wider_annotations
core/negativemining.py, line 25 and line 42, find the valid indexes for cls and bbox. In backward, only the gradients at valid indexes are assigned 1; the rest are set to 0.
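The idea is online hard example mining: keep the highest-loss samples and zero out the gradient everywhere else. A minimal sketch, assuming a top-70% keep ratio as in the MTCNN paper (the exact ratio used in that file is an assumption):

```python
import numpy as np

def hard_sample_mask(cls_loss, keep_ratio=0.7):
    # Return a 0/1 mask selecting the samples with the largest loss.
    # In backward, the gradient is multiplied by this mask, so only
    # the selected (valid) indexes contribute to the update.
    n_keep = int(len(cls_loss) * keep_ratio)
    keep = np.argsort(cls_loss)[::-1][:n_keep]   # indexes of largest losses
    mask = np.zeros_like(cls_loss)
    mask[keep] = 1.0
    return mask
```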
No, it's done in the epoch-end callback inside mod.fit()
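In MXNet, `Module.fit` takes an `epoch_end_callback` argument, and `mx.callback.do_checkpoint(prefix)` is the usual way to save params after each epoch. A framework-free sketch of the mechanism (the simplified callback and loop below are stand-ins, not the real MXNet API):

```python
def do_checkpoint(prefix):
    # Return an epoch-end callback that records a checkpoint name,
    # analogous to mx.callback.do_checkpoint saving prefix-%04d.params.
    saved = []
    def callback(epoch):
        saved.append(f"{prefix}-{epoch + 1:04d}.params")
    callback.saved = saved
    return callback

def fit(num_epoch, epoch_end_callback):
    # Skeleton of a training loop: the callback fires once per epoch.
    for epoch in range(num_epoch):
        # ... one epoch of forward/backward/update ...
        epoch_end_callback(epoch)

cb = do_checkpoint("pnet")
fit(num_epoch=16, epoch_end_callback=cb)
# cb.saved[-1] == "pnet-0016.params"
```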
@huynhthedang PNet is fully convolutional, so there's no restriction on the input size.
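Because PNet contains only conv and pooling layers, its output map simply scales with the input instead of requiring a fixed shape. A sketch using the usual output-size formula (the layer stack below is an assumed PNet-like configuration for illustration, not copied from the repo):

```python
def conv_out(size, kernel, stride=1, pad=0):
    # Spatial output size of one conv/pool layer.
    return (size + 2 * pad - kernel) // stride + 1

def pnet_output(h, w):
    # Assumed stack: 3x3 conv -> 2x2/2 pool -> 3x3 conv -> 3x3 conv.
    # Any input large enough to survive the stack is valid.
    for kernel, stride in [(3, 1), (2, 2), (3, 1), (3, 1)]:
        h, w = conv_out(h, kernel, stride), conv_out(w, kernel, stride)
    return h, w

# A 12x12 crop gives a 1x1 score map; a larger image just gives
# a larger map of sliding-window scores.
```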
I trained for 16 epochs, so pnet-0016.params is the final model. The code to initialize the params is in example/train.py