
pairwise_dist drops to 0, loss is near the margin and can't go down

Open xiaomingdaren123 opened this issue 5 years ago • 9 comments

Hi @omoindrot, I have run into a problem. After training for a while, pairwise_dist drops to 0, the loss settles near the margin, and it can't go down. Visualizing the training set shows that all the embeddings end up clustered together, and I don't know what caused it. The learning rate is 0.0001, the network is VGG16, and the output dimension is 128. Because the dataset is small, I use data augmentation (random crop and horizontal flip); the problem does not happen when I train without augmentation. I hope you can reply, thanks!

xiaomingdaren123 avatar Mar 16 '19 01:03 xiaomingdaren123
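The reported symptom (pairwise_dist at 0 and the loss stuck near the margin) is exactly what the standard triplet loss produces when the embedding collapses to a single point. A minimal NumPy sketch, not the repository's TensorFlow implementation, makes this concrete:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.5):
    """Basic triplet loss: max(d(a, p) - d(a, n) + margin, 0) per example."""
    d_ap = np.linalg.norm(anchor - positive, axis=-1)
    d_an = np.linalg.norm(anchor - negative, axis=-1)
    return np.maximum(d_ap - d_an + margin, 0.0)

# If the network maps every input to the same point, both distances
# are 0 and the loss sits exactly at the margin, with no signal left
# to push the points apart.
point = np.ones((3, 128))  # three identical 128-d embeddings
print(triplet_loss(point, point, point, margin=0.5))  # → [0.5 0.5 0.5]
```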

It looks like training is collapsing, so you may want to try decreasing your learning rate.

Or maybe use bigger batches to stabilize training? You can also monitor the average distance between embeddings to see how the collapse happens (suddenly or gradually).

omoindrot avatar Mar 18 '19 17:03 omoindrot
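To catch the collapse early, the average distance between embeddings in a batch can be logged during training, as suggested above. A small NumPy helper along those lines (a sketch; the function name is mine, not part of the repository):

```python
import numpy as np

def mean_pairwise_distance(embeddings):
    """Average Euclidean distance over all pairs of batch embeddings.

    A value drifting toward 0 during training is the signature of
    embedding collapse: all points map to the same spot and the
    triplet loss saturates at the margin.
    """
    # Squared norms, shape (batch,)
    sq = np.sum(embeddings ** 2, axis=1)
    # Pairwise squared distances via ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2
    d2 = sq[:, None] - 2.0 * embeddings @ embeddings.T + sq[None, :]
    d = np.sqrt(np.maximum(d2, 0.0))  # clamp tiny negatives from rounding
    # Average over off-diagonal entries only (n * (n - 1) ordered pairs)
    n = embeddings.shape[0]
    return d.sum() / (n * (n - 1))
```

Plotted over training steps, this curve shows whether the collapse is sudden or gradual.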

I didn't decrease the learning rate (it is 0.001); if I decrease it, training becomes slow. The batch size is set to 96. Can the triplet loss be used directly for a classification task, or must the network first be pre-trained with a softmax loss?

xiaomingdaren123 avatar Mar 19 '19 14:03 xiaomingdaren123

Maybe pre-training with a softmax loss could help.

omoindrot avatar Mar 28 '19 16:03 omoindrot

> Maybe pre-training with a softmax loss could help.

Hi Olivier,

I considered this approach of first learning a supervised representation from the data and then refining it with the triplet loss. I have not been able to stabilise training an embedding using the triplet loss alone.

Could you elaborate on the utility of the pre-training approach you suggest?

cyrusvahidi avatar Jul 14 '19 17:07 cyrusvahidi

The pretraining approach is just to get a good embedding with a softmax loss, since this loss is very stable and training should converge easily.

Once you have this good-enough representation, the triplet loss may help to separate the class clusters further and get you better performance.

omoindrot avatar Jul 15 '19 10:07 omoindrot

> The pretraining approach is just to get a good embedding with a softmax loss, since this loss is very stable and training should converge easily.
>
> Once you have this good-enough representation, the triplet loss may help to separate the class clusters further and get you better performance.

Is it also important to change the activation of the penultimate layer, the one that produces the embedding, to linear?

cyrusvahidi avatar Jul 15 '19 10:07 cyrusvahidi

Yes, so you have two steps:

  1. Train with the softmax loss: the network computes the embedding, followed by a linear layer with a softmax activation.
  2. Remove the linear layer and train with the triplet loss on the embedding only.

omoindrot avatar Jul 15 '19 11:07 omoindrot
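The two steps above might look like the following tf.keras sketch. The tiny backbone, input size, and class count here are placeholders for illustration, not the thread's actual VGG16 setup:

```python
import tensorflow as tf

EMBED_DIM = 128   # embedding size mentioned in the thread
NUM_CLASSES = 10  # placeholder: your number of labels

# Backbone that computes the embedding; its last Dense layer is
# linear (no activation).
backbone = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),          # placeholder input size
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(EMBED_DIM),           # linear embedding layer
], name="embedding")

# Step 1: add a softmax classification head and pre-train.
classifier = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# classifier.fit(x, y, ...)  # train until the softmax loss converges

# Step 2: drop the softmax head and fine-tune `backbone` alone,
# feeding its 128-d output to the triplet loss.
# embeddings = backbone(x)
```

After step 1 converges, only `backbone` is kept; its output feeds the triplet loss directly in step 2.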

Ok, thanks. I mainly wanted to ask whether the embedding's activation should be linear instead of ReLU, which I have seen mentioned before.

cyrusvahidi avatar Jul 15 '19 11:07 cyrusvahidi

It's a good point. I think no ReLU makes more sense, since you want the embedding to be able to take negative values.

omoindrot avatar Jul 15 '19 19:07 omoindrot
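A quick way to see the point: every coordinate of a ReLU output is nonnegative, so a ReLU-activated embedding can only occupy the nonnegative orthant of the embedding space, while a linear head can use all of it. An illustrative NumPy check:

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=(16, 128))  # pre-activation values for a batch

relu_emb = np.maximum(z, 0.0)   # ReLU head: negative coordinates are zeroed
linear_emb = z                  # linear head: full 128-d space available

assert (relu_emb >= 0).all()    # confined to the nonnegative orthant
assert (linear_emb < 0).any()   # linear embedding keeps negative values
```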