
Request for help

**Open** — torki-hossein opened this issue 4 years ago · 0 comments

Hi, I would like to write a loss function based on hard negatives only. For example, assume a deep neural network outputs an N×K feature matrix (N: batch size, K: feature-vector dimension), and we have a non-trainable function that extracts one hard negative per class purely from the data. After extraction we have a C×K feature matrix (C: number of classes). Assume a contrastive loss that uses only these hard negatives.

1. How does PyTorch compute gradients given the "hiding" effect of hard negative mining, i.e. only the selected vectors enter the loss?
2. How is each vector's share of the gradient within the batch determined?
3. Can I extract the hard negatives outside of the network (before, or inside, the loss function)?
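For what it's worth, here is a minimal sketch of the kind of setup being described, assuming cosine similarity and a margin; the function name, margin value, and mining-per-anchor (rather than per-class) choice are illustrative, not from any specific library. The key point for question 1 is that mining by `max`/`argmax` is a selection, not a differentiable function of the scores: autograd simply routes gradients to the selected entries, so no gradient is needed for the mining step itself. For question 3, mining outside the graph (e.g. on `.detach()`-ed features) returns indices you can use to gather from the live tensor, which keeps gradients intact; gathering from the detached tensor itself would cut them.

```python
import torch
import torch.nn.functional as F

def hard_negative_contrastive_loss(feats, labels, margin=0.2):
    """Contrastive loss using only the hardest negative per anchor.

    feats:  (N, K) raw feature vectors from the network
    labels: (N,)   integer class ids
    """
    feats = F.normalize(feats, dim=1)              # cosine geometry
    sim = feats @ feats.t()                        # (N, N) similarities
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    # Exclude same-class pairs (including self) from the negative pool.
    neg_sim = sim.masked_fill(same, float('-inf'))
    # Hard negative = most similar sample from a different class.
    # max() is a selection: in backward, gradient flows only to the
    # selected (anchor, negative) entries; all unselected similarities
    # receive zero gradient. This is the "hiding" effect in practice.
    hardest, idx = neg_sim.max(dim=1)              # (N,), (N,)
    # Push hard-negative similarity below the margin.
    return F.relu(hardest - margin).mean(), idx
```

Because the loss is a mean over anchors, each anchor contributes 1/N of the gradient, and only the anchors whose hinge term is active (similarity above the margin) contribute anything at all; that is how each vector's "share" in the batch works out.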

torki-hossein — May 15 '21 07:05