
The easiest way to use deep metric learning in your application. Modular, flexible, and extensible. Written in PyTorch.

Results 97 pytorch-metric-learning issues

In version 1.2.0, Centroid Triplet Loss is unstable. When the batch size is small, the error below always occurs; with a larger batch size it happens less often. `File "/root/miniconda3/lib/python3.7/site-packages/pytorch_metric_learning/losses/base_metric_loss_function.py",...

bug

Hi, in `TripletMarginLoss` you have the default margin set to **0.05**:
```python
class TripletMarginLoss(BaseMetricLossFunction):
    """
    Args:
        margin: The desired difference between the anchor-positive distance
            and the anchor-negative distance.
        swap: Use the...
```

enhancement
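For context on why the default matters, here is a minimal numpy sketch of how the margin enters a triplet loss. The function name and setup are illustrative, not the library's internals:

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=0.05):
    # Hinge on (anchor-positive distance) - (anchor-negative distance) + margin.
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(d_ap - d_an + margin, 0.0)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # close to the anchor
n = np.array([1.0, 0.0])   # far from the anchor

# Triplet already satisfied by more than the margin -> zero loss.
print(triplet_margin_loss(a, p, n, margin=0.05))  # 0.0
# A larger margin makes the same triplet "active" again.
print(triplet_margin_loss(a, p, n, margin=1.0))
```

A small default margin means many triplets contribute no gradient once they are only slightly satisfied, which is presumably what the issue is asking about.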

Hi everyone! What is the purpose of the **embeddings_come_from_same_source** argument of the [get_accuracy](https://kevinmusgrave.github.io/pytorch-metric-learning/accuracy_calculation/#getting-accuracy) function? Let's consider a case in which query == index and I set this argument to **True**. Will it...

enhancement
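The idea behind the flag: when the query and index sets are the same embeddings, each query's nearest neighbor is trivially itself, so that self-match has to be excluded before computing accuracy. A numpy sketch (`precision_at_1` is a hypothetical helper, not the library's `AccuracyCalculator`):

```python
import numpy as np

def precision_at_1(query, reference, query_labels, reference_labels,
                   same_source=False):
    # Pairwise squared Euclidean distances, query x reference.
    d = ((query[:, None, :] - reference[None, :, :]) ** 2).sum(-1)
    if same_source:
        # Each query would trivially match itself; mask the diagonal.
        np.fill_diagonal(d, np.inf)
    nn = d.argmin(axis=1)
    return (reference_labels[nn] == query_labels).mean()

emb = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
labels = np.array([0, 0, 1])

# Without masking, every point "finds" itself: perfect but meaningless.
print(precision_at_1(emb, emb, labels, labels, same_source=False))  # 1.0
# With masking, point 2's nearest *other* point has a different label.
print(precision_at_1(emb, emb, labels, labels, same_source=True))   # ~0.667
```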

Any plan to include an example of this loss function? Thanks

enhancement

When ```efficient=True```: - All embeddings should be added to each rank's ```CrossBatchMemory.embedding_memory``` - Only the current rank's embeddings should be passed as the first argument to ```CrossBatchMemory.loss.forward()```

enhancement
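A pure-Python sketch of the proposed behavior, simulating ranks in-process (no real `torch.distributed`; all names here are illustrative, not the library's API):

```python
import numpy as np

def simulate_efficient_step(per_rank_embeddings, memory):
    # All-gather: every rank appends *all* ranks' embeddings to its
    # embedding memory...
    gathered = np.concatenate(per_rank_embeddings, axis=0)
    memory = np.concatenate([memory, gathered], axis=0) if memory.size else gathered
    # ...but each rank passes only its *own* embeddings as the first
    # argument to the inner loss, with the shared memory as reference.
    loss_inputs = [(local, memory) for local in per_rank_embeddings]
    return memory, loss_inputs

rank_embs = [np.random.randn(4, 8) for _ in range(2)]   # 2 ranks, batch 4 each
memory, loss_inputs = simulate_efficient_step(rank_embs, np.empty((0, 8)))
print(memory.shape)              # (8, 8): both ranks' embeddings stored
print(loss_inputs[0][0].shape)   # (4, 8): only rank 0's batch goes to the loss
```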

This would require changing the order of arguments to ```query, query_labels, reference, reference_labels```

enhancement

The regularizer mixins are overly complicated: https://github.com/KevinMusgrave/pytorch-metric-learning/blob/master/src/pytorch_metric_learning/losses/mixins.py Also get rid of or simplify [```all_regularization_loss_names```](https://github.com/KevinMusgrave/pytorch-metric-learning/blob/0caae7af403c637732856d6305b4f9111c187fc3/src/pytorch_metric_learning/losses/base_metric_loss_function.py#L55-L64)

enhancement

This seemed like a good idea when used with record-keeper, but it complicates things if you're not using record-keeper

enhancement

It can be useful to compute the average loss over the training set or validation set, and use that (rather than some accuracy metric) to select the best checkpoint. See...

enhancement
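A minimal sketch of that selection rule, with a hypothetical helper (not part of the library): track per-batch validation losses per checkpoint and keep the one with the lowest mean.

```python
def select_best_checkpoint(checkpoint_losses):
    # checkpoint_losses: {checkpoint_name: list of per-batch validation losses}.
    # Pick the checkpoint with the lowest mean validation loss, rather
    # than relying on an accuracy metric.
    means = {name: sum(losses) / len(losses)
             for name, losses in checkpoint_losses.items()}
    return min(means, key=means.get)

history = {
    "epoch_1.pt": [0.90, 0.85, 0.88],
    "epoch_2.pt": [0.40, 0.45, 0.42],
    "epoch_3.pt": [0.55, 0.50, 0.52],
}
print(select_best_checkpoint(history))  # epoch_2.pt
```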

As mentioned in this pull request: https://github.com/KevinMusgrave/pytorch-metric-learning/pull/424#issuecomment-1042868787

enhancement