Kevin Musgrave
### Describe the Bug
In version 10.3.12, `onMouseDown` has no effect when used inside a custom node. It works in version 10.2.3 though.

### Your Example Website or App
_No...
It's natural to try:

```python
import torch
from pytorch_metric_learning.utils import loss_and_miner_utils as lmu

labels = torch.arange(32)
x = lmu.get_all_pairs_indices(labels, ref_labels=labels)
```

This results in 0 positive pairs because of the `labels is ref_labels` identity check. This isn't documented...
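To illustrate why the identity check produces 0 positive pairs, here is a plain-Python sketch of the pairing logic (not the library's actual implementation): when `ref_labels` is the very same object as `labels`, self-pairs `(i, i)` are excluded, and with all-unique labels those were the only positives.

```python
def get_all_pairs_indices_sketch(labels, ref_labels=None):
    """Toy reimplementation of the pairing logic, for illustration only."""
    # Mimics the `labels is ref_labels` identity check: if the same object
    # is passed (or ref_labels is omitted), self-pairs are dropped.
    same_object = ref_labels is None or ref_labels is labels
    if ref_labels is None:
        ref_labels = labels
    pos, neg = [], []
    for i, a in enumerate(labels):
        for j, b in enumerate(ref_labels):
            if a == b:
                if not (same_object and i == j):
                    pos.append((i, j))
            else:
                neg.append((i, j))
    return pos, neg


labels = list(range(32))  # all labels unique

# Same object passed as ref_labels: every would-be positive is a self-pair,
# so the identity check removes all of them.
pos_same, _ = get_all_pairs_indices_sketch(labels, labels)

# A copy defeats the identity check, so the 32 diagonal pairs survive.
pos_copy, _ = get_all_pairs_indices_sketch(labels, list(labels))
```

Passing a copy (or tensors with equal values but different identity) is what makes the behavior surprising: equal contents, different result.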
When `efficient=True`:
- All embeddings should be added to each rank's `CrossBatchMemory.embedding_memory`
- Only the current rank's embeddings should be passed as the first argument to `CrossBatchMemory.loss.forward()`

This would require changing the order of arguments to `query, query_labels, reference, reference_labels`.
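The proposed behavior can be sketched conceptually without `torch.distributed`: ranks are simulated as lists, and `step` is a hypothetical stand-in, not the library's API. The point is the asymmetry: the memory receives the all-gathered embeddings, while only the local rank's embeddings act as the query.

```python
def step(per_rank_embeddings, rank, memory):
    """Simulate one efficient=True step on one rank.

    per_rank_embeddings: list of embedding lists, one per rank
    rank: index of the current rank
    memory: this rank's embedding memory (mutated in place)
    """
    # All ranks' embeddings are (all-)gathered into every rank's memory.
    gathered = [e for r in per_rank_embeddings for e in r]
    memory.extend(gathered)
    # Only the current rank's embeddings become the query
    # (the first argument to loss.forward); the memory is the reference.
    query = per_rank_embeddings[rank]
    return query, memory
```

This is why the argument order `query, query_labels, reference, reference_labels` matters: the query and reference are no longer the same set.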
The regularizer mixins are overly complicated: https://github.com/KevinMusgrave/pytorch-metric-learning/blob/master/src/pytorch_metric_learning/losses/mixins.py Also get rid of or simplify [`all_regularization_loss_names`](https://github.com/KevinMusgrave/pytorch-metric-learning/blob/0caae7af403c637732856d6305b4f9111c187fc3/src/pytorch_metric_learning/losses/base_metric_loss_function.py#L55-L64)
This seemed like a good idea when used with record-keeper, but it complicates things when record-keeper isn't being used.
It can be useful to compute the average loss over the training set or validation set, and use that (rather than some accuracy metric) to select the best checkpoint. See...
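A minimal sketch of this checkpoint-selection idea, with hypothetical names (`epoch_val_losses` maps epochs to per-batch validation losses; nothing here is pytorch-metric-learning API):

```python
def select_best_checkpoint(epoch_val_losses):
    """Pick the epoch with the lowest average validation loss."""
    best_epoch, best_avg = None, float("inf")
    for epoch, losses in epoch_val_losses.items():
        avg = sum(losses) / len(losses)  # average loss over the whole set
        if avg < best_avg:
            best_epoch, best_avg = epoch, avg
    return best_epoch, best_avg
```

In a training loop one would record each epoch's validation losses and keep the checkpoint from `best_epoch`, rather than the one with the best accuracy metric.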
As mentioned in this pull request: https://github.com/KevinMusgrave/pytorch-metric-learning/pull/424#issuecomment-1042868787