Kevin Musgrave

Results 141 comments of Kevin Musgrave

Yes, that's correct: when `True`, it will skip the first nearest neighbor. Maybe `skip_first_neighbor` would be clearer 😄

Starting in [v1.5.2](https://github.com/KevinMusgrave/pytorch-metric-learning/releases/tag/v1.5.2), it does something more complicated than just skipping the first neighbor, so maybe a better argument name would be `query_in_ref`.

I'm guessing you have an embedding and a label for each pixel in the image. You can pass all of these embeddings to NTXentLoss:

```python
from pytorch_metric_learning.losses import NTXentLoss
loss_fn...
```

You need to reshape the embeddings to have shape (N, D), and labels to have shape (N,). Something like this might work, though I haven't confirmed that the reshaping of...
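A minimal sketch of that reshaping, assuming the embeddings come out of the network as (B, D, H, W) and the labels as (B, H, W) (the sizes below are placeholders):

```python
import torch

B, D, H, W = 2, 8, 4, 4
embeddings = torch.randn(B, D, H, W)        # per-pixel embeddings
labels = torch.randint(0, 3, (B, H, W))     # per-pixel labels

# move the channel dim last, then flatten each pixel into its own row
embeddings = embeddings.permute(0, 2, 3, 1).reshape(-1, D)  # (B*H*W, D)
labels = labels.reshape(-1)                                 # (B*H*W,)
print(embeddings.shape, labels.shape)
```

The `permute` before `reshape` matters: flattening (B, D, H, W) directly would interleave channels and pixels instead of keeping one row per pixel.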

Hmm I see, because the batch size is huge (589000), that function isn't able to create the necessary matrices. I'll have to think about how to solve this large-batch problem....
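For a sense of scale, a dense N x N similarity matrix at float32 (4 bytes per entry) for that batch size would need on the order of a terabyte:

```python
# back-of-the-envelope memory for an N x N float32 matrix
N = 589_000
bytes_needed = N * N * 4
print(f"{bytes_needed / 1e12:.2f} TB")  # roughly 1.39 TB
```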

Sorry, I haven't gotten around to this yet.

I've tried to make all the losses and miners compatible with mixed precision training, so you should be able to follow the [PyTorch example](https://pytorch.org/docs/stable/notes/amp_examples.html#typical-mixed-precision-training). You don't need to do anything...
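Following that PyTorch example, a typical autocast + GradScaler step might look like this (the model, data, and loss here are placeholders, not anything from the library; the `enabled=` flags let the same code fall back cleanly on CPU):

```python
import torch

model = torch.nn.Linear(8, 4)           # placeholder model
loss_fn = torch.nn.CrossEntropyLoss()   # stand-in for a metric-learning loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
use_cuda = torch.cuda.is_available()
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)

data = torch.randn(16, 8)
labels = torch.randint(0, 4, (16,))

optimizer.zero_grad()
# forward pass runs in mixed precision inside autocast
with torch.cuda.amp.autocast(enabled=use_cuda):
    loss = loss_fn(model(data), labels)
# scale the loss, backprop, then step/update the scaler
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
print(loss.item())
```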

Maybe this will work:

```python
from pytorch_metric_learning.trainers import MetricLossOnly
import torch

class WithAutocastMetricLossOnly(MetricLossOnly):
    def calculate_loss(self, curr_batch):
        data, labels = curr_batch
        with torch.cuda.amp.autocast():
            embeddings = self.compute_embeddings(data)
            indices_tuple = self.maybe_mine_embeddings(embeddings, labels)
            self.losses["metric_loss"]...
```