
Code for GLDV1 experiments

Open elias-ramzi opened this issue 2 years ago • 4 comments

Hi,

Could you share the changes needed to run the GLDV1 experiments of your paper (Table 3) with this repo?

  • hyper-parameters;
  • image shape;
  • inference protocol;
  • etc.

Thanks!

elias-ramzi avatar Feb 03 '23 15:02 elias-ramzi

Hi Elias,

for these experiments we relied on the following package: https://github.com/filipradenovic/cnnimageretrieval-pytorch/

The piece of code performing training on GLD is not publicly available at the moment. If there is interest, we plan to update that repository and integrate it, but it will take some time.

gtolias avatar Feb 08 '23 21:02 gtolias

Hi,

Thanks for the response.

Would it be possible to share the training details? Are they the same as for SfM-120k in the repo you linked?

e.g.:

--optimizer 'adam' --lr 5e-7 --image-size 362

Thanks!

elias-ramzi avatar Feb 09 '23 09:02 elias-ramzi

--optimizer 'adam' --lr 0.0001 --image-size 1024

The batch size is 4096, with 4 images per class in each batch; classes with fewer than 4 images are skipped. Training runs for 500 batches.
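This is not the actual training code, but a rough sketch of how such a class-balanced batch sampler could be implemented (the class name and the `labels` argument are illustrative, not from our codebase):

```python
import random
from collections import defaultdict
from typing import List

import torch


class ClassBalancedBatchSampler(torch.utils.data.Sampler):
    """Yields batches of `batch_size` indices, sampling `samples_per_class`
    images per class and skipping classes with fewer images than that."""

    def __init__(self, labels: List[int], samples_per_class: int = 4,
                 batch_size: int = 4096, num_batches: int = 500):
        self.samples_per_class = samples_per_class
        self.classes_per_batch = batch_size // samples_per_class
        self.num_batches = num_batches

        # Group image indices by class and drop classes that are too small.
        per_class = defaultdict(list)
        for idx, lbl in enumerate(labels):
            per_class[lbl].append(idx)
        self.per_class = {c: idxs for c, idxs in per_class.items()
                          if len(idxs) >= samples_per_class}

    def __iter__(self):
        classes = list(self.per_class.keys())
        for _ in range(self.num_batches):
            batch = []
            # Pick distinct classes, then a few images from each of them.
            for c in random.sample(classes, self.classes_per_batch):
                batch.extend(random.sample(self.per_class[c], self.samples_per_class))
            yield batch

    def __len__(self):
        return self.num_batches
```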

gtolias avatar Feb 13 '23 07:02 gtolias

Sorry for the late reply.

Thank you for the details.

I have one last question about the FC layer added at the end of the ResNet-101: could you share the script used for the whitening initialization?

I have written the following code to compute the whitening; it is a transcription of sklearn's PCA with whitening:

from typing import Optional, Tuple

import torch
import torch.nn as nn
from torch import Tensor


def pca(
    features: Tensor,
    n_principal_components: Optional[int] = None,
    on_cpu: bool = False,
) -> Tuple[Tensor, Tensor, Tensor]:
    if n_principal_components is None:
        n_principal_components = features.size(1)

    assert n_principal_components <= features.size(1)

    if on_cpu:
        features = features.cpu()

    n_samples = features.size(0)
    mean = features.mean(dim=0)
    X = features - mean

    # torch.linalg.svd returns Vh, whose rows are the principal directions.
    U, S, Vh = torch.linalg.svd(X.float(), full_matrices=False)

    # Sign flip for a deterministic output (same convention as sklearn's svd_flip).
    max_abs_cols = U.abs().argmax(dim=0)
    signs = torch.sign(U[max_abs_cols, torch.arange(U.size(1), device=U.device)])
    U *= signs
    Vh *= signs.unsqueeze(1)

    components_ = Vh[:n_principal_components]
    explained_variance_ = ((S ** 2) / (n_samples - 1))[:n_principal_components]

    return mean, components_, explained_variance_


def create_pca_layer(
    mean: Tensor,
    components_: Tensor,
    explained_variance_: Tensor,
    whiten: bool = True,
) -> nn.Module:
    # Linear layer computing (x - mean) @ components_.T, i.e. y = x @ W.T + b
    # with W = components_ and b = -mean @ components_.T.
    weight = components_.clone()
    bias = -mean @ components_.T

    if whiten:
        # Scale each component by the standard deviation of its projection.
        exvar = torch.sqrt(explained_variance_)
        weight /= exvar.unsqueeze(1)
        bias /= exvar

    pca_layer = nn.Linear(components_.size(1), components_.size(0))
    pca_layer.weight.data = weight.cpu()
    pca_layer.bias.data = bias.cpu()

    return pca_layer
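
For reference, this is roughly how I intend to use it. The random stand-in descriptors and the average-pooled ResNet-101 trunk below are placeholders on my side (your pipeline uses GeM pooling and real training descriptors), just to show the mechanics:

```python
import torch
import torch.nn as nn
import torchvision

# Stand-in descriptors; in practice, pooled ResNet-101 features extracted
# from the training images.
descriptors = torch.randn(4096, 2048)

mean, components_, explained_variance_ = pca(descriptors, n_principal_components=2048)
whitening_fc = create_pca_layer(mean, components_, explained_variance_, whiten=True)

# Append the whitening FC after a pooled ResNet-101 trunk (average pooling
# here only for the sketch).
trunk = torchvision.models.resnet101(weights=None)
backbone = nn.Sequential(*list(trunk.children())[:-1], nn.Flatten())
model = nn.Sequential(backbone, whitening_fc)
```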

Thanks!

elias-ramzi avatar Mar 03 '23 10:03 elias-ramzi