Volumetric-Aggregation-Transformer

Training time and logs

Jarvis73 opened this issue 2 years ago · 1 comment

Hi, Seokju-Cho

Great work! This is a quite impressive result for few-shot segmentation.

I am trying to re-train VAT on PASCAL, but it looks like it will take about 5 days on 4 Tesla V100 GPUs for 300 epochs.

  • I wonder how long training took in your experiments (on a 3090 or other GPUs)?
  • Could you also share a copy of the training log from that experiment for reference?
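For context, here is the rough timing check I am running locally to project the 300-epoch figure (a minimal sketch; `train_one_epoch` below is a placeholder, not the actual VAT training loop):

```python
import time

def train_one_epoch():
    """Placeholder for one full pass of the actual VAT training loop."""
    time.sleep(1)  # stand-in for the real per-epoch work

num_epochs = 300      # target epoch count
sample_epochs = 3     # time only a few epochs, then extrapolate
epoch_times = []

for _ in range(sample_epochs):
    start = time.time()
    train_one_epoch()
    epoch_times.append(time.time() - start)

avg = sum(epoch_times) / len(epoch_times)
print(f"~{avg:.1f} s/epoch -> ~{avg * num_epochs / 3600:.1f} h projected for {num_epochs} epochs")
```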

Thanks a lot!

Jarvis73 · Mar 14, 2022

Hello Jarvis,

I am also trying to train on the PASCAL dataset. I am getting an error and the program never enters the for loop. Here is the code:

```python
for idx, batch in enumerate(dataloader):
    # 1. Hypercorrelation Squeeze Networks forward pass
    print("idx and batch are ========= ", idx, "\t", batch)
    batch = utils.to_cuda(batch)
    logit_mask = model(batch['query_img'],
                       batch['support_imgs'].squeeze(1),
                       batch['support_masks'].squeeze(1))
    pred_mask = logit_mask.argmax(dim=1)
```

Here, execution never enters the for loop.
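The first thing I am checking on my side is whether the dataloader simply has zero batches, which would make the loop body never run (a minimal sketch using generic PyTorch; `dataset`/`dataloader` here are placeholders for the PASCAL objects built by the repo's code):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Placeholder dataset standing in for the PASCAL-5i dataset built by the repo.
# An empty dataset reproduces the symptom: the for-loop body is never entered.
dataset = TensorDataset(torch.zeros(0, 3))
dataloader = DataLoader(dataset, batch_size=4)

print("dataset length:", len(dataset))        # 0 -> no samples found
print("number of batches:", len(dataloader))  # 0 -> iterating does nothing

if len(dataset) == 0:
    # In the real code this usually means the dataset root path or the
    # fold/split arguments are wrong, so no image/mask pairs are picked up.
    print("Dataset is empty: check the dataset path and fold/split arguments.")
```

If the lengths are non-zero but iteration still stalls, running with `num_workers=0` can make exceptions raised inside the dataset visible.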

How did you solve it? Any solution would be appreciated.

Anant4830 · Jul 15, 2023