Volumetric-Aggregation-Transformer
Training time and logs
Hi, Seokju-Cho
Good work! This is quite impressive work for few-shot segmentation.
I am trying to re-train VAT on PASCAL, but it looks like it will take about 5 days on 4 Tesla V100 GPUs for 300 epochs.
- How long did training take in your experiments (on a 3090 or another GPU)?
- Could you share a copy of the training log from that experiment for reference?
Thanks a lot!
Hello Jarvis,
I am also trying to train on the PASCAL dataset, but I am hitting a problem: the body of the for loop is never executed. Here is the code:

```python
for idx, batch in enumerate(dataloader):
    # 1. Hypercorrelation Squeeze Networks forward pass
    print("idx and batch are ========= ", idx, "\t", batch)
    batch = utils.to_cuda(batch)
    logit_mask = model(batch['query_img'],
                       batch['support_imgs'].squeeze(1),
                       batch['support_masks'].squeeze(1))
    pred_mask = logit_mask.argmax(dim=1)
```
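When a `for` loop over a DataLoader never enters its body, the usual cause is that the underlying dataset is empty (for example, a wrong data root so no samples are found). A minimal sketch of how to check this; `EmptyDataset` is a hypothetical stand-in for a dataset that found no samples, not part of the VAT code:

```python
from torch.utils.data import DataLoader, Dataset

class EmptyDataset(Dataset):
    """Stand-in for a dataset that found no samples (e.g. wrong data path)."""
    def __len__(self):
        return 0  # an empty dataset yields zero batches

    def __getitem__(self, idx):
        raise IndexError(idx)

dataloader = DataLoader(EmptyDataset(), batch_size=4)

# If the dataset length is 0, the training loop body never runs:
print(len(dataloader.dataset))  # 0 -> check the dataset root / split paths
for idx, batch in enumerate(dataloader):
    print("entered loop")  # never printed when the dataset is empty
```

So before debugging the model forward pass, it is worth printing `len(dataloader.dataset)` with the actual PASCAL paths configured, to confirm the dataset actually discovered images and masks.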
How did you solve it? Any solution is appreciated.