Jaedong Hwang
@zetn @CaoYichao This problem occurs when you use the same flag more than once at the same time, such as `python --train --input_height 28 --input_height 32`. I recommend you check your shell script...
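For illustration, here is a minimal sketch (not the repository's actual parser, assuming it uses `argparse`) of why a repeated flag is easy to miss: `argparse` does not raise an error, the last occurrence silently wins.

```python
import argparse

# Minimal sketch, not the repository's actual parser: with argparse,
# repeating a flag does not raise an error -- the last value silently wins.
parser = argparse.ArgumentParser()
parser.add_argument("--train", action="store_true")
parser.add_argument("--input_height", type=int, default=28)

args = parser.parse_args(["--train", "--input_height", "28", "--input_height", "32"])
print(args.input_height)  # prints 32: the second occurrence overrides the first
```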
@morantumi I am confused by the parameters too. I think the default parameters in `run_grbal.py` are quite different from those in Appendix E. However, some parameters, as I mentioned above, are...
@morantumi Thank you for sharing your finding. However, `adapt_batch_size` is the window size for the previous history; see [here](https://github.com/iclavera/learning_to_adapt/blob/bd7d99ba402521c96631e7d09714128f549db0f1/learning_to_adapt/samplers/sampler.py#L81). Also, `valid_split_ratio` seems to be used for validation.
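To illustrate what I mean by a history window (my own hypothetical sketch with a made-up helper, not the sampler code linked above): `adapt_batch_size` controls how many of the most recent steps are used as adaptation context.

```python
import numpy as np

# Hypothetical sketch, not the actual sampler code linked above:
# keep only the most recent `adapt_batch_size` steps as adaptation context.
def recent_window(observations, adapt_batch_size):
    return observations[-adapt_batch_size:]

history = np.arange(100).reshape(100, 1)   # fake rollout of 100 observations
context = recent_window(history, adapt_batch_size=16)
print(context.shape)                       # (16, 1)
```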
Hi @MinamiKotoka, Thank you for notifying me. I used torch-1.5 and torchvision-0.6. Could you please check it on that version? Is your CUDA version matched with torch and the cudatoolkit (such...
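A quick way to check the installed versions and the torch/CUDA pairing (a generic check, not specific to this repo):

```python
import torch
import torchvision

# Generic environment check: confirm the installed versions and that the
# CUDA toolkit torch was built against is actually usable on this machine.
print("torch:", torch.__version__)              # expected: 1.5.x
print("torchvision:", torchvision.__version__)  # expected: 0.6.x
print("built with CUDA:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
```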
@MinamiKotoka Could you give me a full log please?
Hi @IVparagigm, 1-1) Could you tell me which script you ran? On my side, the length of `frame_indices` is 32. 1-2) Also, I am not the author of the...
@IVparagigm 1) If I remember correctly, that one is from https://github.com/kenshohara/3D-ResNets-PyTorch/blob/master/datasets/videodataset.py and it is for dense sampling; it finally samples 32 frames in the next stages. 2) What is `model` in...
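As a rough illustration of what I mean by sampling 32 frames in a later stage (my own sketch, not the code in `videodataset.py`), a fixed number of frames can be picked evenly from the densely listed indices:

```python
import numpy as np

# Rough sketch, not the code in videodataset.py: pick a fixed number of
# frames (e.g., 32) evenly from a longer, densely sampled list of indices.
def sample_frames(frame_indices, num_frames=32):
    if len(frame_indices) <= num_frames:
        return frame_indices
    positions = np.linspace(0, len(frame_indices) - 1, num_frames).astype(int)
    return [frame_indices[p] for p in positions]

clip = list(range(128))          # dense list of frame indices for one clip
print(len(sample_frames(clip)))  # 32
```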
Hi, I've never encountered that issue. What if you just limit the maximum number of exemplars (boxes) at each step, although I do not think that is the primary...
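A minimal sketch of what I mean by capping the exemplar set (a hypothetical helper, not code from this repository):

```python
# Hypothetical helper, not code from this repository: keep at most
# `max_exemplars` of the most recent exemplars per class at each step.
def cap_exemplars(exemplars_per_class, max_exemplars):
    return {cls: items[-max_exemplars:] for cls, items in exemplars_per_class.items()}

exemplars = {"car": list(range(500)), "person": list(range(120))}
capped = cap_exemplars(exemplars, max_exemplars=100)
print({cls: len(items) for cls, items in capped.items()})  # {'car': 100, 'person': 100}
```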
Hi @HeimingX, I apologize for the difficulty you are facing. However, I am extremely busy these days. I will resolve this as soon as possible.
Hi @HeimingX, I apologize for the delay. I haven't tested it yet, and I do not think I will have time before the upcoming CVPR deadline... However, it seems that the problem...