A. Jabri

10 comments by A. Jabri

Hi @FraLuca, Thanks for the interest. The issue is that different examples in your batch have different resolutions. In my training code, I assume the videos have resolution 256 x...
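
For anyone hitting the same shape mismatch: a minimal sketch of the idea, assuming a frame-level dataset and torchvision (the names here are illustrative, not the repository's actual pipeline).

```
import torch
from torchvision import transforms

# Resize every frame to a fixed 256x256 before converting to a tensor,
# so all examples in a batch share the same spatial resolution.
# Applied per frame; the dataset is assumed to stack T frames into a clip.
frame_transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
])

def collate_clips(clips):
    # clips: list of (T, C, 256, 256) tensors; stacking only works
    # because the transform above enforces a common resolution.
    return torch.stack(clips, dim=0)
```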

Hi @icoz69 and @vadimkantorov, Sorry for missing your messages above. Hopefully this is no longer a blocker, but indeed `train_256` is a folder containing a resized (256x256) version of Kinetics....
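
If it helps, one way to produce such a folder offline is to resize each clip with ffmpeg. This is only a sketch of the preprocessing, not the exact script used for the release; the paths and file extension are assumptions.

```
import subprocess
from pathlib import Path

SRC = Path("kinetics/train")       # original videos (assumed layout)
DST = Path("kinetics/train_256")   # resized copies

for video in SRC.rglob("*.mp4"):
    out = DST / video.relative_to(SRC)
    out.parent.mkdir(parents=True, exist_ok=True)
    # Resize to exactly 256x256; scaling the short side to 256 instead
    # would use a filter like scale=-2:256.
    subprocess.run([
        "ffmpeg", "-y", "-i", str(video),
        "-vf", "scale=256:256",
        str(out),
    ], check=True)
```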

Hi @AndyTang15, Thanks for your interest, and I apologize for the late reply. I haven't re-run the JHMDB and VIP evaluations since refactoring and retraining models for the code release,...

Hi @dmckee5, I have not yet reconciled this issue (the lower [email protected] obtained with this repository). If you are reporting or comparing to our results, at this point, please go ahead...

Hi, thanks for your interest! Regarding training with a single feature map: I think there is more than one way the network can find a shortcut solution. In general,...

Hi @annahadji, the algorithm used at inference time is a basic label propagation algorithm. More than one frame of context is provided for label propagation at each time step, meaning...
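
To make that concrete, here is a minimal sketch of label propagation with a queue of context frames. The shapes, temperature, and top-k below are illustrative assumptions, not the exact evaluation code.

```
import torch
import torch.nn.functional as F

def propagate_labels(feat_t, ctx_feats, ctx_labels, temperature=0.07, topk=10):
    """Propagate labels from several context frames to the current frame.

    feat_t:     (N, C)   features of the current frame's N nodes/pixels
    ctx_feats:  (M, C)   features pooled from all context frames
    ctx_labels: (M, K)   soft labels (K classes) for the context nodes
    """
    # Affinity between current nodes and all context nodes.
    aff = torch.einsum("nc,mc->nm",
                       F.normalize(feat_t, dim=-1),
                       F.normalize(ctx_feats, dim=-1)) / temperature
    # Keep only the top-k context nodes per query, then normalize.
    val, idx = aff.topk(topk, dim=-1)
    weights = F.softmax(val, dim=-1)                    # (N, topk)
    neighbors = ctx_labels[idx]                         # (N, topk, K)
    return (weights.unsqueeze(-1) * neighbors).sum(1)   # (N, K)

# At each time step, the predicted labels are appended to the context queue
# (along with the first frame's ground truth), so more than one frame of
# context is available for the next propagation step.
```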

Hi @PkuRainBow, Thanks for your interest! Are you sure the NaN is a result of this line; i.e. have you tried training with `--dropout 0`? I chose to implement dropout...
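
As a purely hypothetical illustration (not necessarily how dropout is implemented in this repository), dropping entries of a row-normalized affinity matrix and then renormalizing can produce NaN exactly when dropout is enabled, which is why training with `--dropout 0` is a useful isolation test.

```
import torch

def affinity_dropout(A, p):
    # A: (..., N, N) row-stochastic affinity. Zero random entries,
    # then renormalize each row.
    mask = (torch.rand_like(A) > p).float()
    A = A * mask
    # If an entire row is dropped, its sum is 0 and the division below
    # produces NaN; adding a small eps to the denominator (or re-sampling
    # the mask) avoids it. With p=0 the mask is all ones and no NaN occurs.
    return A / A.sum(dim=-1, keepdim=True)
```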

Were you able to address this issue?

Hi @pansanity666 You're right that it's redundant, because `nn.CrossEntropyLoss` computes log-softmax and then NLL. I think I stored logprobs here because I had other losses before. You can just compute...
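
For reference, a small sketch of why the extra step is redundant (variable names are illustrative): `nn.CrossEntropyLoss` on raw scores gives the same value as `log_softmax` followed by `nn.NLLLoss`.

```
import torch
import torch.nn.functional as F

logits = torch.randn(8, 5)             # (batch, classes), arbitrary scores
target = torch.randint(0, 5, (8,))

# What nn.CrossEntropyLoss does internally: log_softmax, then NLL.
loss_a = F.nll_loss(F.log_softmax(logits, dim=-1), target)
loss_b = F.cross_entropy(logits, target)
assert torch.allclose(loss_a, loss_b)   # identical up to numerics
```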

Hi @pansanity666 I believe `nn.CrossEntropyLoss` expects logits or log probabilities, so you will have to take the log. You can do the following instead:

```
logits = torch.log(A+EPS).flatten(0,-2)
loss = ...
```
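
Filling in assumed shapes for context (the `...` above is from the original comment; `B`, `T`, `N`, `EPS`, and the target construction below are illustrative), a self-contained version of that suggestion might look like:

```
import torch
import torch.nn.functional as F

EPS = 1e-20
B, T, N = 2, 4, 49                                   # assumed batch/time/node sizes
A = torch.softmax(torch.randn(B, T, N, N), dim=-1)   # row-stochastic affinity
target = torch.arange(N).repeat(B * T)               # e.g., cycle-consistency targets

# Since A already holds probabilities, take the log before the loss.
logits = torch.log(A + EPS).flatten(0, -2)           # (B*T*N, N)
loss = F.cross_entropy(logits, target)
```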