kinetics_i3d_pytorch

Inflated I3D network with Inception backbone, weights transferred from TensorFlow

kinetics_i3d_pytorch issues (12 results)

Hi @hassony2, first of all thanks for the repo. I wanted to use the pretrained Kinetics RGB model to extract features from a dataset I created. Since my application should...
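For reference, a minimal feature-extraction sketch, assuming the I3D class lives in src/i3dpt.py and the converted RGB weights sit at model/model_rgb.pth; the intermediate layer name is an assumption, so inspect model.named_modules() to pick the block you want in your copy:

```python
import torch
from src.i3dpt import I3D  # assumed module path in this repo

model = I3D(num_classes=400, modality='rgb')              # assumed constructor signature
model.load_state_dict(torch.load('model/model_rgb.pth'))  # assumed weight location
model.eval()

features = {}

def save_output(name):
    def hook(module, inp, out):
        features[name] = out.detach()
    return hook

# Hook an intermediate block to grab features instead of the final logits.
# 'mixed_5c' is an assumption; list model.named_modules() to find the right name.
for name, module in model.named_modules():
    if name.endswith('mixed_5c'):
        module.register_forward_hook(save_output(name))

# Input: (batch, 3, num_frames, 224, 224), pixel values rescaled to [-1, 1].
clip = torch.randn(1, 3, 64, 224, 224)
with torch.no_grad():
    model(clip)

print({name: feat.shape for name, feat in features.items()})
```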

Great work @hassony2. It would be really helpful if the model weights were moved to some external storage and a download script were provided, so that I can...

Fix the "size mismatch" bug; description: https://github.com/hassony2/kinetics_i3d_pytorch/issues/20

The MaxPool3dTFPadding module with kernel_size=(1, 3, 3) and stride=(1, 2, 2) can lead to asymmetric padding. This influences the output feature map, as values toward the bottom right are usually higher than in other parts...
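To illustrate the asymmetry, a small sketch of TensorFlow's "SAME" padding arithmetic for one spatial dimension; with kernel 3 and stride 2 on an even-sized input, the single padding pixel always lands on the bottom/right:

```python
# TensorFlow "SAME" padding for one dimension: total padding is split with the
# smaller half before (top/left) and the remainder after (bottom/right).
import math

def same_pad_1d(size, kernel, stride):
    out = math.ceil(size / stride)
    pad_total = max((out - 1) * stride + kernel - size, 0)
    pad_before = pad_total // 2           # smaller half goes on the top/left
    pad_after = pad_total - pad_before    # extra pixel lands on the bottom/right
    return pad_before, pad_after

for size in (224, 112, 56):
    print(size, same_pad_1d(size, kernel=3, stride=2))  # -> (0, 1) for even sizes
```

Reproducing this exactly in PyTorch requires padding with uneven amounts via torch.nn.functional.pad before the pooling layer, since the padding argument of nn.MaxPool3d is symmetric.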

It seems the Kinetics-600 retrained model here ([kinetics-i3d](https://github.com/deepmind/kinetics-i3d/tree/master/data/checkpoints/rgb_scratch_kin600)) has the same structure as the Kinetics-400 one, but I hit an error. NotFoundError: Key RGB/inception_i3d/Conv3d_1a_7x7/batch_norm/beta not found in checkpoint.
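One way to diagnose the mismatch (a sketch, assuming a TensorFlow-style checkpoint downloaded locally; the path below is only an example) is to list the variable names actually stored in the Kinetics-600 checkpoint and compare them against the Kinetics-400 names the conversion script expects:

```python
import tensorflow as tf

ckpt = 'data/checkpoints/rgb_scratch_kin600/model.ckpt'  # example local path
# Print every stored variable whose name touches the first convolution block.
for name, shape in tf.train.list_variables(ckpt):
    if 'Conv3d_1a_7x7' in name:
        print(name, shape)
```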

Hi Yana! Do you have some details on your pre-processing for the optical flow? I've tried using your model with my own pre-processing, which worked with the original TensorFlow...
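For context, the original DeepMind checkpoints are described as using TV-L1 optical flow with values truncated to [-20, 20] and rescaled to [-1, 1]; a sketch of that recipe, assuming the OpenCV contrib TV-L1 implementation, might look like this:

```python
import cv2  # requires opencv-contrib-python for the TV-L1 implementation
import numpy as np

def preprocess_flow(prev_gray, next_gray, bound=20.0):
    """Compute TV-L1 flow between two grayscale frames, truncate to [-bound, bound],
    then rescale to [-1, 1]."""
    tvl1 = cv2.optflow.DualTVL1OpticalFlow_create()
    flow = tvl1.calc(prev_gray, next_gray, None)  # (H, W, 2) float32
    return np.clip(flow, -bound, bound) / bound
```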

Hi, I want to ask how I can use this code to extract features from my own video datasets. The input to your code is a .npy file. However, how can I...
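A hedged sketch of one way to turn a video into a .npy array for the demo: decode frames with OpenCV, resize, rescale RGB to [-1, 1], and save as (1, 3, num_frames, 224, 224); the exact layout the loading code expects (channels-first vs. channels-last) should be checked against the demo script:

```python
import cv2
import numpy as np

def video_to_npy(video_path, out_path, size=224):
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Simplified resize; the original pipeline resizes the short side then center-crops.
        frame = cv2.cvtColor(cv2.resize(frame, (size, size)), cv2.COLOR_BGR2RGB)
        frames.append(frame.astype(np.float32) / 127.5 - 1.0)  # rescale to [-1, 1]
    cap.release()
    clip = np.stack(frames).transpose(3, 0, 1, 2)[None]  # (1, 3, T, H, W)
    np.save(out_path, clip)
    return clip.shape
```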

Hi, first of all thank you for your work. I want to transfer the pretrained TensorFlow parameters to PyTorch, but when I run "python i3d_tf_to_pt.py --rgb", I get the errors as...

Hi, thanks for your wonderful code. I have a question regarding shorter videos. Since the I3D model seems to use 64 frames as input, how...
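One common workaround (an assumption on my part, not necessarily this repo's recommendation) is to loop a short clip along the temporal axis until it reaches the 64 frames the pretrained model was trained with:

```python
import numpy as np

def pad_to_min_frames(clip, min_frames=64):
    """clip: (1, 3, T, H, W) array; returns a clip with at least min_frames frames
    by tiling the clip in time and truncating."""
    t = clip.shape[2]
    if t >= min_frames:
        return clip
    repeats = int(np.ceil(min_frames / t))
    clip = np.concatenate([clip] * repeats, axis=2)
    return clip[:, :, :min_frames]
```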