I3D_Finetune
fc_out, relu, top_k_op
https://github.com/USTC-Video-Understanding/I3D_Finetune/blob/master/Demo_Transfer_rgb.py#L139
Hi,
I think the last fc layer should not use an activation unit.
If the outputs of the fc layer are all negative, they become all zero after the ReLU unit.
In that case, top_k_op = tf.nn.in_top_k(fc_out, label_holder, 1) will always return True.
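A minimal sketch of the fix, assuming the final fc layer is built with tf.layers.dense (the placeholder names and feature size below are illustrative, not taken from the repo): the last layer should stay linear by passing activation=None, so that logits keep their sign before in_top_k.

```python
import tensorflow as tf

# Illustrative placeholders; actual names/shapes come from the repo's graph.
feature_holder = tf.placeholder(tf.float32, [None, 1024])
label_holder = tf.placeholder(tf.int64, [None])
num_classes = 101

# Problematic: a ReLU on the final fc layer clamps all negative logits to zero.
# fc_out = tf.layers.dense(feature_holder, num_classes, activation=tf.nn.relu)

# Fix: leave the last fc layer linear so logits can be negative.
fc_out = tf.layers.dense(feature_holder, num_classes, activation=None)

# With all-zero logits every class ties at the top, so in_top_k would always
# return True; with raw logits the check is meaningful.
top_k_op = tf.nn.in_top_k(fc_out, label_holder, 1)
```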
I think you are right. When I trained with this code, the results were sometimes all the same. You can just remove the activation parameter.
Hi @WuJunhui ,
Thanks for your helpful advice, we have fixed this issue in the latest version of this repo. Please run git pull to download the newest code.