kinetics-i3d
train on my own dataset
Hello! I trained the I3D model on my own dataset: 2 classes with about 50 videos each. The two classes are similar, like open the door / close the door. After 40 epochs the training accuracy is over 90%, but the validation accuracy is only 50%, so the model didn't learn anything useful! What can I do?
Hi,
are you using a pre-trained model and just fine-tuning it on this data? That has a higher chance of working. Even if you are, the most likely way to make progress is to experiment with data augmentation, since you do not have a lot of data. During training, try left-right flipping, randomly reversing the videos temporally while also swapping the label from "opening" to "closing", randomly slowing down or speeding up the video, etc.
Best,
Joao
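The augmentations suggested above can be sketched as a single function over a clip. This is a minimal NumPy sketch, assuming clips are `(T, H, W, C)` arrays and labels are `0` = "open", `1` = "close"; `augment_clip` is a hypothetical helper, not part of the kinetics-i3d code.

```python
import numpy as np

def augment_clip(frames, label, rng):
    """Randomly augment one video clip.

    frames: array of shape (T, H, W, C); label: 0 = "open", 1 = "close".
    """
    # Left-right flip (the open/close label is unchanged).
    if rng.random() < 0.5:
        frames = frames[:, :, ::-1, :]

    # Temporal reversal: playing "open the door" backwards looks like
    # "close the door", so the label must be swapped as well.
    if rng.random() < 0.5:
        frames = frames[::-1]
        label = 1 - label

    # Random speed change: resample frame indices to slow down
    # (factor < 1) or speed up (factor > 1), keeping the length T.
    factor = rng.uniform(0.5, 1.5)
    t = frames.shape[0]
    idx = np.clip((np.arange(t) * factor).astype(int), 0, t - 1)
    frames = frames[idx]

    return frames.copy(), label
```

Applied on the fly during training, each epoch then sees a different variant of every clip, which helps against overfitting on ~100 videos.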
@joaoluiscarreira Thanks for your reply! I used the rgb_scratch pre-trained model, and I have tried left-right flipping and randomly reversing the videos temporally, but it seemed useless. I will try randomly slowing down or speeding up the videos. What I most want to know is whether this model can work on my dataset. Would it be better if I added data?
Adding data always helps, but indeed, such fine-grained temporal differences may benefit from a model pre-trained on a task other than Kinetics, since these distinctions are not well captured by the existing classes. You could look at the AVA dataset and use its open and close classes as additional training data: https://research.google.com/ava/explore.html.
Best,
Joao
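With only ~100 videos, one common way to fine-tune is to keep the pre-trained I3D frozen as a feature extractor and train only a small classifier head on the pooled clip embeddings. A minimal NumPy sketch of such a head, assuming features have already been extracted once per video; `train_linear_head` and `predict` are illustrative names, not part of the kinetics-i3d API:

```python
import numpy as np

def train_linear_head(features, labels, epochs=200, lr=0.1):
    """Train a 2-class logistic-regression head on frozen clip features.

    features: (N, D) array of clip embeddings, e.g. the pooled output of
    a pre-trained network; labels: (N,) array with values in {0, 1}.
    """
    n, d = features.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        logits = features @ w + b
        probs = 1.0 / (1.0 + np.exp(-logits))   # sigmoid
        grad = probs - labels                   # dL/dlogits for BCE loss
        w -= lr * features.T @ grad / n
        b -= lr * grad.mean()
    return w, b

def predict(features, w, b):
    """Predict class 0/1 from features and a trained head."""
    return (features @ w + b > 0).astype(int)
```

Training only this head means very few parameters to fit, which is much harder to overfit on a small dataset than updating the whole network.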
How can I train on my own dataset? Can anyone advise me? Thanks in advance.
Hi, could you tell me how to train this model on my own dataset? Thanks.
Hi, same question. Can you please tell me how you trained your model for two classes, @95xueqian?
@95xueqian Hello. Could you please share your code for training the model on 2 classes?
Looks like a dead thread. Was anyone here able to train the model on 2 classes? @punitagrawal32 @aman-gupta1510 @xyy304519983 @95xueqian @Shanmugavadivelugopal
Thanks! Received!
@95xueqian Same question here. Hello, could you please share your code for training the model?
Hi, did you get the code for training on your own dataset? I am trying to do the same thing. Would you mind sharing it with me? Many thanks!
@Shanmugavadivelugopal Hi, did you get the code for training on your own dataset? I am trying to do the same thing. Would you mind sharing it with me? Many thanks!