mmskeleton
Improve accuracy
@yysijie Hello, I have been learning GCN recently and have run st-gcn on the HMDB51 dataset successfully, but I have another question. When I fine-tuned the pretrained model on HMDB51, the accuracy was about 60%, so I tried to improve it. Since HMDB51 videos are shorter than those in your dataset, I changed the input data length from 300 to 150 and set window_size=100, but it didn't help. Is it because my data is too short, or is there another reason? Could you give me some advice? One more thing: compared to some other methods, I think GCN may offer better privacy, so it is worth studying further.
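For readers unfamiliar with what the window_size change above means in practice, here is a minimal sketch of one common interpretation: randomly cropping each (C, T, V, M) skeleton tensor to a fixed temporal window before feeding it to the network. This is only an illustration; the function name and tensor layout are assumptions, not the repository's exact feeder code.

```python
import numpy as np

def random_temporal_crop(data, window_size):
    """Randomly crop a skeleton sequence shaped (C, T, V, M) to `window_size` frames.

    Sequences already shorter than `window_size` are returned unchanged;
    a separate padding step (see later in this thread) can fill them back up.
    """
    C, T, V, M = data.shape
    if T <= window_size:
        return data
    start = np.random.randint(0, T - window_size + 1)
    return data[:, start:start + window_size, :, :]
```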
Are you training on HMDB51?
@yjxiong Yes, I used the Kinetics model provided by the author and fine-tuned it on HMDB51.
@yjxiong On the HMDB51 dataset, with a train:val split of 4:1, I get Top1 = 60% on the validation set.
Hi, I am also testing my own dataset. How did you transform your data to match the input format of st-gcn? Could you give me some detailed steps? Thanks a lot!
Just use OpenPose to get the JSON results. You can look at /tools/utils/openpose.py to see how the OpenPose JSON output is used. Good luck!
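For anyone stuck at this step, here is a hedged sketch of how per-frame OpenPose JSON output can be collected into the (C, T, V, M) array that st-gcn expects. The `*_keypoints.json` naming, the `pose_keypoints_2d` key (older OpenPose versions use `pose_keypoints`), and the joint count are assumptions about your OpenPose setup, and the joint ordering may need remapping depending on the OpenPose body model you ran.

```python
import glob
import json
import numpy as np

def load_openpose_sequence(json_dir, num_joints=18, max_persons=2):
    """Collect one-JSON-per-frame OpenPose output into a (C, T, V, M) array.

    C = (x, y, confidence), T = frames, V = joints, M = persons.
    """
    files = sorted(glob.glob(f"{json_dir}/*_keypoints.json"))
    data = np.zeros((3, len(files), num_joints, max_persons), dtype=np.float32)
    for t, path in enumerate(files):
        with open(path) as f:
            frame = json.load(f)
        for m, person in enumerate(frame.get("people", [])[:max_persons]):
            # flat [x1, y1, c1, x2, y2, c2, ...] list -> (joints, 3)
            kpts = np.array(person["pose_keypoints_2d"]).reshape(-1, 3)
            data[0, t, :, m] = kpts[:num_joints, 0]  # x
            data[1, t, :, m] = kpts[:num_joints, 1]  # y
            data[2, t, :, m] = kpts[:num_joints, 2]  # confidence
    return data
```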
Thanks a lot. My own dataset has 10 action classes; right now Top1 accuracy is 10% and Top5 is 50%, which means training has basically failed. How did you fine-tune? With the training procedure they provide, I found it trains from scratch and never uses the pretrained weights.
@fromlimbo How long are the videos in your dataset? The author's are about 10 seconds, and classification tends to work well around that range. Beyond that, try tuning the learning rate and so on. Hyperparameter tuning is a bit of a dark art; I haven't found any clear pattern.
I followed the paper: my clips are originally under 100 frames and are looped to pad them up to 300 frames, about 10 seconds. Then I noticed that during training the weights and biases are randomly initialized and the pretrained model is never used. Did you use the pretrained weights for your training?
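As a reference for the padding mentioned above, here is a minimal sketch of looping a short skeleton sequence until it reaches a fixed length. It only illustrates the idea of "repeat the clip until it fills 300 frames"; the function name and array layout are assumptions rather than the repository's actual code.

```python
import numpy as np

def loop_pad(data, target_len=300):
    """Repeat a (C, T, V, M) skeleton sequence along T until it has `target_len` frames."""
    C, T, V, M = data.shape
    if T >= target_len:
        return data[:, :target_len]
    reps = int(np.ceil(target_len / T))
    return np.concatenate([data] * reps, axis=1)[:, :target_len]
```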
Training from scratch worked out for me: Top1 accuracy 81.67%, Top5 98%, which should be acceptable.
@fromlimbo Could you share more information? I used my own dataset with 10 classes, but the accuracy is very low. The dataset has 330 samples and each sequence is 150 frames long. Please help me. My accuracy on my own dataset is also very low; how did you train from scratch? I also noticed that the weights and biases are randomly initialized.
@fromlimbo My accuracy isn't that high; I'm using the HMDB51 dataset...
I just found that you only need to train for more rounds. Now I get 82% Top1 and 100% Top5. You can also reuse better weights by adding a weights entry in train.yaml, i.e. keep reloading a checkpoint and continuing to train from it.
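For clarity, here is a hedged sketch of what "reuse the weights" means at the PyTorch level: start the next training run from a saved checkpoint instead of from random initialization, which is effectively what the weights entry in train.yaml does inside the framework. The tiny model and the path below are placeholders, not mmskeleton's actual classes or files.

```python
import torch
import torch.nn as nn

# Placeholder model standing in for the real ST-GCN network.
model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10))

ckpt_path = "work_dir/epoch50_model.pt"  # checkpoint from the previous run (placeholder path)
try:
    state = torch.load(ckpt_path, map_location="cpu")
    # strict=False tolerates a replaced classification head (e.g. 400 classes -> 10 classes).
    model.load_state_dict(state, strict=False)
    print(f"resumed from {ckpt_path}")
except FileNotFoundError:
    print("no checkpoint found, training from scratch")
```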
@yanqian123 @fromlimbo May I ask: what is the difference between using openpose.py to convert the per-frame .json files into a video.json, and the video.json that the demo writes to /openpose_estimation/data/video.json when I run it?
There's no real difference.
@huanglai666 You can refer to the contents of openpose.py. When the st-gcn demo analyzes a video, it first calls OpenPose and then uses the data from OpenPose's JSON output.
Hi @yanqian123, I think our model is not very sensitive to the length of the input sequences. Because of the different data distributions of the two datasets, training from a pretrained model is not always better than training from scratch.
@fromlimbo @yanqian123 Could you please tell me what the inference time is? Thanks!
I tried to use the "kinetics-skeleton" dataset to train a new model, but both Top1 and Top5 are only a little over 40%. I didn't change any of the "train.yaml" parameters. Why does this happen? @fromlimbo @Zakeiswo @yjxiong @YanYan0716 @eillenlun
@Zakeiswo I have a very similar problem: I trained st-gcn on my dataset and my accuracy is low. Can you please give more details on how you made it work better? I tried to translate your comments; they say you trained it for longer. Can you give details, e.g. how many epochs, the learning rate, etc.? More importantly, you mention "weights" in train.yaml; can you let me know what you mean by that? I also tried switching to Adam with a reduce-LR-on-plateau scheduler, but for some reason it performed worse (lower accuracy, though it converged faster). Thanks!
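In case it helps others compare, here is a minimal sketch of the Adam plus reduce-LR-on-plateau setup described above, using standard PyTorch APIs. The model, learning rate, and patience values are placeholders for illustration, not recommended settings.

```python
import torch
import torch.nn as nn

model = nn.Linear(256, 10)  # placeholder for the real ST-GCN model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=5)

for epoch in range(80):
    val_loss = 1.0 / (epoch + 1)   # placeholder for the real validation loss
    scheduler.step(val_loss)       # lowers the LR when val_loss stops improving
```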
Did you use demo_old.py to extract the skeleton-sequence JSON file for each video? Why is the frame_index in the extracted JSON files out of order? Also, how should the learning rate be tuned?
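If the per-frame entries really do come out unordered, sorting them by frame_index before building the input tensor is a simple workaround. The sketch below assumes the kinetics-skeleton style layout, i.e. a "data" list of per-frame dicts each carrying a "frame_index"; the file names are placeholders.

```python
import json

# Load a (possibly unordered) video.json, sort its frames, and write it back out.
with open("video.json") as f:          # placeholder input path
    video = json.load(f)

video["data"].sort(key=lambda frame: frame["frame_index"])

with open("video_sorted.json", "w") as f:
    json.dump(video, f)
```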
Hey, I'd like to ask how you obtained the skeleton sequences for your dataset. The method in the demo can only process one video at a time, and the frame_index comes out unordered. How did you get the skeleton JSON files? Also, how did you tune the hyperparameters afterwards? My training results have not been good.
It's been too long and I've forgotten the details. As I recall, I stepped through their demo line by line in the debugger and then modified it myself. The skeleton JSON came from OpenPose, and the hyperparameters were found by trial and error.
Could I add you on QQ? I've only just started working on this.
Sorry, I'm not working on this anymore. This model probably doesn't have much of a future; you might want to look at newer models instead.