
How to visualize action recognition results

SKBL5694 opened this issue 3 years ago · 41 comments

How can I generate an action recognition GIF like the demo in `mmskeleton/demo/recognition`?

SKBL5694 · Mar 04 '21

Bro, did you figure that out? I'm still trying to work out how to generate one.

MaarufB · Mar 12 '21

> Bro, did you figure that out? I'm still trying to work out how to generate one.

Yes, I've worked it out. But I've already finished my work for this week. If you're interested, maybe I can share my experience with you during working hours.

SKBL5694 · Mar 13 '21

Yes bro, I'm really interested. I've been trying to find such a resource; I'd be thankful if you could show me how. I'm still working on it, but nothing works.

MaarufB · Mar 13 '21

> Yes bro, I'm really interested. I've been trying to find such a resource; I'd be thankful if you could show me how. I'm still working on it, but nothing works.

I'm glad to help you with that if I can. But I'm not at my workplace now; I think I can help you in two days, since the whole process is a bit complicated. I'd also like to know whether you understand Chinese, or whether you use WeChat or other software that allows real-time communication; if so, it will make our communication more convenient.

SKBL5694 · Mar 13 '21

@SKBL5694 Hello bro. Did you use OpenPose for action recognition with mmskeleton?

MaarufB · Mar 16 '21

> @SKBL5694 Hello bro. Did you use OpenPose for action recognition with mmskeleton?

I do use OpenPose. Actually, I use the previous version of mmskeleton, called st-gcn, to visualize the result.

SKBL5694 · Mar 16 '21

Is it possible to use the current mmskeleton to visualize the video result for action recognition? Also, the pose_demo for mmskeleton doesn't include action recognition.

MaarufB · Mar 16 '21

> Is it possible to use the current mmskeleton to visualize the video result for action recognition? Also, the pose_demo for mmskeleton doesn't include action recognition.

Good question. Based on my understanding of this framework, I'd say probably not. In the old version (st-gcn), there is an off-the-shelf script that supports training, testing, and visualization (both real-time and offline). If you use mmskeleton, you cannot use this script.
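
For reference, if I remember correctly, that script is run with something like `python main.py demo_offline --video <your_video>` (this is from memory; check the st-gcn README for the exact subcommands and flags).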

SKBL5694 · Mar 16 '21

Ohh, I see. Thanks for saving my time, bro. Can I use the st_gcn under `deprecated/origin_stgcn_repo` in the mmskeleton repo? Thanks once again, bro.

MaarufB · Mar 16 '21

> Ohh, I see. Thanks for saving my time, bro. Can I use the st_gcn under `deprecated/origin_stgcn_repo` in the mmskeleton repo? Thanks once again, bro.

I'm not sure about that. I do the visualization with this repo: https://github.com/yysijie/st-gcn

SKBL5694 · Mar 16 '21

Thanks for the info, bro. You've really helped me a lot.

MaarufB · Mar 16 '21

> Thanks for the info, bro. You've really helped me a lot.

My pleasure.

SKBL5694 · Mar 16 '21

Bro, how about training on my own dataset?

MaarufB · Mar 17 '21

> Bro, how about training on my own dataset?

Two tips. First, you should know that st-gcn uses two kinds of datasets: Kinetics and NTU-RGB+D. The former has 18 joints per person with three coordinate channels, (x, y, c), where c is the confidence from the OpenPose output. The latter has 25 joints per person, also with three channels, but the third coordinate is z instead of confidence. Interestingly, both can be fed to the net; though I'm a little confused by this operation, the paper also mentions it in Section 3.2, paragraphs 2 and 3. Second, this means you can choose whichever format you like (18 joints with confidence, or 25 joints with z) to build your own dataset. You can refer to these two files, the NTU dataset and the Kinetics dataset, to build your own.
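
To make the shapes concrete, here is a rough sketch of what one sample looks like in each format (my own simplification, not the official data-generation script):

```python
import numpy as np

# st-gcn consumes samples shaped (C, T, V, M):
# C = coordinate channels, T = frames, V = joints, M = persons.
T, M = 300, 2

# Kinetics-style sample: 18 joints, channels (x, y, confidence).
kinetics_sample = np.zeros((3, T, 18, M), dtype=np.float32)

# NTU-RGB+D-style sample: 25 joints, channels (x, y, z).
ntu_sample = np.zeros((3, T, 25, M), dtype=np.float32)

# Either layout feeds the same network; only the skeleton graph
# (number of joints and their edges) changes with the dataset.
print(kinetics_sample.shape)  # (3, 300, 18, 2)
print(ntu_sample.shape)       # (3, 300, 25, 2)
```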

SKBL5694 · Mar 17 '21

Sorry for the late response, bro. May I know how you built your own dataset? I was just following the given procedure for building one, but something's not right when I run the training script.

MaarufB · Mar 17 '21

> Sorry for the late response, bro. May I know how you built your own dataset? I was just following the given procedure for building one, but something's not right when I run the training script.

Emm... I don't know about 'the given procedure on how to build your own dataset'. For me, I use OpenPose to extract joint data from my own videos (which I have extracted into frames). Then I write a script to convert the OpenPose output into a binary file to feed the net. It's hard and painful work; I'm still working on that script too.
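
The rough idea of that script is something like this (a simplified sketch under my own assumptions: COCO-18 output, one person per frame, placeholder paths; the real one also needs person tracking):

```python
import glob
import json
import numpy as np

def openpose_frames_to_sample(json_dir, num_frames=300, num_joints=18):
    """Pack per-frame OpenPose JSON files into one (C, T, V, M) array."""
    data = np.zeros((3, num_frames, num_joints, 1), dtype=np.float32)
    for t, path in enumerate(sorted(glob.glob(f"{json_dir}/*_keypoints.json"))):
        if t >= num_frames:
            break  # truncate long videos; padding rules are up to you
        with open(path) as f:
            people = json.load(f)["people"]
        if not people:
            continue  # no detection in this frame
        # pose_keypoints_2d is a flat [x1, y1, c1, x2, y2, c2, ...] list.
        kp = np.array(people[0]["pose_keypoints_2d"]).reshape(-1, 3)
        data[:, t, :, 0] = kp[:num_joints].T  # rows become x, y, confidence
    return data

# np.save("my_sample.npy", openpose_frames_to_sample("output_jsons"))
```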

SKBL5694 · Mar 17 '21

Actually, I followed this one: https://github.com/open-mmlab/mmskeleton/blob/master/doc/CUSTOM_DATASET.md. It doesn't seem to work; it doesn't print the training loss.

MaarufB · Mar 17 '21

> Actually, I followed this one: https://github.com/open-mmlab/mmskeleton/blob/master/doc/CUSTOM_DATASET.md. It doesn't seem to work; it doesn't print the training loss.

Do you mean that you have generated your own dataset's .json file like this?

```json
{
  "info": {
    "video_name": "skateboarding.mp4",
    "resolution": [340, 256],
    "num_frame": 300,
    "num_keypoints": 17,
    "keypoint_channels": ["x", "y", "score"],
    "version": "1.0"
  },
  "annotations": [
    {
      "frame_index": 0,
      "id": 0,
      "person_id": null,
      "keypoints": [[x, y, score], [x, y, score], ...]
    },
    ...
  ],
  "category_id": 0
}
```
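
If it helps, here is a quick sanity check I would run on such a file before training (a minimal sketch; the filename is a placeholder, and it assumes the placeholders above are filled with real numbers):

```python
import json

with open("skateboarding.json") as f:
    sample = json.load(f)

info = sample["info"]
print(info["video_name"], info["resolution"], info["num_frame"])
print("annotations:", len(sample["annotations"]))

# Every keypoint should have one value per declared channel (x, y, score).
channels = info["keypoint_channels"]
for ann in sample["annotations"]:
    assert all(len(kp) == len(channels) for kp in ann["keypoints"])
```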

SKBL5694 · Mar 17 '21

Yes bro.

MaarufB · Mar 17 '21

> Yes bro.

And you are using mmskeleton, not st-gcn, to train, am I right?

SKBL5694 · Mar 17 '21

Yes bro. Sorry, I want to try it, and if it doesn't work I'll use st-gcn.

MaarufB · Mar 17 '21

> Yes bro. Sorry, I want to try it, and if it doesn't work I'll use st-gcn.

Relax, bro, I didn't mean to blame you. I'm not familiar with mmskeleton, because I wanted to visualize my results and build a real-time recognition program, but mmskeleton doesn't support those operations. So if you have these questions, I suggest you search for 'loss' in the issues. I remember that, while searching for answers to my own problem, I saw questions similar to yours (loss does not decrease). If you really want to train with mmskeleton and make a visualization, #291 may be helpful (though I could not follow those steps successfully).

SKBL5694 · Mar 17 '21

Okay bro, thanks. I really got stuck on this and want to try your tips later, I think. Thanks bro.

MaarufB · Mar 17 '21

> Bro, how about training on my own dataset?

> Two tips. First, you should know that st-gcn uses two kinds of datasets: Kinetics and NTU-RGB+D. The former has 18 joints per person with three coordinate channels, (x, y, c), where c is the confidence from the OpenPose output. The latter has 25 joints per person, also with three channels, but the third coordinate is z instead of confidence. Interestingly, both can be fed to the net; though I'm a little confused by this operation, the paper also mentions it in Section 3.2, paragraphs 2 and 3. Second, this means you can choose whichever format you like (18 joints with confidence, or 25 joints with z) to build your own dataset. You can refer to these two files, the NTU dataset and the Kinetics dataset, to build your own.

Hi bro, I have installed st-gcn and I can run the demo code, but when I try to use the NTU-RGB+D model to run the demo, I get the following errors:

```
Traceback (most recent call last):
  File "main.py", line 31, in <module>
    p = Processor(sys.argv[2:])
  File "/home/weilong/st-gcn/processor/io.py", line 28, in __init__
    self.load_weights()
  File "/home/weilong/st-gcn/processor/io.py", line 75, in load_weights
    self.arg.ignore_weights)
  File "/usr/local/lib/python3.6/dist-packages/torchlight-1.0-py3.6.egg/torchlight/io.py", line 89, in load_weights
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1045, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for Model:
    size mismatch for A: copying a param with shape torch.Size([3, 25, 25]) from checkpoint, the shape in current model is torch.Size([3, 18, 18]).
    size mismatch for data_bn.weight: copying a param with shape torch.Size([75]) from checkpoint, the shape in current model is torch.Size([54]).
    size mismatch for data_bn.bias: copying a param with shape torch.Size([75]) from checkpoint, the shape in current model is torch.Size([54]).
    size mismatch for data_bn.running_mean: copying a param with shape torch.Size([75]) from checkpoint, the shape in current model is torch.Size([54]).
    size mismatch for data_bn.running_var: copying a param with shape torch.Size([75]) from checkpoint, the shape in current model is torch.Size([54]).
    size mismatch for edge_importance.0: copying a param with shape torch.Size([3, 25, 25]) from checkpoint, the shape in current model is torch.Size([3, 18, 18]).
    size mismatch for edge_importance.1: copying a param with shape torch.Size([3, 25, 25]) from checkpoint, the shape in current model is torch.Size([3, 18, 18]).
    size mismatch for edge_importance.2: copying a param with shape torch.Size([3, 25, 25]) from checkpoint, the shape in current model is torch.Size([3, 18, 18]).
    size mismatch for edge_importance.3: copying a param with shape torch.Size([3, 25, 25]) from checkpoint, the shape in current model is torch.Size([3, 18, 18]).
    size mismatch for edge_importance.4: copying a param with shape torch.Size([3, 25, 25]) from checkpoint, the shape in current model is torch.Size([3, 18, 18]).
    size mismatch for edge_importance.5: copying a param with shape torch.Size([3, 25, 25]) from checkpoint, the shape in current model is torch.Size([3, 18, 18]).
```

Do you know what the problem is? I'm also a Chinese student; if it's convenient, please add me on WeChat at 13735698625. Thanks!

zren2 · Mar 18 '21

> Bro, how about training on my own dataset?

> Two tips. First, you should know that st-gcn uses two kinds of datasets: Kinetics and NTU-RGB+D. The former has 18 joints per person with three coordinate channels, (x, y, c), where c is the confidence from the OpenPose output. The latter has 25 joints per person, also with three channels, but the third coordinate is z instead of confidence. Interestingly, both can be fed to the net; though I'm a little confused by this operation, the paper also mentions it in Section 3.2, paragraphs 2 and 3. Second, this means you can choose whichever format you like (18 joints with confidence, or 25 joints with z) to build your own dataset. You can refer to these two files, the NTU dataset and the Kinetics dataset, to build your own.

In the Kinetics data, with the three coordinate channels (x, y, c), why are x and y less than 1? Looking forward to your reply, thanks.

ChalsonLee · May 17 '21

> In the Kinetics data, with the three coordinate channels (x, y, c), why are x and y less than 1? Looking forward to your reply, thanks.

Do you mean st-gcn?

SKBL5694 · May 17 '21

> In the Kinetics data, with the three coordinate channels (x, y, c), why are x and y less than 1? Looking forward to your reply, thanks.

> Do you mean st-gcn?

Yes, when I'm training st-gcn on the Kinetics dataset.

ChalsonLee · May 17 '21

> In the Kinetics data, with the three coordinate channels (x, y, c), why are x and y less than 1? Looking forward to your reply, thanks.

> Do you mean st-gcn?

> Yes, when I'm training st-gcn on the Kinetics dataset.

Maybe this will be helpful: https://github.com/yysijie/st-gcn/blob/221c0e152054b8da593774c0d483e59befdb9061/processor/demo_offline.py#L143-L147
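
Roughly, those lines divide the pixel coordinates by the frame resolution and center them, which is why x and y come out less than 1. A simplified sketch (my own illustration, not the exact code at that link):

```python
import numpy as np

def normalize_pose(pose, width, height):
    """Map pixel coordinates into roughly [-0.5, 0.5].

    pose: (num_person, num_joint, 3) array of (x, y, confidence).
    """
    pose = pose.astype(np.float32)
    pose[:, :, 0] = pose[:, :, 0] / width - 0.5   # x
    pose[:, :, 1] = pose[:, :, 1] / height - 0.5  # y
    missed = pose[:, :, 2] == 0   # joints OpenPose failed to detect
    pose[missed, 0:2] = 0         # keep missed joints at the origin
    return pose
```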

SKBL5694 · May 17 '21

> In the Kinetics data, with the three coordinate channels (x, y, c), why are x and y less than 1? Looking forward to your reply, thanks.

> Do you mean st-gcn?

> Yes, when I'm training st-gcn on the Kinetics dataset.
>
> Maybe this will be helpful: https://github.com/yysijie/st-gcn/blob/221c0e152054b8da593774c0d483e59befdb9061/processor/demo_offline.py#L143-L147

Thanks a lot.

ChalsonLee · May 17 '21

> Thanks a lot.

:)

SKBL5694 · May 17 '21