RNN-for-Human-Activity-Recognition-using-2D-Pose-Input

Can you provide the extracted skeleton features?

Open wxw420 opened this issue 7 years ago • 5 comments

Great project!

Can you provide the extracted skeleton features?

DATASET_PATH = "data/HAR_pose_activities/database/"

thank you~

wxw420 avatar Jan 18 '18 10:01 wxw420

Thanks!

Yep, absolutely. I've updated the data folder now; I should have included them from the start, sorry. I've only put up the .txt files of the 2D pose estimation output.

Btw this isn't the raw output from Openpose, it's just the x and y positions of the first person identified (it occasionally gets mixed up), and I've dropped the accuracy term.

I'll have a look at putting up a link to the raw output and actual frames later, but for now they're much too large. Hopefully the readme gives you an idea about their format.
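To give a rough idea of that step, each frame's 18 keypoints come out of Openpose as (x, y, confidence) triples, and the .txt files just keep the first person's x and y. A minimal sketch of the idea in Python (not the actual code):

    # Sketch only: keep x, y of the first detected person, dropping the
    # confidence term from each (x, y, confidence) triple.
    def first_person_xy(people):
        if not people:
            return None                      # no detection in this frame
        keypoints = people[0]                # first person found (occasionally the wrong one)
        # keypoints = [x0, y0, c0, x1, y1, c1, ...] -> drop every 3rd value
        return [v for i, v in enumerate(keypoints) if i % 3 != 2]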

Cheers

stuarteiffert avatar Jan 18 '18 22:01 stuarteiffert

Thanks!!

I'm trying your idea on the NTU RGB+D Action Recognition Dataset (https://github.com/shahroudy/NTURGB-D).

I have installed the Openpose lib and got the raw output from Openpose:

    %YAML:1.0
    pose_0: !!opencv-nd-matrix
       sizes: [ 1, 18, 3 ]
       dt: f
       data: [ 5.35293162e-01, 3.12815875e-01, 9.14700806e-01,
               5.30614555e-01, 3.56416523e-01, 9.51078296e-01,
               5.09295881e-01, 3.59006494e-01, 8.91916811e-01,
               4.98570561e-01, 4.24252331e-01, 9.28610802e-01,
               5.30589283e-01, 4.29711968e-01, 9.46624637e-01,
               5.44504344e-01, 3.56346011e-01, 9.31741714e-01,
               5.50560713e-01, 3.99921060e-01, 7.31239915e-01,
               5.55195034e-01, 4.18874562e-01, 9.04930174e-01,
               5.18497407e-01, 4.84173954e-01, 8.74068022e-01,
               5.18456042e-01, 5.73790371e-01, 9.36716795e-01,
               5.18432796e-01, 6.58208013e-01, 8.89132738e-01,
               5.42921841e-01, 4.84107494e-01, 8.67609262e-01,
               5.38304031e-01, 5.73818147e-01, 8.32006633e-01,
               5.32163203e-01, 6.47380590e-01, 8.65740597e-01,
               5.30736446e-01, 3.04666549e-01, 8.97628725e-01,
               5.41390955e-01, 3.04660797e-01, 9.18502986e-01,
               5.19996166e-01, 3.12632948e-01, 8.67253661e-01,
               5.44406831e-01, 3.12671602e-01, 2.54696369e-01 ]
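I think this can be read back in Python with OpenCV's FileStorage, something like the following (untested sketch; it assumes OpenCV can parse its own nd-matrix node, and the filename is just an example):

    # Sketch: read the Openpose .yml output back with OpenCV in Python.
    import cv2

    fs = cv2.FileStorage("frame_0_pose.yml", cv2.FILE_STORAGE_READ)  # hypothetical filename
    pose = fs.getNode("pose_0").mat()   # shape (num_people, 18, 3): x, y, confidence
    fs.release()
    print(pose[0, :, :2])               # first person's x, y keypoints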

Can you provide the code that generates the .txt files of the 2D pose estimation output?

I really like your project. Thank you for the code and ideas!

wxw420 avatar Jan 19 '18 01:01 wxw420

No worries, I've put up the utility functions I used, with a bit of documentation to help you out. These scripts are pretty cobbled together and reference local directories, so I recommend having a look through and adapting them.

Also, when I ran Openpose I output to .json format, not YAML, so the formatting is likely different. An example of the .json format the script expects is:

{ "version":1.0, "people":[ { "pose_keypoints":[ 291.993,161.616,0.884794,306.363,207.243,0.880718,278.934,205.944,0.826647,272.427,254.137,0.805595,264.65,289.366,0.735341,334.963,209.85,0.801041,341.536,261.996,0.808842,338.887,304.976,0.869565,285.479,294.582,0.77781,288.086,358.459,0.848771,285.461,407.972,0.843053,319.326,295.841,0.772092,328.54,359.792,0.852925,337.619,420.985,0.870045,289.383,156.426,0.79702,298.527,158.97,0.860202,0,0,0,316.749,162.981,0.916965 ], "face_keypoints":[

		],
		"hand_left_keypoints":[
			
		],
		"hand_right_keypoints":[
			
		]
	}
]

}
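In rough terms, each frame just becomes the first person's pose_keypoints with every third (confidence) value dropped, written as one comma-separated line. A stripped-down sketch of that idea (not the actual utility script, and the paths here are only examples):

    # Sketch: one Openpose .json file -> one line of 36 x, y values.
    import json, glob

    def frame_to_line(json_path):
        with open(json_path) as f:
            people = json.load(f)["people"]
        if not people:
            return None                              # no person detected in this frame
        kp = people[0]["pose_keypoints"]             # [x0, y0, c0, x1, y1, c1, ...]
        xy = [v for i, v in enumerate(kp) if i % 3 != 2]
        return ",".join(str(v) for v in xy)

    # e.g. write one line per frame, in order (hypothetical paths):
    with open("X_sequence.txt", "w") as out:
        for path in sorted(glob.glob("json_output/*_keypoints.json")):
            line = frame_to_line(path)
            if line:
                out.write(line + "\n")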

stuarteiffert avatar Jan 19 '18 02:01 stuarteiffert

Heads up, I had to remove the dataset as it is now >100 MB (GitHub's limit). I'll look at making it available elsewhere in the future.

stuarteiffert avatar Jan 30 '18 07:01 stuarteiffert

@wxw420 Hello, I was generating training data on my own dataset. I can save the keypoints as described: [ j0_x, j0_y, j1_x, j1_y, j2_x, j2_y, j3_x, j3_y, j4_x, j4_y, j5_x, j5_y, j6_x, j6_y, j7_x, j7_y, j8_x, j8_y, j9_x, j9_y, j10_x, j10_y, j11_x, j11_y, j12_x, j12_y, j13_x, j13_y, j14_x, j14_y, j15_x, j15_y, j16_x, j16_y, j17_x, j17_y ]. Do you know how to convert it to X_train.txt and Y_train.txt? Thanks!
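My guess from the README is that X_train.txt gets one comma-separated frame per line and Y_train.txt one label per fixed-length block of frames, so something like the sketch below, but I'm not sure I've got the format right:

    # My guess at the conversion (the 32-frame block length and comma-separated
    # layout are assumptions from the README, please correct me if it differs).
    # frames: list of per-frame keypoint lists [j0_x, j0_y, ..., j17_x, j17_y] (36 values)
    # label:  one activity class index for the whole clip
    N_STEPS = 32

    def append_sequence(frames, label, x_path="X_train.txt", y_path="Y_train.txt"):
        if len(frames) < N_STEPS:
            return                                   # skip clips that are too short
        with open(x_path, "a") as fx, open(y_path, "a") as fy:
            for frame in frames[:N_STEPS]:           # one frame per line, comma-separated
                fx.write(",".join(str(v) for v in frame) + "\n")
            fy.write(str(label) + "\n")              # one label per N_STEPS-frame block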

kli017 avatar Jan 25 '19 07:01 kli017