
generate action data using openpose or SSD

Dataset-maker-for-action-recognition

----------------------Using openpose---------------------------------

Attention: there must be only one person in the field of view.

Get human joint information using openpose; this demo uses a TensorFlow implementation.

This implementation is trained on COCO, which defines 18 joints per person:

Nose = 0
Neck = 1
RShoulder = 2
RElbow = 3
RWrist = 4
LShoulder = 5
LElbow = 6
LWrist = 7
RHip = 8
RKnee = 9
RAnkle = 10
LHip = 11
LKnee = 12
LAnkle = 13
REye = 14
LEye = 15
REar = 16
LEar = 17
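The joint table above can be kept as a lookup dictionary when post-processing saved data. This is a small illustrative sketch (the dictionary name is not part of the project's code):

```python
# COCO 18-joint index-to-name mapping, as listed above
COCO_JOINTS = {
    0: "Nose", 1: "Neck",
    2: "RShoulder", 3: "RElbow", 4: "RWrist",
    5: "LShoulder", 6: "LElbow", 7: "LWrist",
    8: "RHip", 9: "RKnee", 10: "RAnkle",
    11: "LHip", 12: "LKnee", 13: "LAnkle",
    14: "REye", 15: "LEye", 16: "REar", 17: "LEar",
}

print(COCO_JOINTS[10])  # → RAnkle
```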

run

./pose/models/pretrained/mobilenet_v1_0.75_224_2017_06_14/download.sh

python run_cam.py

Press 's' to save joint information and joint images while running; press 'q' to quit.

The default camera resolution is 640x480. Each saved joint is stored in the format t_x_y, where 't' is the joint index, 'x' is the horizontal pixel location of the joint on the image, and 'y' is the vertical pixel location.
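A saved t_x_y token can be split back into its parts when loading the dataset. A minimal sketch, assuming each joint is stored as a single underscore-separated string (the function name is hypothetical):

```python
def parse_joint(token):
    """Split a saved joint string 't_x_y' into (joint_index, x, y).

    't' is the COCO joint index, 'x'/'y' are pixel coordinates
    on the (default 640x480) camera image.
    """
    t, x, y = token.split("_")
    return int(t), int(x), int(y)

print(parse_joint("4_320_240"))  # → (4, 320, 240), i.e. RWrist at image center
```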

----------------------Using SSD----------------------------------

Attention: there must be only one person in the field of view.

Use SSD to detect the person, then save the cropped person image.

run

python dataset_maker.py

Press 's' to save frames.

samples: