Yuecong Min
The accuracy of the last epoch.
I plot them using matplotlib, similar to the code [here](https://github.com/ycmin95/pointlstm-gesture-recognition-pytorch/blob/master/dataset/utils.py#L140), and remove some unnecessary backgrounds in post-processing on the EMF files exported via [Inkscape](https://stackoverflow.com/questions/53195714/how-to-get-matplotlib-figures-in-emf-wmf-format).
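For reference, a minimal sketch of that workflow (the toy point cloud, file names, and the Inkscape 1.x CLI flag below are assumptions for illustration; the actual plotting code is in `dataset/utils.py`):

```python
import subprocess
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: registers the 3d projection on older matplotlib

# Toy point cloud just for illustration; the real plots come from dataset/utils.py.
points = np.random.rand(128, 3)

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.scatter(points[:, 0], points[:, 1], points[:, 2], s=5)
ax.set_axis_off()  # drop the axes/background before exporting

fig.savefig("gesture.svg", bbox_inches="tight", transparent=True)

# Convert SVG to EMF with Inkscape (1.x command-line syntax, per the linked answer);
# remaining background elements can then be removed manually in Inkscape.
subprocess.run(["inkscape", "gesture.svg", "--export-filename=gesture.emf"], check=True)
```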
Hi, thanks for your attention. As shown in L35-36 of the [process code](https://github.com/ycmin95/pointlstm-gesture-recognition-pytorch/blob/master/dataset/shrec17_process.py#L35), channels [0, 4) correspond to x, y, z, t, and channels [4, 7) correspond to the transformed x, y, z...
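In other words, for a processed frame with 7 feature channels you can slice them like this (the array name and the number of points are only illustrative):

```python
import numpy as np

# One processed frame: N points x 7 channels, following the layout
# described in shrec17_process.py (L35-36).
frame = np.random.rand(256, 7)

xyzt = frame[:, 0:4]             # original x, y, z plus the timestamp t
xyz_transformed = frame[:, 4:7]  # transformed x, y, z
```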
The main idea here is to change the `self.stage4` module to a PointLSTM module. You can compare their usage [here](https://github.com/ycmin95/pointlstm-gesture-recognition-pytorch/blob/master/experiments/models/motion.py#L66) [L66-L74 vs. L76-L82] to implement this.
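A rough structural sketch of what that swap looks like in PyTorch (the `PointLSTMStub` class, channel sizes, and surrounding layers below are placeholders, not the repo's actual PointLSTM constructor or `motion.py` architecture; see L76-L82 of `motion.py` for the real usage):

```python
import torch
import torch.nn as nn

# Placeholder standing in for the repository's PointLSTM module; the real
# class and its arguments live in the repo and are used in motion.py L76-L82.
class PointLSTMStub(nn.Module):
    def __init__(self, in_channels, hidden_channels):
        super().__init__()
        self.lstm = nn.LSTM(in_channels, hidden_channels, batch_first=True)

    def forward(self, x):  # x: (batch, num_points, in_channels)
        out, _ = self.lstm(x)
        return out

class MotionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage3 = nn.Linear(64, 128)
        # Original: self.stage4 = <a point-wise feature stage>
        # Modified: replace it with the (stub) PointLSTM stage.
        self.stage4 = PointLSTMStub(in_channels=128, hidden_channels=256)

    def forward(self, x):  # x: (batch, num_points, 64)
        x = self.stage3(x)
        return self.stage4(x)

net = MotionNet()
print(net(torch.rand(2, 128, 64)).shape)  # torch.Size([2, 128, 256])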
You can check whether the number of points and the channel dimension are consistent.
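For example, a quick sanity check on the intermediate tensor (the variable name and expected sizes here are only illustrative):

```python
import torch

features = torch.rand(2, 128, 256)  # (batch, num_points, channels), illustrative

num_points, channels = features.shape[1], features.shape[2]
assert num_points == 128, f"unexpected number of points: {num_points}"
assert channels == 256, f"unexpected channel dimension: {channels}"
```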