DASTM
About test accuracy
Hello, thanks for your excellent work and the elegant code released! Is the accuracy reported in the paper the validation accuracy? I re-ran the code and found that the validation accuracy is almost identical to the paper, but the test-set accuracy is somewhat lower.
Hi Mao, we report the test accuracy, not the validation accuracy.
The result I got after training is about 70% on the test set.
Are you referring to the results in the first column of Table 1? Nearly all of the 5-way 1-shot results on NTU-T are better than 70%. Have you checked other factors such as the environment, dataset, and devices? I suggest you re-download the dataset, keep the environment the same, and try again.
We used the dataset and the command for training the full model that you provided in the README:
python train.py --SA 1 --reg 0.1
We trained twice; the results were 70.3% and 71.9%, respectively.
By the way, can different versions of PyTorch cause a performance gap of ~5%? I suggest you release the training log from your environment to clarify this.
Please tell us which dataset you used. We have masked the data path in this repo.
The datasets were updated on 4 Oct.
My 5-way 1-shot result on NTU-T with STGCN is 72%. I used the command python train.py --SA 1 --reg 0.1, but this is much lower than the 75.1% reported in the paper. Why?
As reported in the paper, we found that the random seed can cause large evaluation variance. Please run with multiple seeds and look at the average performance; for example, use seed=2022/2021/2020.
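A minimal sketch of the aggregation step suggested above, using only Python's standard library. The two accuracy values are the runs reported earlier in this thread; how the seed is actually passed to train.py depends on the repo's CLI and is not shown here.

```python
import statistics

# Test accuracies (%) from repeated runs with different seeds (e.g. 2020/2021/2022).
# The two values below are the runs reported earlier in this thread; add your own.
accuracies = [70.3, 71.9]

mean_acc = statistics.mean(accuracies)   # average over seeds
std_acc = statistics.stdev(accuracies)   # sample standard deviation across seeds

print(f"mean = {mean_acc:.1f}%, std = {std_acc:.1f}%")
```

With more seeds, the mean stabilizes and the standard deviation gives a rough sense of whether a single-run gap (e.g. 72% vs. 75.1%) is within seed-to-seed variance.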
Thank you for your reply, I will have a try.