
Performance of evaluator

Open tasbolat1 opened this issue 4 years ago • 0 comments

Hi @arsalan-mousavian, I have run some tests on both uploaded evaluator models (the old version, which I call v1, and ACRONYM). I wanted to know how well the evaluator performs at detecting positive grasps. Using the json files in the splits folder, I evaluated train/test accuracy for positive grasps on the box and cylinder categories. For each model/split I ran the test three times, because the data loader behaves stochastically, so I also report the standard deviation. The results are shown below:
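For reference, the repeated-run protocol I used is roughly the following sketch. `evaluator_score` is a hypothetical stand-in for the real trained evaluator network (the actual project loads a PyTorch model); the point is the accuracy-over-positives computation and the mean/std across stochastic runs.

```python
import random
import statistics

def evaluator_score(grasp, rng):
    # Hypothetical placeholder: the real evaluator returns a success
    # probability in [0, 1] for a grasp; here we simulate one.
    return rng.random()

def positive_grasp_accuracy(grasps, rng, threshold=0.5):
    # All inputs are ground-truth positives, so a grasp counts as
    # correct when the evaluator scores it above the threshold.
    hits = sum(1 for g in grasps if evaluator_score(g, rng) > threshold)
    return hits / len(grasps)

def repeated_accuracy(grasps, n_runs=3, base_seed=0):
    # Repeat because the data loader samples stochastically;
    # report mean and standard deviation across runs.
    accs = []
    for run in range(n_runs):
        rng = random.Random(base_seed + run)
        accs.append(positive_grasp_accuracy(grasps, rng))
    return statistics.mean(accs), statistics.stdev(accs)

mean_acc, std_acc = repeated_accuracy(list(range(1000)))
print(f"accuracy: {mean_acc:.2f} ({std_acc:.2f})")
```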

| split | model   | box accuracy | cylinder accuracy |
|-------|---------|--------------|-------------------|
| train | v1      | 0.82 (0.22)  | 0.82 (0.21)       |
| test  | v1      | 0.74 (0.24)  | 0.72 (0.27)       |
| train | ACRONYM | 0.66 (0.28)  | 0.66 (0.27)       |
| test  | ACRONYM | 0.54 (0.25)  | 0.54 (0.25)       |

Overall I find these results poor, considering that random choice gives 0.5:

  1. There is substantial overfitting between the train and test splits
  2. The ACRONYM model performs worse than the v1 model

Can you please confirm whether these results are in line with the models? Maybe I am doing something wrong? Thanks in advance.

tasbolat1 · Jul 09 '21