taskgrouping

How can we reproduce the results in the tables of your paper?

Open shun-zheng opened this issue 3 years ago • 4 comments

Thanks for such a great paper.

I cannot reproduce the relative performance gains shown in the tables of your paper using the "results_xxx.txt" files in https://github.com/tstandley/taskgrouping/tree/master/network_selection.

I wonder whether it is because I do not use the single-task loss obtained on 1/2-SNT networks. Do you release those losses?

shun-zheng avatar Jan 18 '22 11:01 shun-zheng

The losses for the 1/2-SNT networks are on the first line of the results_xxx.txt file, one per task.

This is probably not the most elegant way of doing it, but it's what I did.
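As a sketch of how those baselines could be read, assuming the first line is a whitespace-separated list of per-task losses (the function name and the exact file format are assumptions — check the actual results files before relying on this):

```python
# Hypothetical helper: read the per-task 1/2-SNT baseline losses from the
# first line of a results file. The whitespace-separated layout is an
# assumption, not a confirmed spec of the results_xxx.txt format.
def read_snt_baselines(path):
    with open(path) as f:
        first_line = f.readline()
    return [float(tok) for tok in first_line.split()]
```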

tstandley avatar Jan 24 '22 20:01 tstandley

Thanks for the reply.

However, I still cannot reproduce your results in Tables 2, 5, or 6.

For example, Table 5 denotes the setting with a higher-capacity encoder, which corresponds to 'results_largexxx.txt'. But I do not observe any performance gains reaching 20%-70% for the Edges task in Table 5.

Could you please tell me how to reproduce these tables?

shun-zheng avatar Jan 25 '22 03:01 shun-zheng

For example, the loss for training edges alone with a 1/2 SNT network is 0.02023. The loss for training it with normals is 0.01179, and 0.02023/0.01179=1.7159. So that's where the 71.59% comes from.

Make sure you're using the test-set results: https://github.com/tstandley/taskgrouping/blob/master/network_selection/results_large_test.txt
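The arithmetic above can be reproduced directly; the helper name below is illustrative, not from the repo:

```python
def relative_gain(single_task_loss, multitask_loss):
    """Convention used in the paper's tables: the loss of training the task
    alone on a 1/2-SNT network divided by its loss in the multi-task network.
    A ratio above 1.0 means multi-task training helped."""
    return single_task_loss / multitask_loss

# Edges alone: 0.02023; edges trained together with normals: 0.01179.
gain = relative_gain(0.02023, 0.01179)
print(f"{(gain - 1) * 100:.2f}%")  # prints 71.59%
```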

tstandley avatar Jan 25 '22 04:01 tstandley

Got it. Thanks.

I thought the percentage denoted the proportion of loss reduction, which would be (0.02023 - 0.01179) / 0.02023, whereas you used the percentage of loss increase when comparing single-task learning against multi-task learning.
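For concreteness, the two conventions give different numbers for the same pair of losses (a minimal sketch using the values quoted above):

```python
single, multi = 0.02023, 0.01179  # edges alone vs. edges trained with normals

# Convention used in the paper's tables: percentage increase of the
# single-task loss relative to the multi-task loss.
paper_pct = (single / multi - 1) * 100

# Convention assumed in the question: proportion of loss reduction
# relative to the single-task baseline.
reduction_pct = (single - multi) / single * 100

print(f"paper convention: {paper_pct:.2f}%")      # prints 71.59%
print(f"loss reduction:   {reduction_pct:.2f}%")  # prints 41.72%
```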

shun-zheng avatar Jan 25 '22 05:01 shun-zheng