
[Question] About dataset condition for MTL

Open qjadud1994 opened this issue 2 years ago • 3 comments

Thank you for your awesome work. It should be greatly helpful to everyone who is interested in MTL.

I'm about to study MTL and have one question.

I think the dataset for MTL should be in the form {Input: X(i), GT: Y_task1(i), Y_task2(i), ..., Y_taskT(i)}.

However, I think it is difficult to satisfy this condition in a real-world environment. When we have to train on task-specific datasets D_task1 = {Input: X_task1, GT: Y_task1} and D_task2 = {Input: X_task2, GT: Y_task2} simultaneously, how do we do MTL?

For example, suppose we aim to set up MTL for both salient object detection and depth estimation. For the salient object detection task, we use saliency labels from the PASCAL VOC dataset. For the depth estimation task, we use depth-map labels from the NYUD dataset. (The two datasets consist of entirely different input images: PASCAL VOC does not contain depth-map labels, and NYUD does not contain saliency labels.)

Under this condition, how do we construct an MTL model? Does anyone know about MTL with task-specific datasets, or any related work?
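To make the question concrete, here is a rough sketch of the fully-labelled setting I have in mind (the module names and shapes are made up for illustration and are not from this repo): one shared encoder, one head per task, and the per-task losses summed on the same batch, which assumes every input has labels for every task.

```python
import torch
import torch.nn as nn

# Rough sketch of the fully-labelled setting: every input X(i) comes with
# ground truth for *all* tasks. Module names and shapes are made up for
# illustration; this is not code from this repo.

class MultiTaskModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder followed by one lightweight head per task.
        self.encoder = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
        self.head_sal = nn.Conv2d(64, 1, 1)    # salient object detection head
        self.head_depth = nn.Conv2d(64, 1, 1)  # depth estimation head

    def forward(self, x):
        feat = self.encoder(x)
        return {"sal": self.head_sal(feat), "depth": self.head_depth(feat)}

model = MultiTaskModel()
criterion_sal = nn.BCEWithLogitsLoss()
criterion_depth = nn.L1Loss()

# One fully-labelled batch: X(i) together with Y_sal(i) AND Y_depth(i).
x = torch.randn(2, 3, 64, 64)
y_sal = torch.randint(0, 2, (2, 1, 64, 64)).float()
y_depth = torch.rand(2, 1, 64, 64)

out = model(x)
loss = criterion_sal(out["sal"], y_sal) + criterion_depth(out["depth"], y_depth)
loss.backward()  # both task losses are computed on the same images
```

My problem is exactly that this last step is impossible when the saliency and depth labels come from two disjoint image sets.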

qjadud1994 avatar Jul 09 '21 08:07 qjadud1994

You can read the Ubernet paper: https://arxiv.org/pdf/1609.02132.pdf

wyfeng1020 avatar Sep 08 '21 07:09 wyfeng1020

I also want to ask how you obtained the saliency labels for the PASCAL dataset. The PASCAL-S dataset only has 850 images, but your saliency dataset has 10,500 images. Thank you.

BQT11 avatar Sep 22 '21 06:09 BQT11

You can read Ubernet: https://arxiv.org/pdf/1609.02132.pdf

Ubernet is not open source, and its training strategy does not assign a ground truth of 0 to the loss for the missing task. In practice, the different datasets are trained separately and sequentially, right?
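If I understand that strategy correctly, it looks roughly like the sketch below (a minimal toy example, not Ubernet's actual code or this repo's code; the datasets, loaders, and module names are all made up): alternate batches from the two task-specific datasets and, at each step, back-propagate only the loss of the task whose labels are present.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Minimal sketch: two disjoint task datasets trained by alternating batches.
# Each step uses only the loss of the task whose labels exist; no fake
# "GT = 0" is created for the other task. Toy tensors stand in for
# PASCAL saliency / NYUD depth data.
sal_ds = TensorDataset(torch.randn(8, 3, 32, 32),
                       torch.randint(0, 2, (8, 1, 32, 32)).float())
depth_ds = TensorDataset(torch.randn(8, 3, 32, 32),
                         torch.rand(8, 1, 32, 32))
sal_loader = DataLoader(sal_ds, batch_size=4)
depth_loader = DataLoader(depth_ds, batch_size=4)

encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
head = {"sal": nn.Conv2d(16, 1, 1), "depth": nn.Conv2d(16, 1, 1)}
criterion = {"sal": nn.BCEWithLogitsLoss(), "depth": nn.L1Loss()}

params = (list(encoder.parameters())
          + list(head["sal"].parameters())
          + list(head["depth"].parameters()))
optimizer = torch.optim.SGD(params, lr=0.01)

# Interleave the two loaders: sal batch, depth batch, sal batch, ...
for (x_sal, y_sal), (x_depth, y_depth) in zip(sal_loader, depth_loader):
    for task, x, y in [("sal", x_sal, y_sal), ("depth", x_depth, y_depth)]:
        optimizer.zero_grad()
        pred = head[task](encoder(x))      # shared encoder + task-specific head
        loss = criterion[task](pred, y)    # only the labelled task contributes
        loss.backward()
        optimizer.step()
```

Whether the updates alternate per batch like this or per epoch over whole datasets seems to be a design choice; the key point is that no label is ever fabricated for the missing task.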

yuan243212790 avatar Jun 06 '23 07:06 yuan243212790