Multi-Task-Learning-PyTorch
Cityscapes depth estimation
Could you please support with the following questions:
- In the survey paper (Revisiting Multi-Task Learning in the Deep Learning Era), it is mentioned that the Cityscapes depth maps were generated using SGM. Would it be possible to provide the code for this?
- Is it the depth map that is generated, or the disparity map?
- Cityscapes makes disparity maps available; are these used? If so, do the networks predict depth directly, or do they predict disparity which is then converted to depth?
Thank you.
Hello
I used the disparity maps provided by Cityscapes; you can download them from the official website. Note that the Cityscapes experiment is no longer covered in the updated (and published) version of our paper: https://ieeexplore.ieee.org/document/9336293. The updated version provides a more unified comparison between both architectures / optimization techniques.
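For anyone who lands here: the official Cityscapes README documents the disparity encoding (16-bit PNGs where a valid pixel p > 0 encodes d = (p - 1) / 256, and p == 0 marks an invalid measurement). Below is a minimal sketch of decoding these maps and, optionally, converting them to metric depth. This is not code from this repository, and the baseline / focal-length constants are nominal values from the Cityscapes camera calibration; the exact values should be read per-image from the matching camera JSON files.

```python
import numpy as np
from PIL import Image


def load_cityscapes_disparity(png_path):
    """Decode a Cityscapes disparity PNG into float disparity values.

    Per the official Cityscapes README: valid pixels p > 0 encode
    d = (p - 1) / 256, while p == 0 is an invalid measurement.
    """
    raw = np.asarray(Image.open(png_path), dtype=np.float32)
    disparity = np.where(raw > 0, (raw - 1.0) / 256.0, 0.0)
    valid = raw > 0
    return disparity, valid


def disparity_to_depth(disparity, valid, baseline=0.209313, focal=2262.52):
    """Convert disparity (pixels) to metric depth via depth = B * f / d.

    baseline (meters) and focal (pixels) are nominal calibration values;
    read the per-sequence camera JSON for exact numbers.
    """
    depth = np.zeros_like(disparity)
    mask = valid & (disparity > 0)
    depth[mask] = baseline * focal / disparity[mask]
    return depth
```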
Would it be possible to provide the cityscapes dataloader?
I am trying to reproduce the Cityscapes results and would like to check how the depth estimation task is set up. Specifically:
- Are the networks trained to predict disparity or depth?
- How were the values processed? How were the disparity values brought to the [0, 1] range? (See the sketch after this post for one possible setup.)
- Is the evaluation done on depth maps obtained using the Cityscapes baseline and focal length? Was any particular range of depth values ignored?
- Also, was the same eval_depth code used for Cityscapes as well?
Thank you.
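For reference, here is a minimal sketch of what a Cityscapes dataloader along these lines might look like. It is illustrative only, not the repository's actual loader: the directory layout is the standard Cityscapes one, and the `max_disparity` constant used to bring targets into [0, 1] is a hypothetical choice, not a value confirmed by the authors.

```python
import os

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset


class CityscapesDisparity(Dataset):
    """Illustrative sketch of a Cityscapes disparity dataset.

    Assumes the standard layout:
      leftImg8bit/{split}/{city}/*_leftImg8bit.png
      disparity/{split}/{city}/*_disparity.png
    Targets are normalized by a hypothetical max_disparity constant
    so they land in [0, 1]; this choice is an assumption.
    """

    def __init__(self, root, split="train", max_disparity=126.0):
        self.max_disparity = max_disparity
        img_dir = os.path.join(root, "leftImg8bit", split)
        self.samples = []
        for city in sorted(os.listdir(img_dir)):
            for name in sorted(os.listdir(os.path.join(img_dir, city))):
                # disparity files share the image stem, e.g.
                # *_leftImg8bit.png -> *_disparity.png
                disp_name = name.replace("leftImg8bit", "disparity")
                self.samples.append((
                    os.path.join(img_dir, city, name),
                    os.path.join(root, "disparity", split, city, disp_name),
                ))

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        img_path, disp_path = self.samples[idx]
        image = np.asarray(Image.open(img_path), dtype=np.float32) / 255.0
        raw = np.asarray(Image.open(disp_path), dtype=np.float32)
        # Official encoding: valid pixels p > 0 encode d = (p - 1) / 256.
        disparity = np.where(raw > 0, (raw - 1.0) / 256.0, 0.0)
        target = np.clip(disparity / self.max_disparity, 0.0, 1.0)
        return {
            "image": torch.from_numpy(image).permute(2, 0, 1),
            "disparity": torch.from_numpy(target).unsqueeze(0),
            # Mask of valid measurements, for masking the loss.
            "mask": torch.from_numpy((raw > 0).astype(np.float32)).unsqueeze(0),
        }
```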