Self-supervised-Monocular-Trained-Depth-Estimation-using-Self-attention-and-Discrete-Disparity-Volum

Reproduction of the CVPR 2020 paper "Self-supervised Monocular Trained Depth Estimation using Self-attention and Discrete Disparity Volume".

Issues (7)

Hello, thanks for your outstanding research! I have a question about the visualization of the attention map: the channel size of the context module's output is 512, but your paper indicated...

Thank you for this great reproduction! Could you release your trained model weights?

Thank you for this great work! How can I get the ResNet-101 pre-trained weights?

I want to train a model on my GPU, a GeForce 3060, which is only compatible with CUDA 11 and above. If I use the required CUDA...

Thanks for your great work. When I read your report submitted to the ML Reproducibility Challenge 2020, I found one result that differs greatly from the authors', and I...

The code doesn't run as expected. I ran `sudo apt-get install ninja-build` on Ubuntu 18.04, but I get the error below: ``` Traceback (most recent call last): File "/home/dell/Codes/mono2++/train.py", line...

Hi! Thanks for your excellent work. Will you release the pretrained model in the future? Recently, I trained my own model, ResNet-101 with dilated convolutions, using `self.encoder = resnets[num_layers](replace_stride_with_dilation=[False,...