Zeeshan Khan Suri
Would you please share your quantitative results for the ResNet18-backbone depth model with velocity supervision? Thank you.
https://github.com/TRI-ML/packnet-sfm/blob/2698f1fb27785275ef847f3dbbd550cf8fff1799/packnet_sfm/geometry/camera.py#L132-L138 How should I interpret the output of the `reconstruct` function, which lifts the depth map into 3D using the inverse intrinsics matrix? I see that it outputs a ray of size [Bx3xwxh]....
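For context, here is a minimal sketch of what such a back-projection typically computes — depth values scaling unit-depth rays obtained from the inverse intrinsics. This is an illustrative reimplementation, not PackNet-SfM's actual code; the function name and shapes are assumptions:

```python
import torch

def reconstruct(depth, K_inv):
    # depth: [B, 1, H, W] depth map; K_inv: [B, 3, 3] inverse intrinsics.
    B, _, H, W = depth.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=depth.dtype),
        torch.arange(W, dtype=depth.dtype),
        indexing="ij",
    )
    ones = torch.ones_like(xs)
    # Homogeneous pixel coordinates [B, 3, H*W]
    pix = torch.stack([xs, ys, ones], dim=0).view(1, 3, -1).expand(B, -1, -1)
    rays = K_inv @ pix                    # ray directions at unit depth
    points = depth.view(B, 1, -1) * rays  # scale each ray by its depth
    return points.view(B, 3, H, W)        # 3D points in the camera frame
```

Under this reading, the [Bx3xHxW] output holds, per pixel, the (X, Y, Z) coordinates of the back-projected 3D point in the camera frame.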
Error when trying to run training:
```
File "/dinov2/dinov2/layers/patch_embed.py", line 75, in forward
    x = self.proj(x)  # B C H W
File "/dinov2/dinov2/models/vision_transformer.py", line 211, in prepare_tokens_with_masks
    x = self.patch_embed(x)...
```
I evaluated the pretrained DINOv2 backbone with different decoder heads on the KITTI Eigen split in order to replicate the paper's numbers, but the results I get are much worse. Here's what I did....
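For reference, a minimal sketch of the standard KITTI Eigen-split depth metrics I computed (a hypothetical helper, not code from the repo; depth cap and mask thresholds are the usual conventions, assumed here):

```python
import torch

def eigen_depth_metrics(pred, gt, min_depth=1e-3, max_depth=80.0):
    # pred, gt: depth tensors of the same shape, in meters.
    # Mask out invalid ground truth, as is conventional on KITTI.
    mask = (gt > min_depth) & (gt < max_depth)
    pred = pred[mask].clamp(min_depth, max_depth)
    gt = gt[mask]
    thresh = torch.max(gt / pred, pred / gt)
    return {
        "abs_rel": torch.mean(torch.abs(pred - gt) / gt).item(),
        "rmse": torch.sqrt(torch.mean((pred - gt) ** 2)).item(),
        "a1": (thresh < 1.25).float().mean().item(),  # delta < 1.25 accuracy
    }
```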
Changed the `RandomOrder` transform to use `torch.randperm`, making it scriptable.
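The idea behind the change can be sketched as follows — `torch.randperm` produces the random ordering inside the graph, unlike Python's `random.shuffle`, which TorchScript cannot trace. This is a minimal eager-mode illustration, not the actual torchvision implementation:

```python
import torch

def random_order_apply(x, transforms):
    # Apply the given transforms in a random order. torch.randperm yields
    # a random permutation of indices as a tensor, which is TorchScript-
    # friendly, whereas random.shuffle mutates a Python list opaquely.
    order = torch.randperm(len(transforms))
    for i in order:
        x = transforms[i](x)
    return x
```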
Has anyone compared this library with [Pytorch3D](https://pytorch3d.readthedocs.io/en/latest/modules/transforms.html)? I'm wondering what makes lietorch better. The accompanying paper only compares with Pytorch, though.
Congrats on the remarkable work. The repo mentions the MIT License. Is the dataset also published under the same license?
Dear authors, great work. Please add a license. Please also check for and respect the licenses of the repos this one was derived from (pixelNeRF, monodepth2). Thanks.