Dmytro Mishkin
@YJonmo I believe you can fine-tune the depth version with your epipolar data.
I guess that batched inference stops at the same depth level for the whole batch, which is sometimes suboptimal.
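A toy sketch (pure NumPy, with hypothetical per-sample confidence values) of why stopping the whole batch at one depth level can be suboptimal: the batch runs until the least-confident sample is done, so easy samples pay for the hard ones.

```python
import numpy as np

# Hypothetical per-sample confidence after each depth level (rows = samples).
conf = np.array([
    [0.96, 0.99, 0.99, 0.99],  # easy sample: confident after level 0
    [0.40, 0.70, 0.85, 0.97],  # hard sample: needs all 4 levels
])
threshold = 0.95

# Per-sample early exit: each sample stops at its own first confident level.
per_sample_stop = (conf >= threshold).argmax(axis=1) + 1  # levels run: [1, 4]

# Batched early exit: the whole batch stops only once EVERY sample is confident.
batched_stop = (conf >= threshold).all(axis=0).argmax() + 1  # levels run: 4

print(per_sample_stop.sum(), batched_stop * conf.shape[0])  # 5 vs 8 level-passes
```

Per-sample stopping would run 5 level-passes here; batched stopping runs 8.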
@BoyceL you could probably train them very fast, but I would not expect ORB to be good enough here. Also, if you are resource-limited for training, check here, I have...
First, because the detections are not well spread. Run 500 ORB points vs 500 SuperPoint points and visualize them. Second, because the descriptor is much worse and less robust than SIFT.
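One hypothetical way to quantify "well spread" is grid coverage: split the image into cells and count the fraction of cells containing at least one keypoint. This sketch uses synthetic point sets as stand-ins (clustered points mimicking ORB piling up on texture, uniform points mimicking SuperPoint's spread), not real detector output:

```python
import numpy as np

def grid_coverage(kpts, img_size=(480, 640), grid=(8, 8)):
    """Fraction of grid cells containing at least one (x, y) keypoint."""
    h, w = img_size
    gy, gx = grid
    cy = np.clip((kpts[:, 1] / h * gy).astype(int), 0, gy - 1)
    cx = np.clip((kpts[:, 0] / w * gx).astype(int), 0, gx - 1)
    occupied = np.zeros(grid, dtype=bool)
    occupied[cy, cx] = True
    return occupied.mean()

rng = np.random.default_rng(0)
# Synthetic stand-in: 500 points clustered on one textured region (ORB-like).
clustered = rng.normal(loc=[320, 240], scale=40, size=(500, 2))
# Synthetic stand-in: 500 points spread over the image (SuperPoint-like).
spread = rng.uniform([0, 0], [640, 480], size=(500, 2))

print(grid_coverage(clustered), grid_coverage(spread))
```

With real detectors you would get `kpts` from e.g. `cv2.ORB_create(nfeatures=500)` and a SuperPoint checkpoint, then compare the two coverage numbers the same way.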
In fact, I quickly tried ORB training; it doesn't train with the default difficulty on homography pretraining, you have to make it easier at least.
@fettahyildizz if you don't have the resources for SIFT or GPU-SIFT, I don't see how you can have the resources for a transformer-based matcher.
Then train it. I have tried a couple of times, but both attempts failed, because it requires extensive hyper-parameter tuning compared to training DoG-HardNet, DeDoDe, or any other thing I...
>I searched a bit but found no one has trained ORB, so I was wondering why no one trained this before? Is it because of the hard work it requires, or it...
@udit7395 your version 2 is correct: >Ran SuperPoint on each image of the batch individually in a for loop and tried creating a dictionary What you also should do is reduce the threshold...
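A minimal sketch of that "version 2" pattern: loop over the batch, run the extractor per image, and collect variable-length results in a dictionary. The `extract_keypoints` function here is a hypothetical stand-in for a real SuperPoint forward pass, included only so the loop is runnable:

```python
import numpy as np

def extract_keypoints(img, score_threshold=0.5):
    """Stand-in for a SuperPoint forward pass on a single image:
    returns a variable number of (x, y) keypoints with their scores.
    A real model would be called here instead of sampling random points."""
    rng = np.random.default_rng(int(img.sum() * 1000) % 2**32)
    n = int(rng.integers(50, 200))
    kpts = rng.uniform(0, min(img.shape), size=(n, 2))
    scores = rng.uniform(0, 1, size=n)
    keep = scores >= score_threshold  # lowering the threshold keeps more points
    return kpts[keep], scores[keep]

batch = [np.random.rand(256, 256) for _ in range(4)]
results = {}
for i, img in enumerate(batch):
    kpts, scores = extract_keypoints(img)
    # Each image keeps its own (differently sized) keypoint set.
    results[i] = {"keypoints": kpts, "scores": scores}

print({k: v["keypoints"].shape for k, v in results.items()})
```

The per-image loop sidesteps the padding problem that a single batched call would have, since each image yields a different number of keypoints.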
@AliaksandrSiarohin I am not suggesting that the current kornia implementation would work; I think your results would hold. I just want to improve our library, especially given that backprop through classical...