Results: 11 comments by Jean

@muzi2045 How do you prepare the 16-beam lidar data? Do you convert it into a KITTI-like format?

@FrankEscobar Did you try that? Any results?

@r03ert0 May I ask what your mesh is? If it is a keyboard, will this method still work?

I have compared your papers (v1, v2, v3) carefully and found that, except for the SE-block, they are almost the same, e.g. the high-resolution block. In your code, we...

@sglvladi I wish you could add some demo files or tutorials to the README. Thanks.

@sglvladi Well, thanks a lot. Could you provide some supporting materials for your examples? For instance, for the JPDAF example, I hope you can list the papers you referred to,...

@sglvladi Hi, is your PhD thesis finished? Looking forward to reading it.

@Tangshitao A quick question: for both the LoFTR-lite and LoFTR models, how do the batch sizes used with QuadTree-A (ours, K = 8), QuadTree-B (ours, K = 8), and QuadTree-B∗ (ours, K = 16) compare to the variants without the QuadTree transform? On my side, with QuadTree-B (ours, K = 8) the batch size is actually smaller than before. Is that normal? One more question: the paper describes QuadTree-B∗ (ours, K = 16) as using a ViT-like transformer block. Could you elaborate on that?

1. I ran QuadTree-B∗ (ours, K = 16) on a 3090 with a batch size of 8, while the original LoFTR runs with a batch size of 10, so the batch size actually decreased. Is that normal? 2. Is the only implementation difference between QuadTree-B∗ and QuadTree-B the ViT-like part? Could you release the code? I would like to see the difference.

@Syencil @JasperKirk After `print(model)`, the output is:

```
)
(m): Sequential(
  (0): Bottleneck(
    (cv1): Conv(
      (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
      (act): SiLU()
...
```