Mu Hu
> Hi Assran, thank you for your great work! I wonder, with a longer pre-training schedule (800/1200/1600 epochs), how much of a performance advantage can we get over previous methods like...
I successfully reproduced the results on NYU but failed on ScanNet. I have not yet found the reason.
> Hello: I see in the paper that when OREPA is combined with RepVGG, OREPA is added directly on the conv_3x3 branch, rather than replacing all three branches (conv_3x3 / conv_1x1 / identity) with their OREPA forms. Is this because training with all three branches replaced by OREPA did not work well?
Yes, and that approach is also not very elegant.
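To make the layout being discussed concrete, here is a minimal PyTorch sketch of such a block, with OREPA on the 3x3 branch only. This is not the official code: the `orepa_3x3` argument is a hypothetical stand-in for the actual OREPA module from the repo.

```python
import torch
import torch.nn as nn

class RepVGGOREPABlock(nn.Module):
    """Illustrative sketch: a RepVGG-style block where only the 3x3 branch
    uses OREPA, while the 1x1 and identity branches keep their plain form.
    Assumes equal in/out channels and stride 1, so the BN identity branch is valid."""
    def __init__(self, channels: int, orepa_3x3: nn.Module):
        super().__init__()
        self.branch_3x3 = orepa_3x3                  # OREPA-parameterized 3x3 branch
        self.branch_1x1 = nn.Sequential(             # plain 1x1 conv + BN branch
            nn.Conv2d(channels, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.branch_id = nn.BatchNorm2d(channels)    # identity branch (BN only)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.branch_3x3(x) + self.branch_1x1(x) + self.branch_id(x))
```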
> Hello, I'm very confused about the accuracy of ResNet34. Specifically, I trained ResNet34 many times, but its accuracy is about 74.40. I found that this paper and RepVGG both...
> Hi, thanks for your great work! > > I tried to reproduce the visualization of branch-level similarity of OREPA blocks, but unexpected results emerged. > > Could you...
We have posted it in the supplementary materials. (On an A100 GPU.)
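For anyone else reproducing that figure, here is a minimal sketch of one way to compute branch-level similarity, assuming pairwise cosine similarity between flattened branch outputs; the paper's exact protocol may differ.

```python
import torch
import torch.nn.functional as F

def branch_similarity(branch_outputs: list[torch.Tensor]) -> torch.Tensor:
    """Pairwise cosine similarity between the branches of one block.
    `branch_outputs` holds same-shape feature maps, one per branch."""
    flat = torch.stack([b.flatten() for b in branch_outputs])  # (n_branches, n_features)
    flat = F.normalize(flat, dim=1)                            # unit-norm each branch
    return flat @ flat.T                                       # (n_branches, n_branches)
```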
> I wondered what camera-related parameters matter most when using the `vit` models and if there are any guidelines to set those parameters. What do you mean by camera-related parameters?...
You need to change nothing but the focal length, which should match that of your real camera.
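To illustrate why only the focal length matters, here is a minimal sketch of the rescaling, assuming Metric3D's canonical-camera idea that predicted depth scales linearly with focal length. The canonical focal value and the example focal length below are illustrative, not taken from the repo; check its config for the real values.

```python
import numpy as np

CANONICAL_FOCAL = 1000.0  # illustrative canonical focal length; check the repo config

def to_metric_depth(pred_depth: np.ndarray, real_focal: float,
                    canonical_focal: float = CANONICAL_FOCAL) -> np.ndarray:
    """Rescale depth predicted in the canonical camera space to metric depth
    for the real camera: predicted depth scales linearly with focal length."""
    return pred_depth * real_focal / canonical_focal

# Example with a hypothetical focal length of 721.5 px:
# metric = to_metric_depth(pred, real_focal=721.5)
```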
> I changed the intrinsics accordingly to match my custom image: https://github.com/YvanYin/Metric3D/blob/main/mono/utils/do_test.py#L258. Is this the only change in the code to get the metric depth on a custom...
About a week? This model looks small, but on an A100 it runs at roughly the same speed as the large one, and it was trained only on outdoor datasets, so its generalization is mediocre.