Dmytro Mishkin
Hi, check the example in this notebook: https://colab.research.google.com/drive/1wO0TX0iJSpQsqfYT8g_d7J0pYyMwhuNe?usp=sharing
Any updates on this?
```
>>>>>> Image: img1.jpg
Database does not exist.
Could not create the database.
Undefined function or variable 'digraph'.

Error in CidCache (line 24)
this.G = digraph;

Error in get_rgns (line...
```
OK, it looks like the problem is in my MATLAB version. I tried a newer one and got a different error :(

```
Invalid MEX-file '/mnt/home/mishkdmy/dev/single-view-autocalib/features/helpers/edgeSubPix/edgeSubPix.mexa64': /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.27'...
```
Yes, β and γ. In Caffe, BatchNorm is split into a normalization-only BatchNorm layer and a separate Scale layer holding the learnable affine parameters.
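A minimal PyTorch sketch of that split (my illustration, not the Caffe code itself): normalization with `affine=False`, followed by a hypothetical `AffineChannel` module playing the role of Caffe's Scale layer.

```python
import torch
import torch.nn as nn

class AffineChannel(nn.Module):
    """Learnable per-channel gamma/beta, analogous to Caffe's Scale layer."""
    def __init__(self, num_channels: int):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(num_channels))
        self.beta = nn.Parameter(torch.zeros(num_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W); broadcast gamma/beta over batch and spatial dims
        return x * self.gamma.view(1, -1, 1, 1) + self.beta.view(1, -1, 1, 1)

# Caffe-style split: normalization only, then the learnable affine part.
bn_split = nn.Sequential(
    nn.BatchNorm2d(64, affine=False),  # running-stats normalization, no gamma/beta
    AffineChannel(64),                 # gamma (scale) and beta (shift)
)
```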
> On Imagenet, @ducha-aiki found the opposite effect from the CIFAR results above. Putting batch normalization after the residual layer seems to improve results on Imagenet.

That is not correct,...
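For context, a minimal PyTorch sketch (mine, not code from the thread) of the two BN placements being compared: the original ResNet ordering with BN only inside the residual branch, versus BN applied after the residual addition.

```python
import torch
import torch.nn as nn

class BlockBNInside(nn.Module):
    """Original ResNet style: BN inside the branch, nothing after the addition."""
    def __init__(self, ch: int):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1, bias=False), nn.BatchNorm2d(ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1, bias=False), nn.BatchNorm2d(ch),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(x + self.branch(x))

class BlockBNAfterAdd(nn.Module):
    """Variant under discussion: BN applied after the residual addition."""
    def __init__(self, ch: int):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1, bias=False), nn.BatchNorm2d(ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1, bias=False),
        )
        self.post_bn = nn.BatchNorm2d(ch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.post_bn(x + self.branch(x)))
```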
Yes.
> commonly-held assumption that batch norm before ReLU is better than after.

I never understood this from the original paper, because the point of data whitening is to normalize the layer's input, and...
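A sketch of the two orderings in question (my illustration, assuming a plain conv layer feeding the next block):

```python
import torch.nn as nn

# BN before the nonlinearity: the *next* layer sees ReLU(BN(x)), so its input
# is rectified rather than whitened.
conv_bn_relu = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1, bias=False),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
)

# BN after the nonlinearity: the next layer's input is normalized directly,
# which matches the whitening-of-layer-input reading.
conv_relu_bn = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1, bias=False),
    nn.ReLU(inplace=True),
    nn.BatchNorm2d(64),
)
```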
@sarlinpe thank you! I am surprised by the ALIKED results, to be honest. Regarding DoG in kornia: while I see some value in it, and would like to finally...
One can go even faster, but then the final pretraining quality is 1pp lower, so I don't recommend it.