Tobias Ploetz
We also observed training instability (even without the N3 block) in the later stages of training on the St. Peters dataset. Since accuracy on the...
Hi @sundw2014, I will look into this shortly. For the time being, here are the training curves that we got on St. Peters: _Fig. 1: training loss on St. Peters..._
Hi @sundw2014, just a quick update on this issue. I ran some experiments and here is what I found: 1) Running the code on CUDA 9 + GTX 1080 or...
Hi @sundw2014, I think I found the culprit that causes the unstable training. The original implementation of the classification loss contains [this line](https://github.com/vcg-uvic/learned-correspondence-release/blob/82adffad3f6fa3ea52aba9919cef00955c6d1617/network.py#L186). `classif_losses = -tf.log(tf.nn.sigmoid(c * self.logits))` This results...
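A common numerically stable rewrite (a sketch of the usual fix, not necessarily the exact patch applied to the repository) uses the identity `-log(sigmoid(x)) == softplus(-x)`, which avoids `sigmoid` underflowing to 0 and `log` returning `-inf` when `c * logits` is a large negative number:

```python
import tensorflow as tf

def stable_classif_loss(logits, c):
    """Numerically stable version of -log(sigmoid(c * logits)).

    Identity: -log(sigmoid(x)) == softplus(-x) == log(1 + exp(-x)).
    softplus never hits log(0), so the loss stays finite even when
    c * logits is strongly negative.
    """
    return tf.nn.softplus(-c * logits)
```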
Hi @tqyunwuxin, currently we only support GPU computation for `N3Aggregation2D`, since im2col has no CPU implementation yet. It's probably not that hard to use some numpy functions to implement...
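For reference, a minimal numpy im2col sketch (a hypothetical helper, not part of the released code; the name, arguments, and output layout are assumptions) that mimics the patch extraction of `torch.nn.functional.unfold` on the CPU could look like this:

```python
import numpy as np

def im2col_cpu(x, k, stride=1, pad=0):
    """Extract k x k patches from x of shape (N, C, H, W).

    Returns an array of shape (N, C * k * k, L), where L is the number of
    sliding-window positions, matching the layout of torch.nn.functional.unfold.
    """
    n, c, h, w = x.shape
    x = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode="constant")
    out_h = (h + 2 * pad - k) // stride + 1
    out_w = (w + 2 * pad - k) // stride + 1
    cols = np.empty((n, c, k, k, out_h, out_w), dtype=x.dtype)
    for i in range(k):
        for j in range(k):
            # Every (i, j) offset picks one pixel per patch across all positions.
            cols[:, :, i, j, :, :] = x[:, :, i:i + stride * out_h:stride,
                                             j:j + stride * out_w:stride]
    return cols.reshape(n, c * k * k, out_h * out_w)
```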
Hi, yes, `results_poissongaussian_denoising/pretrained` is the model that reproduces the DND benchmark results. > I can't run evaluation on DND dataset on one 1080Ti even with TC. By the way, when...
Hi @LemonPi , I think this is a problem of symmetry. In a nutshell, our continuous relaxation of hard kNN selection is not good at breaking ties. If you take...
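As a toy illustration (not the repository's actual code), a softmax-based soft kNN weighting gives identical weights to candidates at identical distances, so tied neighbors get averaged rather than one of them being selected:

```python
import torch

# Two candidates at exactly the same distance receive exactly the same
# weight under a softmax relaxation, so the "selected" neighbor is their
# average instead of either one.
dists = torch.tensor([1.0, 1.0, 4.0])      # first two candidates are tied
temperature = 0.1
weights = torch.softmax(-dists / temperature, dim=0)
print(weights)                              # ~[0.5, 0.5, 0.0]
```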
Hi @LemonPi , there was an issue with numerical stability when computing `log(1 - exp(x))`. This should be fixed with the latest update. I modified your example a bit...
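For context, the standard numerically stable way to evaluate `log(1 - exp(x))` for `x <= 0` splits the domain at `-log 2` (a generic sketch of the well-known trick, not necessarily the exact code of the fix):

```python
import math
import torch

def log1mexp(x):
    """Numerically stable log(1 - exp(x)) for x <= 0.

    Near x = 0, exp(x) is close to 1 and 1 - exp(x) loses precision,
    so log(-expm1(x)) is used there; for very negative x, exp(x) is tiny
    and log1p(-exp(x)) is accurate.
    """
    threshold = -math.log(2.0)
    return torch.where(x > threshold,
                       torch.log(-torch.expm1(x)),
                       torch.log1p(-torch.exp(x)))
```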
I think this is related to PyTorch's implementation of log_softmax, which seemingly does not work correctly if the maximal value of the argument has a large absolute value and appears...
Hi @12dmodel, I just updated the code with a much more memory-efficient implementation of some operations that caused memory bottlenecks so far. Best, Tobias