Ruth Fong
Hi, I just started watching your repo, as I'm keen on using it myself. re: ignoring the background class, you can weight all the other classes except the background one...
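For example (a minimal sketch, not the repo's code — class 0 as background and `NUM_CLASSES` are assumptions; `ignore_index` would work too):

```python
import torch
import torch.nn as nn

NUM_CLASSES = 22  # hypothetical; index 0 assumed to be background

# Equal weight for every class except background, which gets zero weight,
# so background pixels contribute nothing to the loss.
weights = torch.ones(NUM_CLASSES)
weights[0] = 0.0
criterion = nn.CrossEntropyLoss(weight=weights)

# outputs: (batch, NUM_CLASSES, H, W) logits; targets: (batch, H, W) class indices
outputs = torch.randn(4, NUM_CLASSES, 16, 16)
targets = torch.randint(0, NUM_CLASSES, (4, 16, 16))
loss = criterion(outputs, targets)
```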
Hm, when looking at your segnet implementation, I noticed that you aren't saving the maxpool indices but instead concatenating the earlier encoding activations. Here's an example of using the maxpool...
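Roughly, in PyTorch (a minimal round-trip sketch, not the repo's code):

```python
import torch
import torch.nn as nn

pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)

x = torch.randn(1, 64, 224, 224)

# return_indices=True makes the pool also return the argmax locations.
pooled, indices = pool(x)           # both (1, 64, 112, 112)

# The decoder puts each value back at its recorded location and
# zero-fills the rest -- no concatenation of encoder activations needed.
unpooled = unpool(pooled, indices)  # (1, 64, 224, 224)
```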
For unpooling, they don't use a concatenation, they use a "pool mask" (aka indices from the corresponding encoding pool layer [https://github.com/alexgkendall/SegNet-Tutorial/blob/master/Models/segnet_train.prototxt#L1443]). It does a similar thing to concatenating but uses...
I quickly added batch normalization and max pooling indices here, so definitely still WIP code (https://github.com/ruthcfong/piwise/blob/master/piwise/network.py#L298). I haven't been able to get significantly better performance. Using just batch normalization seems...
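The encoder-side change is roughly in this spirit (a sketch only; the block structure and names here are mine, not the code at the link):

```python
import torch.nn as nn

class EncoderBlock(nn.Module):
    """Conv -> BatchNorm -> ReLU, then pool while keeping the indices."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)

    def forward(self, x):
        x = self.relu(self.bn(self.conv(x)))
        x, indices = self.pool(x)
        return x, indices  # indices go to the matching decoder unpool
```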
Yes, I ran 100+ epochs on both (without indices seems to work better). I'm not sure what you mean by "prior-prior", but I think I'm passing the right indices to...
From looking at the training section, a few ideas to try:
- Local Contrast Normalization for the input
- Using Median Frequency Balancing (https://arxiv.org/pdf/1411.4734v4.pdf) to determine class weights for loss:...
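For the class weights, the computation from that paper is: freq(c) = pixels of class c / total pixels of images where c appears, and weight(c) = median(freq) / freq(c). A rough NumPy sketch (function name and layout are mine):

```python
import numpy as np

def median_frequency_weights(label_masks, num_classes):
    """Median Frequency Balancing weights (Eigen & Fergus, 2015)."""
    class_pixels = np.zeros(num_classes)  # pixels of class c over the dataset
    image_pixels = np.zeros(num_classes)  # pixels of images that contain class c
    for mask in label_masks:              # each mask: 2D array of class indices
        counts = np.bincount(mask.ravel(), minlength=num_classes)
        class_pixels += counts
        image_pixels[counts > 0] += mask.size
    freq = np.divide(class_pixels, image_pixels,
                     out=np.zeros(num_classes), where=image_pixels > 0)
    med = np.median(freq[freq > 0])
    # weight(c) = median(freq) / freq(c); classes never seen get weight 0
    return np.divide(med, freq, out=np.zeros(num_classes), where=freq > 0)
```

The resulting vector can then be passed as the `weight` argument of the loss, as in the earlier snippet.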
I'm running into the same problem, but not just for odd-shaped output layers. Using PyTorch, `blob="features.3"` in vgg19 (the relu before the first pool layer) has an activation shape of `(224,...
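A quick way to verify shapes like this is a forward hook (a sketch; untrained weights are fine since only shapes matter):

```python
import torch
import torchvision.models as models

vgg = models.vgg19().eval()  # weights irrelevant for checking shapes

shapes = {}
def record(name):
    def hook(module, inputs, output):
        shapes[name] = tuple(output.shape)
    return hook

# features[3] is the second ReLU, just before the first max pool
vgg.features[3].register_forward_hook(record("features.3"))

with torch.no_grad():
    vgg(torch.randn(1, 3, 224, 224))

print(shapes["features.3"])  # (1, 64, 224, 224)
```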