Javier Ribera
You are setting alpha=0, which means that the activations of the MLP are ignored, as described in the docstring of ELMClassifier: https://github.com/dclambert/Python-ELM/blob/master/elm.py#L357 `activation = alpha*mlp_activation + (1-alpha)*rbf_width*rbf_activation ` With alpha=1,...
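For reference, the blending behaves like this (a minimal sketch of the formula quoted from the docstring; the function and the stand-in inputs are illustrative, not Python-ELM's internals):

```python
import numpy as np

# Sketch of the blending described in the ELMClassifier docstring.
# The stand-in inputs below are placeholders, not values computed by Python-ELM.
def blended_activation(mlp_activation, rbf_activation, rbf_width, alpha):
    # alpha=1 keeps only the MLP term; alpha=0 keeps only the RBF term.
    return alpha * mlp_activation + (1 - alpha) * rbf_width * rbf_activation

mlp_activation = np.tanh(np.random.randn(4, 8))   # stand-in MLP activations
rbf_activation = np.exp(-np.random.rand(4, 8))    # stand-in RBF activations
print(blended_activation(mlp_activation, rbf_activation, rbf_width=1.0, alpha=0.0))
```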
It seems like a pretty critical issue to explain why this implementation uses this particular model. Also, neither of those two papers seems to describe why the activations from both ELM-RBF...
This problem occurs because the linear layers are created during training with a size that depends on the size of the input training images (see Figure 3 of the [paper](http://openaccess.thecvf.com/content_CVPR_2019/html/Ribera_Locating_Objects_Without_Bounding_Boxes_CVPR_2019_paper.html)). If you then input...
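Here is a hypothetical illustration of the failure mode (not the repo's actual code): a fully-connected layer whose `in_features` is derived from the training image size will reject inputs of any other size.

```python
import torch
import torch.nn as nn

train_h, train_w = 256, 256
fc = nn.Linear(train_h * train_w, 1)        # created once, sized at training time

test_image = torch.randn(1, 1, 300, 300)    # a differently sized test image
flat = test_image.flatten(start_dim=1)      # 90000 features, but fc expects 65536
try:
    fc(flat)
except RuntimeError as e:
    print(e)                                # shape mismatch error
```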
Could you post how you fixed this issue? The code may be helpful for other people. Thank you,
None of the options you said you tried restricts the activation values to be below 1, so this sounds like a software bug. Please post your [unet_model.py](https://github.com/javiribera/locating-objects-without-bboxes/blob/master/object-locator/models/unet_model.py) file so that...
So your current problem is that, with the unet_model.py you posted above, the estimated count is always less than 1? Also, I'm going to need you...
I think your intuition is correct. In fact, I remember I once tried something similar and experimented with a GAP layer connected to the probability map. The difference is that...
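Roughly, the idea was something like this (a sketch for illustration only; the names and the final linear layer are assumptions, not the code I actually used):

```python
import torch
import torch.nn as nn

# Global average pooling (GAP) head attached to a probability map.
prob_map = torch.sigmoid(torch.randn(1, 1, 128, 128))  # stand-in probability map

gap = nn.AdaptiveAvgPool2d(1)                  # GAP: one scalar per channel
pooled = gap(prob_map).flatten(start_dim=1)    # shape (batch, 1)

# A linear layer can then rescale the pooled value into a count estimate.
regressor = nn.Linear(1, 1)
estimated_count = regressor(pooled)
print(estimated_count.shape)                   # torch.Size([1, 1])
```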
> Does the code now support different sizes of test images?

No
The "results" you are trying to reproduce are these two plots? You seem to have posted the same plot repeated 3 times, by the way. However, the take-away from the...
What do you use the two remaining output channels for? Maybe the training loss is bumpy because of whatever loss function you apply to those?