Somshubra Majumdar
I kind of guessed that the learning rates provided in the paper were too low to be of any use, which is why I switched to Adam with higher learning...
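A minimal sketch of what switching to Adam with a higher learning rate looks like in Keras. The model architecture and the exact learning rate value here are illustrative assumptions, not the ones used in the repo:

```python
import tensorflow as tf

# Toy model; the actual architecture from the repo is not shown here.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="softmax", input_shape=(32,)),
])

# Adam with a higher learning rate than the paper's suggested SGD schedule
# (1e-3 is Adam's usual default, assumed here for illustration).
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss="categorical_crossentropy",
)
```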
At the time of creation, I believe it was TF 1.6 or 1.8; I don't recall exactly. In either case, I currently use 1.12 and see no issues.
Yes, please send a PR.
This isn't a memory leak so much as an inefficient implementation. The weights and parameters are not what consumes the vast majority of the memory; the intermediate products...
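A back-of-envelope sketch of why the intermediate products, not the weights, dominate memory in attention-augmented convolutions: the attention logits scale as (H*W)^2 per head. The float32 element size and batch size of 1 are assumptions for illustration:

```python
def attention_logits_bytes(h, w, heads, bytes_per_el=4):
    """Rough memory (bytes) of the attention logits tensor for one image.

    Assumes float32 (4 bytes/element) and batch size 1; the logits have
    shape (heads, h*w, h*w), hence the quadratic term.
    """
    return heads * (h * w) ** 2 * bytes_per_el

# A 32x32 feature map with 8 heads already needs ~33.5 MB just for the
# logits of a single example, dwarfing the layer's weight memory.
mb = attention_logits_bytes(32, 32, 8) / 1e6
```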
Sequential expects Layers, whereas `augmented_conv` expects Tensors as input. You cannot call `augmented_conv(...)` with a Python tuple of the input shape. It requires a Keras Input layer or a Keras...
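A sketch of the distinction, using the functional API. Since `augmented_conv` itself isn't importable here, a plain Conv2D stands in for it; the point is that a tensor-in/tensor-out function must be called on a Keras tensor produced by an Input layer, never on a shape tuple:

```python
import tensorflow as tf

def augmented_conv_standin(x, filters=8):
    # Stand-in for the repo's augmented_conv: takes a Keras tensor,
    # returns a Keras tensor (so it cannot be placed in a Sequential).
    return tf.keras.layers.Conv2D(filters, 3, padding="same")(x)

ip = tf.keras.layers.Input(shape=(32, 32, 3))  # a Keras tensor, not a tuple
x = augmented_conv_standin(ip)                  # OK: tensor in, tensor out
model = tf.keras.Model(ip, x)

# augmented_conv_standin((32, 32, 3))  # would fail: a tuple is not a tensor
```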
If you have any ideas to make it clearer, do share and I'll incorporate it in the readme
OK, so that is a combination of several issues. First and foremost, when using a function, understand what its default values represent. You are using the default value of depth_k...
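An illustrative sketch of the point about defaults. The fractional default values and the resolution logic below are assumptions for demonstration only, not the repo's actual signature; the takeaway is to pass `depth_k`/`depth_v` explicitly rather than relying on defaults you haven't checked:

```python
def augmented_conv_sketch(filters, depth_k=0.25, depth_v=0.25):
    # Hypothetical behavior: a float default is treated as a fraction of
    # `filters`, while an int is taken as an absolute channel count.
    dk = int(filters * depth_k) if isinstance(depth_k, float) else depth_k
    dv = int(filters * depth_v) if isinstance(depth_v, float) else depth_v
    return dk, dv

# Relying on the defaults silently ties dk/dv to the filter count...
default_dk, default_dv = augmented_conv_sketch(40)

# ...so pass explicit values when that is not what you intend.
dk, dv = augmented_conv_sketch(40, depth_k=0.2, depth_v=0.2)
```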
If you could post the stack trace that would be more helpful.
Could you post a code snippet where this issue occurs?
The SE blocks are not a single layer, just a set of additional convolutions, so they will not have the name "se" attached to them.
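A minimal sketch of what this means: a squeeze-and-excitation path built from ordinary pooling/dense/multiply ops, so inspecting `model.layers` shows only those generic layer names. The channel sizes and reduction ratio are assumptions:

```python
import tensorflow as tf

ip = tf.keras.layers.Input(shape=(8, 8, 16))

# Squeeze-and-excitation as plain ops: global pool -> bottleneck ->
# sigmoid gate -> channel-wise rescale. No layer is named "se".
se = tf.keras.layers.GlobalAveragePooling2D()(ip)
se = tf.keras.layers.Dense(4, activation="relu")(se)       # reduction ratio 4
se = tf.keras.layers.Dense(16, activation="sigmoid")(se)
se = tf.keras.layers.Reshape((1, 1, 16))(se)
out = tf.keras.layers.Multiply()([ip, se])

model = tf.keras.Model(ip, out)
names = [layer.name for layer in model.layers]  # generic names only
```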