Roman Solovyev
@tonmoyborah Current code supports RetinaNet. Here is an example; I will add it to test_bench later:

```
def get_RetinaNet_model():
    from keras.models import load_model
    from keras.utils import custom_object_scope
    from keras_resnet.layers import...
```
I observed the same behaviour for RetinaNet: although many BN layers were removed, inference speed stayed the same. P.S. Added RetinaNet to test_bench.
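For context, the optimization being discussed is folding each BatchNormalization layer into the weights of the preceding convolution/dense layer, so the BN layer can be dropped. A minimal NumPy sketch of that arithmetic (illustrative names, not KITO's actual code):

```python
import numpy as np

def fold_bn(w, b, gamma, beta, mean, var, eps=1e-3):
    # BN computes: y = gamma * (z - mean) / sqrt(var + eps) + beta,
    # where z = x @ w + b. Folding it into the previous layer is a
    # per-output-channel rescale of the weights plus a bias shift.
    scale = gamma / np.sqrt(var + eps)
    return w * scale, (b - mean) * scale + beta
```

After the fold, the network computes identical outputs with one fewer layer, which is why the surprise here is that inference speed did not improve.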
You can add it in the function `get_copy_of_layer`: https://github.com/ZFTurbo/Keras-inference-time-optimizer/blob/master/kito.py#L72 There are already two examples there, for the `relu6` and `BilinearUpsampling` layers.
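The pattern is roughly the following (a simplified, Keras-free sketch of the idea, not the actual KITO code): custom layer classes that Keras can't rebuild on its own are special-cased by class name, and everything else is recreated from its own config:

```python
class DummyLayer:
    """Stand-in for a Keras layer exposing get_config/from_config."""
    def __init__(self, units):
        self.units = units

    def get_config(self):
        return {"units": self.units}

    @classmethod
    def from_config(cls, config):
        return cls(**config)

# Registry of custom layers by class name (in KITO these would be
# e.g. relu6 or BilinearUpsampling; DummyLayer is hypothetical).
CUSTOM_LAYERS = {"DummyLayer": DummyLayer}

def get_copy_of_layer(layer):
    # Look up a registered custom class first; otherwise rebuild the
    # layer from its config using its own class.
    cls = CUSTOM_LAYERS.get(type(layer).__name__, type(layer))
    return cls.from_config(layer.get_config())
```

Adding support for a new custom layer then amounts to adding one entry to the special-case branch.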
Yes, I used RetinaNet, but I wasn't able to create KITO code that supports it because its structure is too complex. I still plan to do it. When it'll be...
I was able to optimize RetinaNet, but for some reason it runs at exactly the same speed as the unoptimized version.
Can you point me to the TransposeConv layer in the Keras code or documentation?
Thanks, I will look into it.
I added support for the `Conv2DTranspose` layer. It's currently available in the repo; I'll make a PyPI release a little later.
It's hard to say what's wrong. I only tested the code on Python 3.5 with the TensorFlow backend. Are you sure the problem is with the Callback? As I can see from the Error...
It's not really a bug; KITO just uses the name of the second layer. Looks like I did it on purpose, since I found a related comment in the code )

```
# We use...
```
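To illustrate the naming behavior (a toy sketch with made-up dicts, not KITO internals): when a Conv+BN pair is fused, the fused layer keeps the second (BN) layer's name, so any later layer that referenced the BN output by name still resolves:

```python
def fuse_conv_bn(conv, bn):
    # The fused layer inherits the *second* layer's (the BN's) name,
    # because downstream layers in the graph were connected to the BN
    # output; keeping its name preserves those connections.
    return {"type": "Conv2D", "name": bn["name"]}
```

So after optimization, layer lookups by the original convolution's name will fail, while lookups by the BN layer's name keep working.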