keras-io
RetinaNet with MobileNet backbone
@srihari-humbarwadi
Can we adapt RetinaNet to a different backbone, such as MobileNet?
@siriusmehta Yeah, you can add any backbone you wish; the only thing you need to take care of is the scales of the feature maps that go into the FPN.
Thanks @srihari-humbarwadi
I tried replacing the ResNet50 backbone with the code below (the default pretrained MobileNetV2 download is for 224x224 input):

```python
def get_backbone():
    """Builds MobileNetV2 with pre-trained imagenet weights"""
    backbone = keras.applications.MobileNetV2(
        include_top=False, input_shape=[None, None, 3]
    )
    c3_output, c4_output, c5_output = [
        backbone.get_layer(layer_name).output
        for layer_name in ["block_12_add", "block_14_add", "out_relu"]
    ]
    return keras.Model(
        inputs=[backbone.inputs], outputs=[c3_output, c4_output, c5_output]
    )
```
and I get the error below:
```
InvalidArgumentError                      Traceback (most recent call last)
8 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     58     ctx.ensure_initialized()
     59     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
---> 60                                         inputs, attrs, num_outputs)
     61   except core._NotOkStatusException as e:
     62     if name is not None:

InvalidArgumentError: Incompatible shapes: [2,24,32,256] vs. [2,48,64,256]
	 [[node gradient_tape/RetinaNet/FeaturePyramid/add/BroadcastGradientArgs (defined at

Function call stack:
train_function
```
I didn't run the code with your changes, but looking at the error message, it's likely that you are feeding the wrong feature layers from the backbone. IIRC `C3`, `C4` and `C5` should have strides /8, /16 and /32; just pick the layers accordingly.
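For reference, here is one way to list the candidate layers at each scale (my own sketch, not from the example; the fixed 224x224 input just makes the shapes concrete, and `weights=None` skips the download since the shapes don't depend on the weights):

```python
from tensorflow import keras

# Build MobileNetV2 with a concrete input size so every layer has a static shape.
# weights=None: we only inspect shapes, so no need to download ImageNet weights.
backbone = keras.applications.MobileNetV2(
    include_top=False, weights=None, input_shape=(224, 224, 3)
)

# A layer's stride is the input size divided by its feature-map size.
# Print every layer sitting at stride /8, /16 or /32 -- candidates for C3/C4/C5.
for layer in backbone.layers:
    shape = layer.output.shape
    if len(shape) == 4 and shape[1] and 224 % int(shape[1]) == 0:
        stride = 224 // int(shape[1])
        if stride in (8, 16, 32):
            print(f"{layer.name}: {shape[1]}x{shape[2]}, stride /{stride}")
```

Running this shows, for example, that `block_12_add` sits at stride /16 rather than /8, which is consistent with the shape mismatch in the FPN add above.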
Hi @srihari-humbarwadi
Thanks for your reply.
I am a bit new and still in the exploring and learning stage.
Can you please point me to some useful links where I can read and understand more about this:

> IIRC C3, C4 and C5 should have strides /8, /16 and /32

How do the layer strides work out to /8, /16 and /32?
Would appreciate your help!
Thanks
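The arithmetic behind those numbers is straightforward: each stride-2 convolution (or pooling layer) halves the spatial resolution, so after n downsampling steps the overall stride is 2**n. A minimal sketch (plain Python, no framework needed):

```python
# Each stride-2 layer halves the feature map; after n such layers the
# cumulative stride is 2**n, so the map is input_size / 2**n on each side.
def overall_stride(num_downsamples):
    return 2 ** num_downsamples

def feature_map_size(input_size, num_downsamples):
    return input_size // overall_stride(num_downsamples)

# C3, C4 and C5 come after 3, 4 and 5 downsampling steps respectively:
for level, n in [("C3", 3), ("C4", 4), ("C5", 5)]:
    print(f"{level}: stride /{overall_stride(n)}, "
          f"{feature_map_size(224, n)}x{feature_map_size(224, n)} for a 224x224 input")
# -> C3: /8 (28x28), C4: /16 (14x14), C5: /32 (7x7)
```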
Hi @siriusmehta, below is the MobileNetV2 version:

```python
base_model = MobileNetV2(include_top=False, input_shape=INPUT_SHAPE)
base_model.trainable = training
c3_output, c4_output, c5_output = [
    base_model.get_layer(layer_name).output
    for layer_name in ["block_6_expand_relu", "block_13_expand_relu", "out_relu"]
]
```
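A quick sanity check of those layer names (my own sketch; `weights=None` only to skip the download, the shapes are identical with `"imagenet"`): with a 224x224 input, the three layers land exactly at strides /8, /16 and /32.

```python
from tensorflow import keras

base_model = keras.applications.MobileNetV2(
    include_top=False, weights=None, input_shape=(224, 224, 3)
)
c3_output, c4_output, c5_output = [
    base_model.get_layer(name).output
    for name in ["block_6_expand_relu", "block_13_expand_relu", "out_relu"]
]

# Expected: 28x28 (/8), 14x14 (/16) and 7x7 (/32) feature maps.
for name, out in [("C3", c3_output), ("C4", c4_output), ("C5", c5_output)]:
    print(name, tuple(out.shape[1:3]))
```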
Is it possible to use ImageNet weights?
Hey folks, thanks for the discussion. Because this is not a bug, it would be better to discuss it in one of the forums below:
I'll go ahead and close this issue. Thanks!