Question: How to make only some of the top layers trainable in TFHub?
### What happened?
I am fine-tuning with TF 2.0 using Keras models, where I train only the top 10 layers of the base model. I can easily set which layers are trainable and which are not using the following code:
I want to do the same thing with TF Hub, but I don't know how to make only those top layers trainable, since the base model loaded via `hub.KerasLayer` has no `layers` attribute.
How can I achieve the same effect with TF Hub?
```python
import tensorflow as tf
from tensorflow.keras import layers

num_top_layer_in_base_model_to_train = 10
img_size = 224  # Example input size (matches the TF Hub model below).

base_model = tf.keras.applications.ResNet50(
    weights="imagenet",  # Load weights pre-trained on ImageNet.
    input_shape=(img_size, img_size, 3),
    include_top=False,   # Do not include the ImageNet classifier at the top.
)

# Unfreeze the whole base model first, then selectively re-freeze below.
base_model.trainable = True

# Take a look at how many layers are in the base model.
print("Number of layers in the base model: ", len(base_model.layers))

# Freeze all layers except the top `num_top_layer_in_base_model_to_train`.
for layer in base_model.layers[:-num_top_layer_in_base_model_to_train]:
    layer.trainable = False

# Keep the BatchNormalization layers in the top block frozen as well.
for layer in base_model.layers[-num_top_layer_in_base_model_to_train:]:
    if isinstance(layer, layers.BatchNormalization):
        layer.trainable = False
```
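As a quick sanity check (a minimal sketch, not from the original post), the result of the two loops above can be verified by counting what is left trainable:

```python
# The count should be at most num_top_layer_in_base_model_to_train,
# minus any BatchNormalization layers re-frozen in the top block.
trainable_layers = [layer.name for layer in base_model.layers if layer.trainable]
print("Trainable layers:", len(trainable_layers))
print("Trainable variables:", len(base_model.trainable_variables))
```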
### Relevant code
```python
import tensorflow_hub as hub

base_model = hub.KerasLayer(
    "https://tfhub.dev/google/imagenet/resnet_v1_50/classification/5",
    input_shape=(224, 224, 3),
    trainable=True,
    arguments=dict(batch_norm_momentum=0.997))

# Attempt to freeze all layers except the top ones -- this fails because
# hub.KerasLayer has no `layers` attribute (see the log output below).
for layer in base_model.layers[:-num_top_layer_in_base_model_to_train]:
    layer.trainable = False
```
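For illustration (not from the original report): `hub.KerasLayer` inherits the usual `tf.keras.layers.Layer` attributes, so its variables are reachable as a flat list even though there is no sub-layer list to slice. A minimal sketch, assuming the `base_model` defined above:

```python
# The layer's variables are accessible as one flat collection...
print("Trainable weights:", len(base_model.trainable_weights))
# ...but there is no `layers` attribute to iterate over sub-layers.
print("Has `layers` attribute:", hasattr(base_model, "layers"))  # False
```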
### Relevant log output
```shell
----> 7 for layer in base_model.layers[:-num_top_layer_in_base_model_to_train]:
      8     layer.trainable = False

AttributeError: 'KerasLayer' object has no attribute 'layers'
```
### tensorflow_hub Version
0.12.0 (latest stable release)

### TensorFlow Version
2.8 (latest stable release)

### Other libraries
No response

### Python Version
3.x

### OS
Linux
Any insight?
@pindinagesh, do you have any insight into this?
Hi @ifahim,
Sorry for the delayed response; I am still trying to find some pointers on this issue and will update as soon as I have them.
`KerasLayer` returns a single layer that cannot be split into sub-layers. Instead, I'd recommend using the accompanying feature vector model (https://tfhub.dev/google/imagenet/resnet_v1_50/feature_vector/5), which does not contain the top classification layer. That way, you can add a new layer on top of the feature vector and make only the new layer trainable:
```python
import tensorflow as tf
import tensorflow_hub as hub

m = tf.keras.Sequential([
    hub.KerasLayer("https://tfhub.dev/google/imagenet/resnet_v1_50/feature_vector/5",
                   trainable=False),  # Freeze the pre-trained feature extractor.
    tf.keras.layers.Dense(num_classes, activation='softmax'),  # Only this layer trains.
])
m.build([None, 224, 224, 3])  # Batch input shape.
```
Please see https://tfhub.dev/google/imagenet/resnet_v1_50/feature_vector/5 for the full documentation and https://www.tensorflow.org/hub/tutorials/tf2_image_retraining for an example.
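For completeness, a minimal end-to-end usage sketch in the spirit of the linked retraining tutorial; `train_ds` and the hyperparameters are placeholders, not taken from this thread:

```python
# Placeholder: `train_ds` is assumed to yield batches of
# (image, one_hot_label) with images of shape (224, 224, 3) scaled to [0, 1].
m.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.005, momentum=0.9),
    loss=tf.keras.losses.CategoricalCrossentropy(),
    metrics=['accuracy'])
m.fit(train_ds, epochs=5)
```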