Last active
March 21, 2019 15:23
Fine-tuning the Inception V3 network by un-freezing the top layers of the base model and re-compiling with a lower learning rate.
# Un-freeze the top layers of the model
base_model.trainable = True

# Let's take a look to see how many layers are in the base model
print("Number of layers in the base model: ", len(base_model.layers))

# Fine-tune from this layer onwards
fine_tune_at = 249

# Freeze all the layers before the `fine_tune_at` layer
for layer in base_model.layers[:fine_tune_at]:
    layer.trainable = False

# Re-compile the model with a much lower learning rate so the
# pre-trained weights are only adjusted slightly
inception_model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.0001),
                        loss='sparse_categorical_crossentropy',
                        metrics=['accuracy'])

# `train`, `validation`, and the epoch/step counts are defined
# earlier in the training pipeline
history_fine = inception_model.fit(train.repeat(),
                                   steps_per_epoch=steps_per_epoch,
                                   epochs=finetune_epochs,
                                   initial_epoch=initial_epoch,
                                   validation_data=validation.repeat(),
                                   validation_steps=validation_steps)
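The snippet above assumes `base_model` and `inception_model` were built in an earlier step. A minimal sketch of what that setup might look like, assuming the standard `tf.keras.applications.InceptionV3` base with a small classification head on top (the 5-class head and `weights=None` are illustrative assumptions; the gist presumably loads `weights='imagenet'`):

```python
import tensorflow as tf

# Hypothetical setup mirroring the gist's variable names.
# weights=None avoids a weight download here; in practice you would
# typically pass weights='imagenet' for transfer learning.
base_model = tf.keras.applications.InceptionV3(
    include_top=False, weights=None, input_shape=(299, 299, 3))

# Frozen during the initial feature-extraction phase; the fine-tuning
# snippet later flips this to True and re-freezes the bottom layers.
base_model.trainable = False

inception_model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation='softmax'),  # 5 classes is an assumption
])
```

With this structure, `fine_tune_at = 249` freezes the bulk of the Inception blocks and leaves only the topmost blocks trainable, which is why the re-compile uses a much smaller learning rate: large updates would quickly destroy the pre-trained features.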