@hamelsmu · Created May 26, 2018
from keras.layers import Input, Dense, BatchNormalization
from keras.models import Model

# Extract the encoder from the trained seq2seq model
# (extract_encoder_model and seq2seq_Model are defined earlier).
encoder_model = extract_encoder_model(seq2seq_Model)

# Freeze the encoder so only the new layers are trained during fine-tuning.
for layer in encoder_model.layers:
    layer.trainable = False

#### Build Model Architecture For Fine-Tuning ####
encoder_inputs = Input(shape=(doc_length,), name='Encoder-Input')
enc_out = encoder_model(encoder_inputs)

# First dense layer with batch normalization
x = Dense(500, activation='relu')(enc_out)
x = BatchNormalization(name='bn-1')(x)

# Linear output layer that projects into the 500-dim embedding space
out = Dense(500)(x)

# Keras model object
code2emb_model = Model([encoder_inputs], out)
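A minimal usage sketch of how this model might be fine-tuned and applied, assuming the goal is to map tokenized code to pre-computed 500-dim embedding targets; the variable names (train_code_vectors, train_target_embeddings, test_code_vectors) and the cosine-proximity loss are illustrative assumptions, not part of the gist:

# Hypothetical fine-tuning setup: the frozen encoder's output is adapted
# to an existing embedding space via the two new dense layers.
code2emb_model.compile(optimizer='adam', loss='cosine_proximity')
code2emb_model.fit(train_code_vectors, train_target_embeddings,
                   batch_size=256, epochs=5, validation_split=0.1)

# After fine-tuning, embed new code snippets into the shared space.
code_embeddings = code2emb_model.predict(test_code_vectors)

Because the encoder's layers were frozen before the Model was built, only the Dense(500, activation='relu'), BatchNormalization, and final Dense(500) layers receive gradient updates here.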