# Encoder-decoder model for text summarization in Keras.
from tensorflow.keras.layers import Input, Embedding, LSTM, RepeatVector, TimeDistributed, Dense
from tensorflow.keras.models import Model

vocab_size = ...      # size of the shared source/summary vocabulary
src_txt_length = ...  # length (in tokens) of the source text sequences
sum_txt_length = ...  # length (in tokens) of the target summary sequences

# encoder input model: embed the source tokens and encode them into a single fixed-size vector
inputs = Input(shape=(src_txt_length,))
encoder1 = Embedding(vocab_size, 128)(inputs)
encoder2 = LSTM(128)(encoder1)
encoder3 = RepeatVector(sum_txt_length)(encoder2)  # repeat the encoding once per summary time step

# decoder output model: unroll the repeated encoding into a summary-length sequence
decoder1 = LSTM(128, return_sequences=True)(encoder3)
outputs = TimeDistributed(Dense(vocab_size, activation='softmax'))(decoder1)

# tie it together
model = Model(inputs=inputs, outputs=outputs)
model.compile(loss='categorical_crossentropy', optimizer='adam')
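
# Usage sketch: X_src and y_sum are hypothetical names for illustration, assuming
# X_src is an integer-encoded array of shape (n_samples, src_txt_length) and
# y_sum is a one-hot-encoded array of shape (n_samples, sum_txt_length, vocab_size),
# matching the categorical_crossentropy loss above.
model.summary()
# model.fit(X_src, y_sum, epochs=10, batch_size=32)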