Train on 1584000 samples, validate on 216000 samples
Epoch 1/7
1584000/1584000 [================] - 411s 259us/step - loss: 2.6989 - val_loss: 2.3833
Epoch 2/7
1584000/1584000 [================] - 265s 167us/step - loss: 2.2941 - val_loss: 2.3035
Epoch 3/7
1584000/1584000 [================] - 264s 167us/step - loss: 2.2085 - val_loss: 2.2740
Epoch 4/7
1584000/1584000 [================] - 265s 167us/step - loss: 2.1583 - val_loss: 2.2611
Epoch 5/7
1584000/1584000 [================] - 267s 168us/step - loss: 2.1226 - val_loss: 2.2555
Epoch 6/7
1584000/1584000 [================] - 265s 167us/step - loss: 2.0947 - val_loss: 2.2521
Epoch 7/7
1584000/1584000 [================] - 264s 167us/step - loss: 2.0718 - val_loss: 2.2563
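Validation loss improves steadily through epoch 6 (2.2521) but ticks up at epoch 7 (2.2563), suggesting the model is starting to overfit by the final epoch. Because the ModelCheckpoint callback below is configured with save_best_only=True, the weights retained on disk are those from epoch 6.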
import numpy as np
from keras.callbacks import CSVLogger, ModelCheckpoint

# set up callbacks for model logging
script_name_base = 'tutorial_seq2seq'
csv_logger = CSVLogger('{:}.log'.format(script_name_base))
model_checkpoint = ModelCheckpoint('{:}.epoch{{epoch:02d}}-val{{val_loss:.5f}}.hdf5'.format(script_name_base),
                                   save_best_only=True)

# pass arguments to model.fit
batch_size = 1200
epochs = 7
history = seq2seq_Model.fit([encoder_input_data, decoder_input_data], np.expand_dims(decoder_target_data, -1),
                            batch_size=batch_size,
                            epochs=epochs,
                            validation_split=0.12, callbacks=[csv_logger, model_checkpoint])
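With validation_split=0.12, Keras holds out the last 12% of the 1,800,000 input samples for validation (216,000) and trains on the remaining 1,584,000, which matches the counts in the log above.

After training, the best checkpoint written by ModelCheckpoint can be reloaded for inference or further training. Below is a minimal sketch that reuses encoder_input_data and decoder_input_data from the snippet above; the checkpoint file name is an assumption based on the filepath pattern and the epoch-6 val_loss, so check the actual name in your working directory:

from keras.models import load_model

# Restore the weights from the best epoch (lowest val_loss). With
# save_best_only=True, only that checkpoint is kept on disk.
# NOTE: the file name is illustrative -- use the actual name produced
# by the filepath pattern above.
best_model = load_model('tutorial_seq2seq.epoch06-val2.25210.hdf5')

# The restored model accepts the same inputs as seq2seq_Model, e.g. for
# teacher-forced predictions:
preds = best_model.predict([encoder_input_data, decoder_input_data])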