#!/usr/bin/env python3
"""Usage: conv_checkpoints_to_model.py MODFILE

Takes a trained model file with multiple saved checkpoints and converts these
checkpoints into standalone models.  This allows the different checkpoints to
be used, e.g., as parts of a model ensemble.

This script will:
  - Analyze MODFILE to find all saved model components
--- theano/sandbox/cuda/opt.py	2017-05-31 23:26:09.972668647 +0200
+++ theano/sandbox/cuda/opt_patched.py	2017-06-01 00:49:43.818626738 +0200
@@ -38,10 +38,12 @@
     GpuElemwise, GpuDimShuffle, GpuReshape, GpuCAReduce,
     gpu_flatten,
     GpuSubtensor, GpuAdvancedSubtensor1,
-    GpuAdvancedIncSubtensor1, GpuAdvancedIncSubtensor1_dev20,
+    GpuAdvancedIncSubtensor1,
     GpuIncSubtensor, gpu_alloc, GpuAlloc, gpu_shape, GpuSplit, GpuAllocEmpty)
 from theano.sandbox.cuda.opt_util import pad_dims, unpad_dims
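A unified diff like the one above is applied with `patch`. The following is a self-contained demonstration of that workflow on a throwaway file (the real diff targets `theano/sandbox/cuda/opt.py` in a Theano source checkout; the file names here are only for illustration):

```shell
# Demo of the patch workflow on a tiny stand-in for opt.py.
mkdir -p demo && cd demo
printf 'GpuSubtensor\nGpuAdvancedIncSubtensor1_dev20\n' > opt.py
cp opt.py opt_patched.py
sed -i '/dev20/d' opt_patched.py            # simulate the edit in the diff
diff -u opt.py opt_patched.py > opt.patch || true   # diff exits 1 when files differ
patch opt.py opt.patch                      # apply the hunk back to opt.py
! grep -q dev20 opt.py                      # the _dev20 line is now gone
```

For the real patch, `patch -p0 < opt.patch` from the directory containing `theano/` keeps the paths exactly as given in the diff headers; `--dry-run` first confirms the hunk still applies to your Theano version.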
#!/usr/bin/python3
# -*- coding: utf-8 -*-

import argparse
import bibtexparser
from collections import Counter
import matplotlib.pyplot as plt
import seaborn as sns
import sys
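These imports suggest a script that parses a BibTeX file and plots statistics over its entries. The aggregation step can be sketched with `Counter` alone; the entry dicts below mimic the `entries` list that a `bibtexparser` database object exposes (the field names follow BibTeX conventions, but the surrounding script's actual logic is an assumption, since only its imports are visible):

```python
from collections import Counter

# Entry dicts shaped like bibtexparser's database `entries` output.
entries = [
    {"ENTRYTYPE": "article",       "year": "2015"},
    {"ENTRYTYPE": "inproceedings", "year": "2016"},
    {"ENTRYTYPE": "inproceedings", "year": "2016"},
]

# Tally publication years and entry types for plotting.
years = Counter(e.get("year", "unknown") for e in entries)
types = Counter(e["ENTRYTYPE"] for e in entries)

print(years.most_common())   # frequency table, ready for a bar chart
print(types.most_common())
```

From there, a seaborn bar plot over `list(years.keys())` and `list(years.values())` would produce the kind of figure the matplotlib/seaborn imports point to.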
class AttentionLSTM(LSTM):
    """LSTM with attention mechanism

    This is an LSTM incorporating an attention mechanism into its hidden
    states.  Currently, the context vector calculated from the attended
    vector is fed into the model's internal states, closely following the
    model by Xu et al. (2016, Sec. 3.1.2), using a soft attention model
    following Bahdanau et al. (2014).

    The layer expects two inputs instead of the usual one:
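The Bahdanau-style soft attention that the docstring refers to can be sketched in plain Python, independent of Keras. The scalar additive scorer and toy dimensions below are illustrative assumptions, not this layer's actual code:

```python
import math


def soft_attention(query, annotations, W=1.0, U=1.0):
    """Additive (Bahdanau-style) attention over scalar annotations.

    Scores each annotation h_i against the query, softmax-normalizes the
    scores into weights alpha_i, and returns the weighted sum as the
    context value, plus the weights themselves.
    """
    # e_i = tanh(W*query + U*h_i): a scalar toy version of the MLP scorer
    scores = [math.tanh(W * query + U * h) for h in annotations]

    # softmax over the scores gives the attention weights alpha_i
    m = max(scores)                       # subtract max for stability
    exp = [math.exp(s - m) for s in scores]
    total = sum(exp)
    alphas = [e / total for e in exp]

    # context = sum_i alpha_i * h_i; in AttentionLSTM this context vector
    # would be fed into the recurrent internal states
    context = sum(a * h for a, h in zip(alphas, annotations))
    return context, alphas
```

The weights always sum to one, so the context is a convex combination of the attended annotations.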
# Source:
# https://github.com/farizrahman4u/seq2seq/blob/master/seq2seq/layers/state_transfer_lstm.py

from keras import backend as K
from keras.layers.recurrent import LSTM


class StateTransferLSTM(LSTM):
    """LSTM with the ability to transfer its hidden state.

    This layer behaves just like an LSTM, except that it can transfer (or
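The state-transfer idea, seeding one recurrent layer with another's final hidden state as in encoder-decoder models, can be sketched with a minimal pure-Python RNN. The one-unit cell below is an illustrative assumption, not the Keras layer's implementation:

```python
import math


class TinyRNN:
    """A one-unit tanh RNN whose hidden state can be read out and seeded."""

    def __init__(self, w_in=0.5, w_rec=0.8):
        self.w_in, self.w_rec = w_in, w_rec
        self.h = 0.0                      # hidden state, zero by default

    def run(self, xs):
        for x in xs:
            self.h = math.tanh(self.w_in * x + self.w_rec * self.h)
        return self.h


# Encoder-decoder with state transfer: the decoder starts from the
# encoder's final hidden state instead of from zeros.
encoder, decoder = TinyRNN(), TinyRNN()
encoder.run([1.0, 0.5, -0.2])
decoder.h = encoder.h                     # the "state transfer" step
out = decoder.run([0.0, 0.0])
```

Without the transfer step the decoder would process the same inputs from a zero state and follow a different trajectory, which is exactly what seeding the state is meant to avoid.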