Last active
April 1, 2018 19:22
LSTM RNN tflearn example
import tflearn

# Network building
net = tflearn.input_data([None, 100])
net = tflearn.embedding(net, input_dim=10000, output_dim=128)
net = tflearn.lstm(net, 128, dropout=0.8)
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='adam', learning_rate=0.001,
                         loss='categorical_crossentropy')

# Training
# trainX/testX: integer word-id sequences padded to length 100
# trainY/testY: one-hot class labels, shape (n_samples, 2)
model = tflearn.DNN(net)
model.fit(trainX, trainY, validation_set=(testX, testY), show_metric=True,
          batch_size=32)
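The `fit` call above assumes trainX holds integer word-id sequences padded to a fixed length of 100 and trainY holds one-hot labels. A minimal numpy sketch of that preprocessing (the helper names and toy reviews below are illustrative stand-ins, not part of the gist; tflearn also ships its own padding and one-hot utilities):

```python
import numpy as np

def pad_sequence(seq, maxlen=100, value=0):
    """Left-pad (or truncate) a token-id sequence to a fixed length."""
    seq = list(seq)[:maxlen]
    return [value] * (maxlen - len(seq)) + seq

def to_categorical(labels, n_classes=2):
    """One-hot encode integer class labels."""
    out = np.zeros((len(labels), n_classes))
    out[np.arange(len(labels)), labels] = 1
    return out

# hypothetical toy data: two reviews as word-id lists, with 0/1 labels
raw_X = [[12, 7, 256, 3], [99, 4]]
raw_y = [1, 0]

trainX = np.array([pad_sequence(s) for s in raw_X])  # shape (2, 100)
trainY = to_categorical(raw_y)                       # shape (2, 2)
```

Every row of trainX is then exactly 100 ids wide, matching `input_data([None, 100])`, and every row of trainY is a two-element one-hot vector, matching the 2-unit softmax layer.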
ugik, can you explain how this NN architecture processes data? Does it feed the input vector one element at a time, 100 times, or pass the whole vector to the embedding layer at once?
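On the question of how the data flows: the whole padded 100-id vector is passed to the embedding layer at once; the embedding maps each of the 100 ids to a 128-dim vector, and the LSTM then iterates over those 100 embeddings internally, one timestep at a time. A rough numpy sketch of that flow (random stand-in weights, and a plain tanh recurrence standing in for the real LSTM gates, so this is illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, embed_dim, hidden = 10000, 128, 128

# embedding table: one 128-dim vector per word id (random stand-in weights)
E = rng.normal(size=(vocab, embed_dim))

# a single padded review: 100 word ids, fed in as one vector
token_ids = rng.integers(0, vocab, size=100)

# step 1: the embedding looks up all 100 ids at once -> shape (100, 128)
embedded = E[token_ids]

# step 2: the recurrent layer consumes the 100 embeddings one timestep
# at a time (simplified cell; the real LSTM adds input/forget/output gates)
Wx = rng.normal(size=(embed_dim, hidden)) * 0.01
Wh = rng.normal(size=(hidden, hidden)) * 0.01
h = np.zeros(hidden)
for x_t in embedded:  # 100 internal timesteps
    h = np.tanh(x_t @ Wx + h @ Wh)

# the final hidden state h (128 values) feeds the softmax classifier
```

So from the caller's point of view you hand over the whole sequence; the "one by one, 100 times" part happens inside the recurrent layer.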