Vinay M (rmdort)


Open session Modal

MOM.modal.openSessionModal(options);

Options

title: 'Your session is about to expire',
message: 'You will be logged out in <strong>{time}</strong> seconds. Do you want to stay logged in?',
confirm_action: 'javascript:fnConfirmSessionExtend()',

from autobahn.twisted.websocket import WebSocketServerProtocol, WebSocketServerFactory, listenWS
from twisted.python import log
from autobahn.util import newid, utcnow
import Cookie
from chatterbot import ChatBot
from chatterbot.trainers import ChatterBotCorpusTrainer
from chatterbot.conversation import Statement
from users import Users
import sys
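
No description survived for this gist, but the imports point to a ChatterBot served over a Twisted/Autobahn WebSocket. Below is a minimal sketch of how those imports are typically wired together; the protocol class, bot name, port, and trainer call are illustrative assumptions, not taken from the gist.

from twisted.internet import reactor

# Illustrative wiring only. The trainer API differs across ChatterBot releases:
# 1.x uses ChatterBotCorpusTrainer(bot).train(...); older 0.7.x used bot.set_trainer(...).
bot = ChatBot("WebBot")
ChatterBotCorpusTrainer(bot).train("chatterbot.corpus.english")

class ChatServerProtocol(WebSocketServerProtocol):
    def onMessage(self, payload, isBinary):
        # Feed each text frame to the bot and send the reply back on the same socket.
        if not isBinary:
            reply = bot.get_response(payload.decode("utf8"))
            self.sendMessage(str(reply).encode("utf8"), False)

factory = WebSocketServerFactory(u"ws://127.0.0.1:9000")
factory.protocol = ChatServerProtocol
listenWS(factory)
reactor.run()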
@rmdort
rmdort / toNestedArray
Last active January 17, 2017 02:53
Solr path hierarchy to Javascript object
function toNestedArray (data, rootLevel = 0, parentNode) {
let output = []
for (var i = 0; i < data.length; i++) {
var count = data[i].count
var items = data[i].name.split('/')
var hasParent = items.length > rootLevel
if (hasParent) {
let parent = rootLevel
? items.length === rootLevel
? null
@rmdort
rmdort / keras_gensim_embeddings.py
Created April 3, 2017 03:41 — forked from codekansas/keras_gensim_embeddings.py
Using Word2Vec embeddings in Keras models
from __future__ import print_function
import json
import os
import numpy as np
from gensim.models import Word2Vec
from gensim.utils import simple_preprocess
from keras.engine import Input
from keras.layers import Embedding, merge
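
The preview stops at the imports; the forked gist's point is loading pretrained Word2Vec vectors into a Keras Embedding layer. A hedged sketch of that idea, reusing the imports above (the corpus file, dimensions, and variable names are illustrative, and older gensim attribute names are used to match the 2017-era imports):

# Train (or load) word vectors with gensim, then freeze them in a Keras Embedding layer.
sentences = [simple_preprocess(line) for line in open("corpus.txt")]   # assumed corpus file
w2v = Word2Vec(sentences, size=100, min_count=1)

weights = w2v.wv.syn0                       # (vocab_size, 100); newer gensim exposes wv.vectors
word_index = {w: i for i, w in enumerate(w2v.wv.index2word)}

embedding_layer = Embedding(input_dim=weights.shape[0],
                            output_dim=weights.shape[1],
                            weights=[weights],
                            trainable=False)   # keep the pretrained vectors fixed

seq_input = Input(shape=(20,), dtype="int32")  # sequences of 20 word indices from word_index
embedded = embedding_layer(seq_input)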
@rmdort
rmdort / Attention.py
Created April 24, 2017 03:23 — forked from luthfianto/Attention.py
Keras Layer that implements an Attention mechanism for temporal data. Supports Masking. Follows the work of Raffel et al. [https://arxiv.org/abs/1512.08756]
from keras.layers.core import Layer
from keras import initializers, regularizers, constraints
from keras import backend as K
class Attention(Layer):
def __init__(self,
kernel_regularizer=None, bias_regularizer=None,
kernel_constraint=None, bias_constraint=None,
use_bias=True, **kwargs):
"""
@rmdort
rmdort / siamese_lstm.py
Created May 1, 2017 16:49 — forked from slashvar/siamese_lstm.py
LSTM siamese network (masking issues)
from keras import backend as K
from keras.layers import Input, Dense, merge, Dropout, Lambda, LSTM, Masking
from keras.models import Model, Sequential
from keras.optimizers import SGD, RMSprop, Adam, Nadam
from sys import argv
import argparse
import csv
import json
import numpy as np
import pickle
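
Only the imports survive in the preview. A minimal siamese sketch built from those same imports (shapes, sizes, and the L1 distance are assumptions, not the gist body): one LSTM instance is shared by both branches so the two sides are encoded with identical weights, and Masking is applied first since the gist title flags masking as the tricky part.

max_len, n_features = 30, 50                      # illustrative sequence shape
shared_lstm = LSTM(64)                            # one instance => weights shared across branches

left_in = Input(shape=(max_len, n_features))
right_in = Input(shape=(max_len, n_features))

left_vec = shared_lstm(Masking(mask_value=0.0)(left_in))
right_vec = shared_lstm(Masking(mask_value=0.0)(right_in))

# L1 distance between the two encodings, scored as a similarity probability.
distance = Lambda(lambda t: K.abs(t[0] - t[1]),
                  output_shape=lambda s: s[0])([left_vec, right_vec])
score = Dense(1, activation="sigmoid")(distance)

siamese = Model([left_in, right_in], score)
siamese.compile(optimizer=Adam(), loss="binary_crossentropy")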
@rmdort
rmdort / keras_attention.py
Last active December 18, 2017 13:22
Similar to the attention layer in https://github.com/synthesio/hierarchical-attention-networks. Ported to Keras 2
class AttentionLayer(Layer):
'''
Attention layer.
Usage:
lstm_layer = LSTM(dim, return_sequences=True)
attention = AttentionLayer()(lstm_layer)
sentenceEmb = merge([lstm_layer, attention], mode=lambda x:x[1]*x[0], output_shape=lambda x:x[0])
sentenceEmb = Lambda(lambda x:K.sum(x, axis=1), output_shape=lambda x:(x[0],x[2]))(sentenceEmb)
'''
def __init__(self, init='glorot_uniform', kernel_regularizer=None, bias_regularizer=None, kernel_constraint=None, bias_constraint=None, **kwargs):
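
The usage in the docstring still calls the Keras 1 merge function; since the description says the layer was ported to Keras 2, here is a hedged Keras 2 equivalent of that same usage, with merge replaced by a Lambda over a list of tensors (shapes and names are illustrative):

from keras.layers import Input, LSTM, Lambda
from keras.models import Model
from keras import backend as K

words = Input(shape=(40, 200))                    # 40 steps of 200-d word vectors
h = LSTM(100, return_sequences=True)(words)
alpha = AttentionLayer()(h)                       # per-step attention weights
weighted = Lambda(lambda x: x[0] * x[1],          # same elementwise product as the docstring
                  output_shape=lambda s: s[0])([h, alpha])
sentence = Lambda(lambda x: K.sum(x, axis=1),     # collapse the time axis
                  output_shape=lambda s: (s[0], s[2]))(weighted)
sentence_encoder = Model(words, sentence)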
@rmdort
rmdort / AttentionWithContext.py
Last active February 10, 2021 14:02 — forked from cbaziotis/AttentionWithContext.py
Keras Layer that implements an Attention mechanism, with a context/query vector, for temporal data. Supports Masking. Follows the work of Yang et al. [https://www.cs.cmu.edu/~diyiy/docs/naacl16.pdf] "Hierarchical Attention Networks for Document Classification"
class AttentionWithContext(Layer):
"""
Attention operation, with a context/query vector, for temporal data.
Supports Masking.
Follows the work of Yang et al. [https://www.cs.cmu.edu/~diyiy/docs/naacl16.pdf]
"Hierarchical Attention Networks for Document Classification"
by using a context vector to assist the attention
# Input shape
3D tensor with shape: `(samples, steps, features)`.
# Output shape
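
The docstring is cut off at the output-shape note. As a hedged sketch of the hierarchical use the description refers to (Yang et al.), assuming the layer maps (samples, steps, features) to (samples, features) and is constructible with its defaults; all sizes and names are illustrative:

from keras.layers import Input, GRU, Bidirectional, TimeDistributed, Dense
from keras.models import Model

max_sents, max_words, emb_dim = 15, 40, 100       # illustrative document shape

# Word-level encoder: one sentence of word vectors -> one attended sentence vector.
sent_in = Input(shape=(max_words, emb_dim))
word_seq = Bidirectional(GRU(50, return_sequences=True))(sent_in)
sent_vec = AttentionWithContext()(word_seq)
sentence_encoder = Model(sent_in, sent_vec)

# Sentence-level encoder: run the sentence encoder over every sentence, attend again.
doc_in = Input(shape=(max_sents, max_words, emb_dim))
sent_seq = TimeDistributed(sentence_encoder)(doc_in)
doc_seq = Bidirectional(GRU(50, return_sequences=True))(sent_seq)
doc_vec = AttentionWithContext()(doc_seq)
preds = Dense(2, activation="softmax")(doc_vec)
document_model = Model(doc_in, preds)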
@rmdort
rmdort / gist:3bd586a29febd3bd1e28bca7f5c3fcaf
Last active July 25, 2017 09:31
expose-loader redux react-redux
{ test: require.resolve("react"), loader: "expose-loader?React" },
{ test: require.resolve("react-dom"), loader: "expose-loader?ReactDOM" },
{
test: /redux\/es\/index.js/,
use: [
{
loader: 'expose-loader',
options: 'Redux'
}
]
@rmdort
rmdort / attention_lstm.py
Created August 1, 2017 14:23 — forked from mbollmann/attention_lstm.py
My attempt at creating an LSTM with attention in Keras
class AttentionLSTM(LSTM):
"""LSTM with attention mechanism
This is an LSTM incorporating an attention mechanism into its hidden states.
Currently, the context vector calculated from the attended vector is fed
into the model's internal states, closely following the model by Xu et al.
(2016, Sec. 3.1.2), using a soft attention model following
Bahdanau et al. (2014).
The layer expects two inputs instead of the usual one:
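
The docstring is truncated right after stating that the layer takes two inputs. A heavily hedged usage sketch under that contract, passing the regular sequence and the attended sequence as a list; the list-call convention, unit counts, and shapes are assumptions, not taken from the gist body.

from keras.layers import Input
from keras.models import Model

seq_in = Input(shape=(30, 64))        # the "normal" LSTM input sequence
attended = Input(shape=(20, 128))     # the 3D sequence the LSTM attends over

h = AttentionLSTM(96)([seq_in, attended])   # assumed call convention for the two inputs
model = Model([seq_in, attended], h)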