# Author - @greentfrapp
import tensorflow as tf

def attention(self, query, key, value):
    # Equation 1 in Vaswani et al. (2017): scaled dot-product attention
    # Scaled dot product between Query and Keys
    output = tf.matmul(query, key, transpose_b=True) / (tf.cast(tf.shape(query)[2], tf.float32) ** 0.5)
    # Softmax to get attention weights
    attention_weights = tf.nn.softmax(output)
    # Multiply weights by Values
    weighted_sum = tf.matmul(attention_weights, value)
    return weighted_sum
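For context, a minimal hypothetical usage sketch (my addition, not part of the original gist), assuming TensorFlow 2.x eager execution and query/key/value tensors of shape [batch, seq_len, depth]:

# Hypothetical usage sketch (my addition). `self` is unused in the body,
# so None is passed when calling the method as a plain function.
query = tf.random.normal([2, 5, 64])   # [batch, seq_len, depth]
key   = tf.random.normal([2, 5, 64])
value = tf.random.normal([2, 5, 64])
out = attention(None, query, key, value)
print(out.shape)  # (2, 5, 64)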
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
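The class is cut off before its fully connected layers and forward pass. As a hedged sketch of how it likely continues, here is the standard head from the PyTorch CIFAR-10 tutorial, which this snippet appears to follow (assuming 32x32 RGB inputs):

        # Continuation sketch (assumption): the usual fully connected head
        # from the PyTorch CIFAR-10 tutorial, for 32x32 RGB inputs.
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))  # conv -> relu -> pool
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)            # flatten
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)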
from fastText import load_model

classifier = load_model("model_tweet.bin")
texts = ['Life is good', 'Life is great', 'Life is bad']
labels = classifier.predict(texts)
print(labels)
# preprocessing
import nltk
# word2vec model
import gensim

text_sample = """Renewed fighting has broken out in South Sudan between forces loyal to the president and vice-president. A reporter in the capital, Juba, told the BBC gunfire and large explosions could be heard all over the city; he said heavy artillery was being used. More than 200 people are reported to have died in clashes since Friday. The latest violence came hours after the UN Security Council called on the warring factions to immediately stop the fighting. In a unanimous statement, the council condemned the violence "in the strongest terms" and expressed "particular shock and outrage" at attacks on UN sites. It also called for additional peacekeepers to be sent to South Sudan.
Chinese media say two Chinese UN peacekeepers have now died in Juba. Several other peacekeepers have been injured, as well as a number of civilians who have been caught in crossfire. The latest round of violence erupted when troops loyal to President Salva Kiir and first Vic"""
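The gist is truncated at this point. As a minimal sketch of where it seems to be headed (my assumption: tokenize the sample with nltk and fit a gensim 4.x Word2Vec model), something like the following would work:

# Continuation sketch (my assumption, not the original gist):
# split the sample into sentences, tokenize, and train Word2Vec on it.
nltk.download('punkt')
sentences = [nltk.word_tokenize(s.lower()) for s in nltk.sent_tokenize(text_sample)]
model = gensim.models.Word2Vec(sentences, vector_size=100, window=5, min_count=1)
print(model.wv.most_similar('juba', topn=3))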
import math
from collections import Counter

def cosine_similarity_ngrams(a, b):
    vec1 = Counter(a)
    vec2 = Counter(b)
    intersection = set(vec1.keys()) & set(vec2.keys())
    numerator = sum([vec1[x] * vec2[x] for x in intersection])
    sum1 = sum([vec1[x]**2 for x in vec1.keys()])
    sum2 = sum([vec2[x]**2 for x in vec2.keys()])
    denominator = math.sqrt(sum1) * math.sqrt(sum2)
    if not denominator:
        return 0.0
    return float(numerator) / denominator
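A brief hypothetical usage example (my addition): build character trigrams for two strings with a small helper, then compare them with the function above.

# Hypothetical usage (my addition): character trigrams of two strings.
def ngrams(s, n=3):
    return [s[i:i + n] for i in range(len(s) - n + 1)]

print(cosine_similarity_ngrams(ngrams("machine learning"), ngrams("machine learner")))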
import dgl
import torch as th

g = dgl.DGLGraph()
g.add_nodes(10)
# A couple of edges, added one by one
for i in range(1, 4):
    g.add_edge(i, 0)
# A few more, added as a paired list
src = list(range(5, 8)); dst = [0] * 3
g.add_edges(src, dst)
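As a quick hypothetical check (my addition), the resulting graph should have 10 nodes and 3 + 3 = 6 edges, assuming the older DGLGraph API used above; node features can then be attached through ndata:

# Hypothetical check (my addition): 10 nodes, 6 edges.
print(g.number_of_nodes(), g.number_of_edges())
# Attach a random 3-dimensional feature vector to every node.
g.ndata['x'] = th.randn(10, 3)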
import os
import tensorflow as tf

if __name__ == '__main__':
    # Step 1 - Inspect the test data
    for example in tf.python_io.tf_record_iterator("test_data/before_2011_in_tr/in_tr.tfrecord"):
        print(tf.train.Example.FromString(example))
    # Step 2 - Train the model on the test data
A.) Build
    i. Install NPM Dependencies
    ii. Run ES-Linter
    iii. Run Code-Minifier
B.) Test
    i. Run unit, functional, and end-to-end tests
    ii. Run pkg to compile the Node.js application
C.) Deploy
    i. Production
        1.) Launch EC2 instance on AWS
import numpy as np
import sys

q = 13
A = np.array([[4, 1, 11, 10], [5, 5, 9, 5], [3, 9, 0, 10], [1, 3, 3, 2], [12, 7, 3, 4], [6, 5, 11, 4], [3, 3, 5, 0]])
sA = np.array([[6], [9], [11], [11]])
eA = np.array([[0], [-1], [1], [1], [1], [0], [-1]])
bA = np.matmul(A, sA) % q
print(bA)
bA = np.add(bA, eA) % q
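This reads like a toy Learning-With-Errors (LWE) setup: bA ends up as (A·sA + eA) mod q, a noisy linear system over the integers mod 13. As a small sanity check (my addition), the error vector can be recovered when the secret sA is known:

# Sanity check (my addition): with the secret sA known, subtracting A.sA
# recovers the error term modulo q, since bA = (A.sA + eA) mod q.
recovered_e = (bA - np.matmul(A, sA)) % q
print(recovered_e.flatten())  # equals eA mod q, i.e. [0, 12, 1, 1, 1, 0, 12]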
# FGSM example code in PyTorch
import torch

def fgsm_attack(image, epsilon, data_grad):
    # Collect the element-wise sign of the data gradient
    sign_data_grad = data_grad.sign()
    # Create the perturbed image by adjusting each pixel of the input image
    perturbed_image = image + epsilon * sign_data_grad
    # Add clipping to maintain the [0, 1] range
    perturbed_image = torch.clamp(perturbed_image, 0, 1)
    # Return the perturbed image
    return perturbed_image
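A minimal hypothetical usage sketch (my addition), assuming a trained `model` and a target `label` already exist; it computes the input gradient for one image and perturbs it with the function above:

# Hypothetical usage (my addition): `model` and `label` are assumed to exist.
image = torch.rand(1, 3, 32, 32, requires_grad=True)
loss = torch.nn.functional.cross_entropy(model(image), label)
loss.backward()
adv_image = fgsm_attack(image, epsilon=0.03, data_grad=image.grad.data)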