Nghi D. Q. Bui (bdqnghi)

bdqnghi / weight_init.py
Created September 26, 2018 12:14 — forked from jeasinema/weight_init.py
A simple script for parameter initialization for PyTorch
#!/usr/bin/env python
# -*- coding:UTF-8 -*-
import torch
import torch.nn as nn
import torch.nn.init as init
def weight_init(m):
'''
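The preview above cuts off at the opening docstring. As a rough sketch of the pattern such a script follows — a function that dispatches on layer type and is applied with model.apply() — the following is illustrative only; the layer coverage and initializers chosen here are assumptions, not the gist's own.
import torch.nn as nn
import torch.nn.init as init

def weight_init_sketch(m):
    # Dispatch on the module type and initialize its parameters in place.
    if isinstance(m, (nn.Linear, nn.Conv2d)):
        init.xavier_normal_(m.weight)
        if m.bias is not None:
            init.zeros_(m.bias)
    elif isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
        init.ones_(m.weight)
        init.zeros_(m.bias)

# usage: model.apply(...) walks every submodule recursively
model = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 2))
model.apply(weight_init_sketch)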
bdqnghi / rank_metrics.py
Created May 22, 2018 19:15 — forked from bwhite/rank_metrics.py
Ranking Metrics
"""Information Retrieval metrics
Useful Resources:
http://www.cs.utexas.edu/~mooney/ir-course/slides/Evaluation.ppt
http://www.nii.ac.jp/TechReports/05-014E.pdf
http://www.stanford.edu/class/cs276/handouts/EvaluationNew-handout-6-per.pdf
http://hal.archives-ouvertes.fr/docs/00/72/67/60/PDF/07-busa-fekete.pdf
Learning to Rank for Information Retrieval (Tie-Yan Liu)
"""
import numpy as np
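The preview stops after the import. To show the flavor of metric such a module provides, here is a small NumPy sketch of DCG/NDCG at k; the function names and the exact discounting convention are mine, not necessarily the gist's.
import numpy as np

def dcg_at_k(relevances, k):
    # Discounted cumulative gain over the top-k results, log2 position discount.
    rel = np.asarray(relevances, dtype=float)[:k]
    if rel.size == 0:
        return 0.0
    return float(np.sum(rel / np.log2(np.arange(2, rel.size + 2))))

def ndcg_at_k(relevances, k):
    # Normalize by the DCG of the ideal (descending) ordering.
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

print(ndcg_at_k([3, 2, 3, 0, 1, 2], k=6))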
bdqnghi / gist:54b5ffeb22447336bde51f61d9fcedf7
Created September 27, 2017 17:56 — forked from brendano/gist:963c826e7109a5e50d54
papers that do NLP-like stuff with source code
NLP and source code papers, very scattered and partial listing
(collected by Nathan Schneider and Brendan O'Connor)
ICML 2014
Maddison and Tarlow
Structured Generative Models of Natural Source Code
http://jmlr.org/proceedings/papers/v32/maddison14.pdf
ACL 2013
bdqnghi / subexpr.py
Created September 19, 2017 18:09 — forked from anj1/subexpr.py
import types
import tensorflow as tf
import numpy as np
# Expressions are represented as lists of lists,
# in lisp style -- the symbol name is the head (first element)
# of the list, and the arguments follow.
# add an expression to an expression list, recursively if necessary.
def add_expr_to_list(exprlist, expr):
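The preview ends at the function signature. To make the list-of-lists representation concrete, here is one hedged guess at how such a helper could behave — recursing into arguments and reusing entries so identical subexpressions are stored only once; the sharing behaviour and the index return value are assumptions about the gist's intent, not its actual code.
def add_expr_to_list_sketch(exprlist, expr):
    # Leaves (symbols/constants) are stored as-is; compound expressions are
    # stored as [op, child_index, child_index, ...] after their children.
    if isinstance(expr, list):
        node = [expr[0]] + [add_expr_to_list_sketch(exprlist, a) for a in expr[1:]]
    else:
        node = expr
    if node in exprlist:   # reuse an identical entry instead of duplicating it
        return exprlist.index(node)
    exprlist.append(node)
    return len(exprlist) - 1

exprs = []
add_expr_to_list_sketch(exprs, ['mul', ['add', 'a', 'b'], ['add', 'a', 'b']])
# exprs == ['a', 'b', ['add', 0, 1], ['mul', 2, 2]]  -- the shared (a + b) is stored once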
bdqnghi / simple_mlp_tensorflow.py
Created July 2, 2017 19:56 — forked from vinhkhuc/simple_mlp_tensorflow.py
Simple Feedforward Neural Network using TensorFlow
# Implementation of a simple MLP network with one hidden layer. Tested on the iris data set.
# Requires: numpy, sklearn>=0.18.1, tensorflow>=1.0
# NOTE: In order to make the code simple, we rewrite x * W_1 + b_1 = x' * W_1'
# where x' = [x | 1] and W_1' is the matrix W_1 appended with a new row with elements b_1's.
# Similarly, for h * W_2 + b_2
import tensorflow as tf
import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split
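A quick NumPy check of the rewrite described in the NOTE above (append a constant-1 column to x and a bias row to W_1); the shapes used here are placeholders for illustration.
import numpy as np

x = np.random.randn(5, 4)                  # batch of 5 examples, 4 features
W1 = np.random.randn(4, 3)                 # hidden-layer weights
b1 = np.random.randn(3)                    # hidden-layer biases

x_aug = np.hstack([x, np.ones((5, 1))])    # x' = [x | 1]
W1_aug = np.vstack([W1, b1])               # W_1' = W_1 with b_1 appended as a row

# The two formulations agree: x * W_1 + b_1 == x' * W_1'
assert np.allclose(x @ W1 + b1, x_aug @ W1_aug)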
bdqnghi / autoencoder.py
Created May 29, 2017 19:42 — forked from saliksyed/autoencoder.py
Tensorflow Auto-Encoder Implementation
""" Deep Auto-Encoder implementation
An auto-encoder works as follows:
Data of dimension k is reduced to a lower dimension j using a matrix multiplication:
softmax(W*x + b) = x'
where W is a matrix mapping R^k --> R^j
A reconstruction matrix W' maps back from R^j --> R^k
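A tiny NumPy illustration of the mapping the docstring describes — encode x in R^k to x' in R^j with softmax(W*x + b), then map back to R^k with W'; the dimensions and random weights are placeholders, not the gist's TensorFlow code.
import numpy as np

def softmax(z):
    z = z - np.max(z)             # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

k, j = 8, 3                        # input dimension k, code dimension j
x  = np.random.randn(k)
W  = np.random.randn(j, k)         # encoder: R^k --> R^j
b  = np.random.randn(j)
Wp = np.random.randn(k, j)         # reconstruction matrix W': R^j --> R^k
bp = np.random.randn(k)

code  = softmax(W @ x + b)         # x' in R^j
x_hat = Wp @ code + bp             # reconstruction in R^k
reconstruction_error = np.mean((x - x_hat) ** 2)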
bdqnghi / gist:325e4ff7431cc9434d67d580312adcbc
Created April 27, 2017 15:13 — forked from debasishg/gist:8172796
A collection of links for streaming algorithms and data structures
1. General Background and Overview
bdqnghi / README.markdown
Created April 17, 2017 18:07 — forked from alloy/README.markdown
Learn the LLVM C++ API by example.

The easiest way to start learning the LLVM C++ API is to have LLVM generate the API calls needed to rebuild a given code sample. In this example it emits the C++ code required to rebuild test.c through the LLVM API:

$ clang -c -emit-llvm test.c -o test.ll
$ llc -march=cpp test.ll -o test.cpp
bdqnghi / llvm create cfg.sh
Created April 17, 2017 17:20
create CFG with llvm
#!/bin/bash
# create .bc file
clang -emit-llvm -c testcfg.c
# create .dot file
opt -dot-cfg testcfg.bc
# create png (use -Tpdf for pdf)
dot -Tpng -o main.png cfg.main.dot