Simon Frost sdwfrost

sdwfrost / gpu.js
Created February 23, 2018 16:27
UMD version of gpu.js
/* BEGIN node2umd PREFIX */
;(function(define) { define(function(require, exports, module) {
/* END node2umd PREFIX */
/**
* gpu.js
* http://gpu.rocks/
*
* GPU Accelerated JavaScript
*
* @version 1.0.0-rc.10
sdwfrost / gist:e8a83f7a2ffdc0375fdda9a7d27acd36
Created February 23, 2018 09:11 — forked from Flexi23/gist:1713774
GLSL 2D vector buffer in a texture with a custom floating point precision
/*
These are the helper functions to store and restore a 2D vector in a texture with a custom 16-bit floating point precision.
The 16 bits are used as follows: 1 bit for the sign, 4 bits for the exponent, and the remaining 11 bits for the mantissa.
The exponent bias is asymmetric, so that the maximum representable number is 2047 (bigger numbers will be cut).
The accuracy from 1024-2047 is one integer;
from 512-1023 it is 1/2 of an integer;
from 256-511 it is 1/4, and so forth...
Between 0 and 1/16 the accuracy is highest, at 1/2048 (which makes 1/32768 the minimum representable number).
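To make the bit layout above concrete, here is a small Python sketch that packs a sign bit, a 4-bit exponent and an 11-bit mantissa into a 16-bit integer and decodes it again. The bias, rounding and flush-to-zero behaviour are illustrative assumptions, not necessarily those of the GLSL helpers.

import math

EXP_BITS, MAN_BITS = 4, 11
BIAS = 5  # assumed bias: the largest encodable magnitude is then just under 2048

def encode16(x):
    # Pack x into sign (1 bit), exponent (4 bits), mantissa (11 bits).
    sign = 1 if x < 0 else 0
    x = abs(x)
    if x == 0.0:
        return sign << 15
    e = max(0, min(15, math.floor(math.log2(x)) + BIAS))
    scale = 2.0 ** (e - BIAS)
    m = max(0, min((1 << MAN_BITS) - 1, round((x / scale - 1.0) * (1 << MAN_BITS))))
    return (sign << 15) | (e << MAN_BITS) | m  # values below 2**-BIAS flush to zero in this sketch

def decode16(bits):
    if bits & 0x7FFF == 0:
        return 0.0
    sign = -1.0 if (bits >> 15) & 1 else 1.0
    e = (bits >> MAN_BITS) & 0xF
    m = bits & ((1 << MAN_BITS) - 1)
    return sign * (1.0 + m / (1 << MAN_BITS)) * 2.0 ** (e - BIAS)

print(decode16(encode16(1234.5)))  # ~1234.5, quantised to the 11-bit mantissa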
/* This is a naive implementation of Jonathan McCabe's elaboration of
* Alan Turing's model of morphogenesis, as described here:
* http://www.jonathanmccabe.com/Cyclic_Symmetric_Multi-Scale_Turing_Patterns.pdf
*/
extern crate rand;
use rand::Rng;
use std::env;
use std::io::Write;
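As a rough companion to the comment above, here is a hedged NumPy sketch of McCabe's multi-scale update (not a translation of the Rust code): at each scale the grid is blurred at an activator radius and at a larger inhibitor radius, and every cell is nudged at the scale where the activator-inhibitor difference is smallest. The radii and step sizes are arbitrary choices for illustration.

import numpy as np
from scipy.ndimage import uniform_filter

def turing_step(grid, scales=((50, 100, 0.05), (10, 20, 0.04), (3, 6, 0.03))):
    # scales: (activator radius, inhibitor radius, step size) for each scale
    acts = np.stack([uniform_filter(grid, size=a, mode='wrap') for a, _, _ in scales])
    inhs = np.stack([uniform_filter(grid, size=i, mode='wrap') for _, i, _ in scales])
    diff = acts - inhs
    best = np.argmin(np.abs(diff), axis=0)        # per cell, the scale with least variation
    rows, cols = np.indices(grid.shape)
    steps = np.array([s for _, _, s in scales])
    grid = grid + steps[best] * np.sign(diff[best, rows, cols])
    lo, hi = grid.min(), grid.max()
    return 2.0 * (grid - lo) / (hi - lo) - 1.0    # renormalise to [-1, 1]

rng = np.random.default_rng(0)
g = rng.uniform(-1.0, 1.0, (256, 256))
for _ in range(20):
    g = turing_step(g)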
sdwfrost / edges.csv
Created February 22, 2018 11:28
Subset of data from Kenyan Households Contact Data (household E, day 1) http://www.sociopatterns.org/datasets/kenyan-households-contact-network/
source target hour duration
2 4 7 40
2 4 7 80
2 4 7 80
2 4 7 80
2 4 7 20
2 4 7 40
2 4 7 80
2 4 7 200
2 4 7 80
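As a quick usage sketch (assuming the whitespace-separated layout shown above and the file name edges.csv), the rows can be aggregated into a weighted edge list of total contact duration per pair:

import pandas as pd

edges = pd.read_csv("edges.csv", sep=r"\s+")
weights = edges.groupby(["source", "target"], as_index=False)["duration"].sum()
print(weights)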
sdwfrost / julia_nim_cpp_r_sir.md
Last active July 2, 2022 14:15
Comparing simple simulations in Julia, Nim, C++ and R

This gist compares the performance of Julia, Nim, C++, and R (the latter using either POMP or LibBi) in a simple simulation of an SIR epidemiological model. In addition to keeping track of susceptibles, infecteds, and recovereds, I also store the cumulative number of infections. Time moves in discrete steps, and the algorithm avoids language-specific syntax features to make the comparison as fair as possible, including using the same algorithm for generating binomial random numbers and the same random number generator. The exception is the R versions: POMP uses the standard R Mersenne Twister as its random number generator, and I'm not sure what LibBi uses. The algorithm for generating binomial random numbers is only really suitable for small np.

Benchmarks were run on a Mac Pro (Late 2013), with a 3 GHz 8-core Intel Xeon E3, 64 GB of 1866 MHz RAM, running OS X 10.11.3 (El Capitan).
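For reference, here is a minimal Python sketch of the kind of discrete-time binomial SIR step being benchmarked. It is an illustration only, not one of the benchmarked implementations; the naive Bernoulli-sum binomial draw below is a stand-in for the small-np algorithm, and the parameter values are arbitrary.

import math
import random

def binom(rng, n, p):
    # naive Bernoulli-sum binomial draw (a stand-in, not necessarily the gist's algorithm)
    return sum(rng.random() < p for _ in range(n))

def sir_discrete(beta, gamma, dt, S0, I0, nsteps, seed=42):
    # Discrete-time stochastic SIR, also tracking cumulative infections.
    rng = random.Random(seed)
    S, I, R = S0, I0, 0
    N = S0 + I0
    cum = I0
    for _ in range(nsteps):
        p_inf = 1.0 - math.exp(-beta * I / N * dt)  # per-susceptible infection probability
        p_rec = 1.0 - math.exp(-gamma * dt)         # per-infected recovery probability
        new_inf = binom(rng, S, p_inf)
        new_rec = binom(rng, I, p_rec)
        S -= new_inf
        I += new_inf - new_rec
        R += new_rec
        cum += new_inf
    return S, I, R, cum

print(sir_discrete(beta=0.1, gamma=0.05, dt=0.1, S0=999, I0=1, nsteps=4000))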

sdwfrost / pomp_integrator.ipynb
Created November 29, 2017 13:19
Discrete SIR model using the DiffEq integrator interface
sdwfrost / morris-lecar.ipynb
Created November 7, 2017 19:36
Simulation of a stochastic Morris-Lecar model
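The notebook itself does not render in this listing. As a rough illustration only, here is an Euler-Maruyama sketch of a stochastic Morris-Lecar neuron with additive current noise; the parameter values are standard textbook ones and are not taken from the gist.

import math
import random

def morris_lecar(T=500.0, dt=0.01, I=90.0, sigma=5.0, seed=1):
    # Assumed, typical Morris-Lecar parameters (not from the notebook)
    C, gL, gCa, gK = 20.0, 2.0, 4.4, 8.0
    VL, VCa, VK = -60.0, 120.0, -84.0
    V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
    rng = random.Random(seed)
    V, w = -60.0, 0.0
    trace = []
    for _ in range(int(T / dt)):
        minf = 0.5 * (1.0 + math.tanh((V - V1) / V2))
        winf = 0.5 * (1.0 + math.tanh((V - V3) / V4))
        tauw = 1.0 / math.cosh((V - V3) / (2.0 * V4))
        dV = (I - gL * (V - VL) - gCa * minf * (V - VCa) - gK * w * (V - VK)) / C
        dw = phi * (winf - w) / tauw
        V += dV * dt + (sigma / C) * math.sqrt(dt) * rng.gauss(0.0, 1.0)  # additive voltage noise
        w += dw * dt
        trace.append(V)
    return trace

v = morris_lecar()
print(min(v), max(v))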
# Example for my blog post at:
# https://danijar.com/introduction-to-recurrent-networks-in-tensorflow/
import functools
import sets
import tensorflow as tf
def lazy_property(function):
attribute = '_' + function.__name__
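The preview above is cut off after the first line of lazy_property. A typical completion of this caching-property pattern looks roughly like the following; the exact body in the original post may differ.

import functools

def lazy_property(function):
    # Cache the result of a property the first time it is accessed, so that
    # graph-building methods run only once per model instance.
    attribute = '_' + function.__name__

    @property
    @functools.wraps(function)
    def wrapper(self):
        if not hasattr(self, attribute):
            setattr(self, attribute, function(self))
        return getattr(self, attribute)

    return wrapper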
sdwfrost / lstm.py
Created May 17, 2017 11:54 — forked from pranv/lstm.py
An Efficient, Batched, Stateful LSTM layer in Numpy
import numpy as np
from utils import orthogonal, tanh, sigmoid, dtanh, dsigmoid
class LSTM(object):
"""Long Short Term Memory Unit
Parameters
----------
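For orientation, here is a minimal NumPy sketch of a single LSTM forward step showing the gate equations such a layer implements; the variable names, gate ordering, and the batched/stateful bookkeeping of the actual gist will differ.

import numpy as np

def lstm_step(x, h_prev, c_prev, W, b):
    # x: (batch, n_in); h_prev, c_prev: (batch, n_hidden);
    # W: (n_in + n_hidden, 4 * n_hidden); b: (4 * n_hidden,)
    z = np.concatenate([x, h_prev], axis=1) @ W + b
    n = h_prev.shape[1]
    i = 1.0 / (1.0 + np.exp(-z[:, :n]))        # input gate
    f = 1.0 / (1.0 + np.exp(-z[:, n:2*n]))     # forget gate
    o = 1.0 / (1.0 + np.exp(-z[:, 2*n:3*n]))   # output gate
    g = np.tanh(z[:, 3*n:])                    # candidate cell update
    c = f * c_prev + i * g                     # new cell state
    h = o * np.tanh(c)                         # new hidden state
    return h, c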
sdwfrost / batch-lstm.R
Created May 17, 2017 11:53 — forked from georgeblck/batch-lstm.R
An efficient, batched LSTM in R
###
### This is a batched LSTM forward and backward pass. Written by Andrej Karpathy (@karpathy)
### BSD License
### Re-written in R by @georgeblck
###
rm(list=ls(all=TRUE))
LSTM.init <- function(input_size, hidden_size, fancy_forget_bias_init = 3){
# Initialize parameters of the LSTM (both weights and biases in one matrix)
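In the same spirit as LSTM.init above, here is a NumPy sketch of initialising all weights and biases in one matrix, with the forget-gate bias started at a positive value so that the cell state is retained early in training. The gate ordering and layout are assumptions, not a translation of the R code.

import numpy as np

def lstm_init(input_size, hidden_size, fancy_forget_bias_init=3.0, seed=0):
    rng = np.random.default_rng(seed)
    rows = input_size + hidden_size + 1                   # the extra row holds the biases
    W = rng.normal(0.0, 0.01, size=(rows, 4 * hidden_size))
    W[-1, :] = 0.0                                        # zero all biases...
    W[-1, hidden_size:2 * hidden_size] = fancy_forget_bias_init  # ...except the forget gate (assumed slot)
    return W

print(lstm_init(10, 20).shape)  # (31, 80)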