@redwrasse
redwrasse / mock_refmon.go
Last active February 19, 2022 06:01
A mock reference monitor for accessing a top secret recipe
// A mock reference monitor design for a mock security system, enforcing the following access control policy:
// only the user 'Alice', who identifies herself on the requested inputs
// with her name and her birthday of '01.01.1980', is granted access to
// the top secret recipe file, stored on disk.
// This reference monitor design provides (some) security for the given access policy
// by a) storing a hash, rather than the values, of the user inputs (name, birthday) to compare against,
// and b) using a salted hash of the valid inputs as a symmetric key for AES-encrypting the top secret recipe file beforehand
// and for decrypting it before returning it to Alice.
//
// Of course these comments would not be kept in the source code.
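The same design can be sketched briefly. Below is a minimal Python illustration of the idea (the gist itself is in Go); the salt value, the key-derivation scheme, and the use of the cryptography package's Fernet as a stand-in for AES are assumptions for illustration, not the gist's code.
# Minimal Python sketch of the design above (illustrative assumptions, not the gist's code).
import base64
import hashlib
from cryptography.fernet import Fernet  # symmetric-cipher stand-in for AES

SALT = b"some-fixed-salt"
# Stored at setup time: a salted hash of the valid (name, birthday) inputs, never the values themselves.
VALID_HASH = hashlib.sha256(SALT + b"Alice|01.01.1980").hexdigest()

def derive_key(name: str, birthday: str) -> bytes:
    # The salted hash of the inputs doubles as the symmetric key material.
    digest = hashlib.sha256(SALT + f"{name}|{birthday}".encode()).digest()
    return base64.urlsafe_b64encode(digest)  # Fernet expects a 32-byte urlsafe base64 key

def request_recipe(name: str, birthday: str, ciphertext: bytes) -> bytes:
    # ciphertext would have been produced at setup with the valid inputs' key:
    # ciphertext = Fernet(derive_key("Alice", "01.01.1980")).encrypt(recipe_bytes)
    supplied = hashlib.sha256(SALT + f"{name}|{birthday}".encode()).hexdigest()
    if supplied != VALID_HASH:
        raise PermissionError("access denied")
    return Fernet(derive_key(name, birthday)).decrypt(ciphertext)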
@redwrasse
redwrasse / mc1.py
Created July 2, 2020 01:34
monte carlo integration of functions on subsets of the real line
"""
Monte Carlo integration of functions on subsets of the real line,
using uniform probability distributions
"""
import math
import random
def montecarlo(f, g, a, b):
"""
@redwrasse
redwrasse / objectrackerstep1.html
Created July 17, 2020 01:41
object tracker step 1
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Object Tracking Service</title>
</head>
<body>
<h1>Object Tracker</h1>
<video controls width="1250">
@redwrasse
redwrasse / conv_ar.py
Last active August 11, 2020 19:08
demonstrating an auto-regressive model (motivating a full generative model) as a trained convolutional layer
"""
---------------------------------------------------
Output:
epoch loss: 78.85499735287158
epoch loss: 0.0008048483715437094
epoch loss: 7.917497569703835e-06
epoch loss: 7.784523854692527e-08
epoch loss: 1.082900831506084e-09
"""
Currently trains with decreasing loss
*** epoch: 0 epoch loss: 276.47448682785034
*** epoch: 1 epoch loss: 216.9058997631073
*** epoch: 2 epoch loss: 190.01888144016266
*** epoch: 3 epoch loss: 171.68642991781235
*** epoch: 4 epoch loss: 157.7317717075348
*** epoch: 5 epoch loss: 145.89844578504562
...
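A minimal sketch of the idea, assuming a PyTorch-style setup (the gist's actual framework, data, and hyperparameters may differ): an AR(p) model x_t ≈ a_1 x_{t-1} + ... + a_p x_{t-p} is a single 1-D convolution with kernel size p, so fitting the convolution by least squares recovers the AR coefficients.
# Minimal sketch (assumed PyTorch; not the gist's exact code): an AR(p) model as one Conv1d layer.
import torch
import torch.nn as nn

p = 3                                     # AR order / kernel size
true_coeffs = torch.tensor([0.5, -0.2, 0.1])

# Generate a toy AR(p) series.
x = torch.zeros(500)
for t in range(p, len(x)):
    x[t] = (true_coeffs * x[t - p:t].flip(0)).sum() + 0.01 * torch.randn(1).item()

conv = nn.Conv1d(1, 1, kernel_size=p, bias=False)
opt = torch.optim.SGD(conv.parameters(), lr=0.1)

inp = x[:-1].view(1, 1, -1)               # x_0 .. x_{T-2}
target = x[p:].view(1, 1, -1)             # x_p .. x_{T-1}

for epoch in range(200):
    opt.zero_grad()
    pred = conv(inp)                      # predictions for x_p .. x_{T-1}
    loss = ((pred - target) ** 2).sum()
    loss.backward()
    opt.step()
# conv.weight now approximates the (reversed) AR coefficients.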
@redwrasse
redwrasse / backprop_ex.py
Last active February 11, 2021 05:51
example backprop
"""
Backprop algorithm on F[a] = x a^2 + a.
Each node function f_i needs:
- a method to calculate its value on an input, f_i(x_i)
- a method to calculate its derivative value on an input, K_ij
- a method to calculate its parameter derivative on an input, xi_i
Suppose a single input x in R, and that the functional to be
optimized is F[a] = x a^2 + a.
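A minimal sketch of the gradient computation itself, assuming plain reverse-mode accumulation rather than the gist's node/K_ij/xi_i classes.
# Minimal sketch: reverse-mode gradient of F[a] = x * a^2 + a (illustrative, not the gist's structure).
x = 3.0     # fixed input
a = 0.5     # parameter to optimize

# Forward pass, kept as explicit nodes.
u = a * a          # node 1: u = a^2
v = x * u          # node 2: v = x * u
F = v + a          # node 3: F = v + a

# Backward pass: accumulate dF/da through each node.
dF_dv = 1.0                  # from F = v + a
dF_du = dF_dv * x            # from v = x * u
dF_da = dF_du * 2 * a + 1.0  # u = a^2 contributes 2a; the '+ a' term contributes 1
print(F, dF_da)              # analytic check: dF/da = 2*x*a + 1 = 4.0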
@redwrasse
redwrasse / gist:221f0d2bb566c616697d3e509e31d784
Created November 12, 2020 21:51
learning query completion as a product of next-character models
"""
Attempting to learn query completion as a product of next-character models:
P(x_c|x_q) = prod_i P(x_i | x_1:i-1)
This does not scale well for documents of any reasonable size; shorter context lengths or other approximations are needed.
"""
import numpy as np
import random
import tensorflow.keras as keras
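A minimal sketch of one next-character model in Keras; the architecture, vocabulary size, and context length below are illustrative assumptions, not the gist's exact model.
# Minimal sketch of a next-character model (assumed sizes; trained on (context, next-char) pairs).
vocab_size = 64      # number of distinct characters
context_len = 20     # characters of history used to predict the next one

model = keras.Sequential([
    keras.layers.Embedding(vocab_size, 32),
    keras.layers.LSTM(128),
    keras.layers.Dense(vocab_size, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(X, y) with X of shape (n, context_len) holding character ids and
# y of shape (n,) holding the id of the next character.
# A completion probability P(x_c|x_q) is then the product of these per-position predictions.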
@redwrasse
redwrasse / array_reinforcement_learning.py
Created November 13, 2020 17:46
Toy reinforcement learning on an array
# array_reinforcement_learning.py
"""
array_reinforcement_learning
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Reinforcement learning is performed on a 1-dimensional
finite state space ("array") of k elements:
S = {1,...,k}
There are two possible actions: move right (a = 1), or move left (a = -1),
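A minimal sketch of tabular Q-learning on such an array; the reward structure (reward 1 for reaching state k, which ends the episode) is an assumption for illustration, not necessarily the gist's setup.
# Minimal sketch: epsilon-greedy tabular Q-learning on S = {1,...,k} with actions +1 / -1.
import random

k = 10
actions = [1, -1]                       # move right / move left
Q = {(s, a): 0.0 for s in range(1, k + 1) for a in actions}
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(s, a):
    s2 = min(max(s + a, 1), k)          # clamp to {1,...,k}
    reward = 1.0 if s2 == k else 0.0    # assumed reward: reach the right edge
    return s2, reward, s2 == k          # episode ends at state k

for episode in range(500):
    s = random.randint(1, k - 1)
    done = False
    while not done:
        a = random.choice(actions) if random.random() < eps else max(actions, key=lambda b: Q[(s, b)])
        s2, r, done = step(s, a)
        target = r + (0.0 if done else gamma * max(Q[(s2, b)] for b in actions))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2
# After training, the greedy action in every state should be to move right.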
@redwrasse
redwrasse / gd_gaussian.py
Last active November 20, 2020 23:03
gradient descent solving mu, sigma for generative gaussian
# -*- coding: utf-8 -*-
"""
generative gaussian model, minimizing <-log p>_data wrt. mu, sigma
with gradient descent
-log p = log sigma + log sqrt(2pi) + (x - mu)^2 / (2 sigma^2)
So
grad_mu (-log p) = - (x - mu) / sigma^2
grad_sigma (-log p) = 1 / sigma - (x - mu)^2 / sigma^3
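A minimal sketch of the resulting update rule, assuming full-batch gradient descent on synthetic data (learning rate and sample size are illustrative).
# Minimal sketch: gradient descent on <-log p>_data for mu, sigma using the gradients above.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.5, size=1000)

mu, sigma = 0.0, 1.0
lr = 0.05
for _ in range(2000):
    # Gradients of <-log p>_data, averaged over the sample.
    grad_mu = np.mean(-(data - mu) / sigma**2)
    grad_sigma = np.mean(1.0 / sigma - (data - mu)**2 / sigma**3)
    mu -= lr * grad_mu
    sigma -= lr * grad_sigma
# mu and sigma approach the sample mean and standard deviation (~2.0, ~1.5).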
@redwrasse
redwrasse / gmm_gd.py
Created November 21, 2020 00:04
Attempted direct gradient descent on 2-state gaussian mixture model
# gmm_gd.py
"""
Direct gradient descent on 2-state gaussian mixture model.
Not the best way to do this; typically the EM algorithm is used instead.
Training is highly unstable.
model:
p(x) = pi * phi_1 + (1-pi) * phi_2
phi_1, phi_2 ~ normal
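A minimal sketch of direct gradient descent on the 2-component negative log-likelihood, assuming PyTorch autograd with pi and the sigmas reparameterized to stay valid (not the gist's exact code).
# Minimal sketch: gradient descent on the mixture NLL for p(x) = pi * phi_1 + (1 - pi) * phi_2.
import math
import torch

# Toy data from a 2-component mixture.
torch.manual_seed(0)
data = torch.cat([torch.randn(300) - 2.0, torch.randn(700) + 2.0])

# Unconstrained parameters: pi via sigmoid, sigmas via exp, to keep them valid.
logit_pi = torch.zeros(1, requires_grad=True)
mu = torch.tensor([-1.0, 1.0], requires_grad=True)
log_sigma = torch.zeros(2, requires_grad=True)
opt = torch.optim.Adam([logit_pi, mu, log_sigma], lr=0.05)

def log_normal(x, m, s):
    return -0.5 * ((x - m) / s) ** 2 - torch.log(s) - 0.5 * math.log(2 * math.pi)

for step in range(500):
    opt.zero_grad()
    pi = torch.sigmoid(logit_pi)
    sigma = torch.exp(log_sigma)
    log_p1 = torch.log(pi) + log_normal(data, mu[0], sigma[0])
    log_p2 = torch.log(1 - pi) + log_normal(data, mu[1], sigma[1])
    nll = -torch.logsumexp(torch.stack([log_p1, log_p2]), dim=0).mean()
    nll.backward()
    opt.step()
# mu should move toward roughly (-2, +2) and pi toward ~0.3 on this toy data.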