Simon Frost sdwfrost

@sdwfrost
sdwfrost / frequency_estimator.py
Created June 7, 2021 23:28 — forked from endolith/frequency_estimator.py
Frequency estimation methods in Python
from __future__ import division
from numpy.fft import rfft
from numpy import argmax, mean, diff, log, nonzero
from scipy.signal import blackmanharris, correlate
from time import time
import sys
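# Prefer the soundfile package for reading audio files; fall back to scikits.audiolab if it is unavailable.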
try:
    import soundfile as sf
except ImportError:
    from scikits.audiolab import flacread

GCC compiler optimization for ARM-based systems

2017-03-03 fm4dd

The gcc compiler can optimize code by taking advantage of CPU-specific features. Especially for ARM CPUs, this can have a noticeable impact on application performance. ARM CPUs, even within the same architecture, can be implemented with different versions of floating point units (FPUs). Using the FPU to its full potential improves the performance of heavier operating systems such as full Linux distributions.

-mcpu, -march: Defining the CPU type and architecture

Either flag can be used to set the CPU type: -mcpu selects a specific processor (and implies its architecture), while -march selects only the architecture. Setting one or the other is sufficient, as in the examples below.
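For example, either of the following invocations targets an ARM system explicitly (the processor and architecture names are illustrative placeholders, not taken from this document):

gcc -mcpu=cortex-a72 -O2 -o app app.c
gcc -march=armv8-a -O2 -o app app.c
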

@sdwfrost
sdwfrost / gist:e40be35990bfde49c665fe5d1455ed94
Created December 2, 2020 19:38 — forked from alirezaahrabian/gist:4efca034ff242315a97252526c82d81e
Change Detection Using Volatility Filters
% LMS change detector
function [state,h]=ChangeDetectionLMSShiftMultipleSensor1(x,WinSlow,WinFast,WinDesired,muBase)
%Slow Response to change, accurate in steady state
[m,n]=size(x);
if m>n
x=x';
end
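% rows now index sensors and columns index samples (assuming more samples than sensors)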
[NumSensors,n]=size(x);
shift=300;
@sdwfrost
sdwfrost / min-char-rnn.py
Created November 5, 2018 17:50 — forked from karpathy/min-char-rnn.py
Minimal character-level language model with a Vanilla Recurrent Neural Network, in Python/numpy
"""
Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy)
BSD License
"""
import numpy as np
# data I/O
data = open('input.txt', 'r').read() # should be simple plain text file
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
# Here's how to back up a named volume
# 1. Using an `ubuntu` image, we mount the named volume (`myproj_dbdata`) to a `/dbdata` folder inside the container.
# 2. We also bind-mount a `backups` folder from the current directory on the Docker host (your local machine) to a `/backup` folder inside the container.
# 3. We then create an archive containing the contents of the `/dbdata` folder and store it inside the `/backup` folder (inside the container).
# 4. Because `/backup` is a bind mount, the archive also appears in the `backups` folder on the host.
docker run --rm -v myproj_dbdata:/dbdata -v $(pwd)/backups:/backup ubuntu tar cvf /backup/db_data_"$(date '+%y-%m-%d')".tar /dbdata
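# To restore, the process can be reversed: mount the named volume and the host `backups` folder again,
# and unpack the archive into `/dbdata`. This is a sketch; the archive name below is illustrative,
# so substitute the file produced by the backup command above.
docker run --rm -v myproj_dbdata:/dbdata -v $(pwd)/backups:/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/db_data_YY-MM-DD.tar --strip-components=1"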
#! /usr/bin/env python
import pexpect
import pexpect.replwrap
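# Wrap a lua interpreter as a REPL: each command is sent and output is collected once lua's "> " prompt reappears.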
repl = pexpect.replwrap.REPLWrapper("lua", u"> ", None, u"> ")
output = repl.run_command("= 1 + 1", timeout=1).splitlines()[1:]
assert(int(output[0]) == 2)
# Working example for my blog post at:
# http://danijar.com/variable-sequence-lengths-in-tensorflow/
import functools
import sets
import tensorflow as tf
from tensorflow.models.rnn import rnn_cell
from tensorflow.models.rnn import rnn
def lazy_property(function):
@sdwfrost
sdwfrost / min-char-rnn-tensorflow.py
Created April 1, 2018 19:49 — forked from vinhkhuc/min-char-rnn-tensorflow.py
Vanilla Char-RNN using TensorFlow
"""
Vanilla Char-RNN using TensorFlow by Vinh Khuc (@knvinh).
Adapted from Karpathy's min-char-rnn.py
https://gist.github.com/karpathy/d4dee566867f8291f086
Requires tensorflow>=1.0
BSD License
"""
import random
import numpy as np
import tensorflow as tf
#![feature(lang_items)]
#![no_std]
#[no_mangle]
pub fn add_one(x: i32) -> i32 {
x + 1
}
// needed for no_std
@sdwfrost
sdwfrost / gist:e8a83f7a2ffdc0375fdda9a7d27acd36
Created February 23, 2018 09:11 — forked from Flexi23/gist:1713774
GLSL 2D vector buffer in a texture with a custom floating point precision
/*
These are the helper functions to store and restore a 2D vector in a texture with a custom 16-bit floating point precision.
The 16 bits are used as follows: 1 bit for the sign, 4 bits for the exponent, and the remaining 11 bits for the mantissa.
The exponent bias is asymmetric, so the maximum representable number is 2047 (bigger numbers are clipped).
The accuracy from 1024 to 2047 is one integer,
from 512 to 1023 it is 1/2 of an integer,
from 256 to 511 it is 1/4, and so forth;
between 0 and 1/16 the accuracy is the highest, with 1/2048 (which makes 1/32768 the minimum representable number).
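
For orientation, these figures appear to follow from the 11 mantissa bits: 2^11 = 2048 steps per power-of-two bracket, so the step is 2048/2048 = 1 over 1024-2047, 1024/2048 = 1/2 over 512-1023, 512/2048 = 1/4 over 256-511, and so on; at the bottom, (1/16)/2048 = 1/32768 matches the stated minimum representable number.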