
Kyle McDonald (kylemcdonald)

@cibomahto
cibomahto / format.h
Last active September 21, 2021 15:04
Pattern file format
//! @brief Pattern file recorder/playback
//!
//! The purpose is to allow capture and playback of streamed pattern data. The
//! recorder is intended to capture raw data and sync packets directly from the
//! listener (i.e., before mapping or color/brightness manipulation is applied).
//! During playback, the raw packets are sent to the mapper for processing.
//! This allows the mapping and output settings to be adjusted after recording.
//!
//! Packets are recorded with a time resolution of 1 ms.
//!
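The header above documents intent only; as a rough illustration of the record/playback loop it describes, here is a minimal Python sketch (the packet handling and in-memory layout are hypothetical — the real format lives in format.h):

import time

class PatternRecorder:
    """Record raw (timestamp_ms, packet) pairs; play them back through a mapper."""

    def __init__(self):
        self.events = []
        self.start = time.monotonic()

    def record(self, packet: bytes):
        # quantize to the 1 ms resolution noted above
        t_ms = round((time.monotonic() - self.start) * 1000)
        self.events.append((t_ms, packet))

    def playback(self, send):
        # replay each packet at its recorded offset, handing it to the mapper
        start = time.monotonic()
        for t_ms, packet in self.events:
            delay = t_ms / 1000.0 - (time.monotonic() - start)
            if delay > 0:
                time.sleep(delay)
            send(packet)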
'use strict';
const fetch = require('node-fetch');
const msautils = require('./utils');
let apikey;
function setApiKey(_apikey) {
  apikey = _apikey;
}
@knandersen
knandersen / morphagene_ableton.py
Last active August 23, 2024 01:55 — forked from ferrihydrite/morphagene_audacity.py
Allows you to use Ableton projects and exports as reels for the Make Noise Morphagene eurorack module. Since a few people have found the script not working or have had difficulty getting Python to run, I have created a web-based tool: https://knandersen.github.io/morphaweb/
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
USAGE:
morphagene_ableton.py -w <inputwavfile> -l <inputlabels> -o <outputfile>
Instructions in Ableton:
Insert locators as splice markers in your project (Create > Add Locator)
Export Audio/Video with
Sample Rate: 48000 Hz
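A hypothetical invocation, assuming an Ableton export named project.wav and its label file labels.txt (file names are placeholders; the flags are those from the usage string above):

python2 morphagene_ableton.py -w project.wav -l labels.txt -o reel.wav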
@Mahedi-61
Mahedi-61 / cuda_11.8_installation_on_Ubuntu_22.04
Last active October 28, 2024 20:42
Instructions for CUDA v11.8 and cuDNN 8.9.7 installation on Ubuntu 22.04 for PyTorch 2.1.2
#!/bin/bash
### steps ####
# Verify the system has a cuda-capable gpu
# Download and install the nvidia cuda toolkit and cudnn
# Setup environmental variables
# Verify the installation
###
### to verify your gpu is cuda enabled, check:
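lspci | grep -i nvidia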
@carlthome
carlthome / Signal reconstruction from spectrograms.ipynb
Created May 31, 2018 13:53
Try to recover audio from filtered magnitudes when phase information has been lost.
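One standard approach to this problem is Griffin-Lim phase recovery; below is a minimal sketch using librosa (not necessarily the notebook's exact method, and the file names are placeholders):

import numpy as np
import librosa
import soundfile as sf

# load audio, compute a magnitude spectrogram, and deliberately discard phase
y, sr = librosa.load('input.wav', sr=None)
S = np.abs(librosa.stft(y, n_fft=2048, hop_length=512))

# Griffin-Lim iteratively estimates a phase consistent with the magnitudes
y_rec = librosa.griffinlim(S, n_iter=64, hop_length=512)
sf.write('reconstructed.wav', y_rec, sr)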
@f0k
f0k / LICENSE
Last active January 15, 2023 22:32
STFT Benchmarks on CPU and GPU in Python
MIT License
Copyright (c) 2017 Jan Schlüter
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
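Only the gist's LICENSE file is excerpted above; as a sketch of the kind of measurement the title describes, here is a rough CPU-vs-GPU STFT timing comparison (array sizes and parameters are arbitrary, and a CUDA-capable GPU is assumed):

import time
import numpy as np
import scipy.signal
import torch

x = np.random.randn(16, 48000).astype(np.float32)  # a batch of 1 s signals

t0 = time.perf_counter()
scipy.signal.stft(x, nperseg=1024, noverlap=768)
cpu_s = time.perf_counter() - t0

xt = torch.from_numpy(x).cuda()
torch.cuda.synchronize()  # exclude the host-to-device transfer from the timing
t0 = time.perf_counter()
torch.stft(xt, n_fft=1024, hop_length=256, return_complex=True)
torch.cuda.synchronize()
gpu_s = time.perf_counter() - t0

print(f'scipy CPU: {cpu_s:.4f} s, torch GPU: {gpu_s:.4f} s')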
@victor-shepardson
victor-shepardson / pytorch-glumpy.py
Last active October 31, 2024 06:28
using pycuda and glumpy to draw pytorch GPU tensors to the screen without copying to host memory
from contextlib import contextmanager
import numpy as np
import torch
from torch import Tensor, ByteTensor
import torch.nn.functional as F
from torch.autograd import Variable  # 2017-era API; Variable is now merged into Tensor
import pycuda.driver
from pycuda.gl import graphics_map_flags  # flags for CUDA/OpenGL interop mapping
from glumpy import app, gloo, gl  # window, shader, and GL bindings
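Only the import block is shown above. The core trick is to register an OpenGL buffer with CUDA so the tensor's device memory can be copied straight into it; a rough sketch of that pattern follows (GL context and buffer setup are omitted, and a real implementation would cache the registration rather than redo it per frame):

import pycuda.driver as cuda
from pycuda.gl import RegisteredBuffer, graphics_map_flags

def tensor_to_gl_buffer(tensor, gl_buffer_id):
    # register the GL buffer with CUDA, then map it to get a device pointer
    reg = RegisteredBuffer(int(gl_buffer_id), graphics_map_flags.WRITE_DISCARD)
    mapping = reg.map()
    dst_ptr, size = mapping.device_ptr_and_size()
    nbytes = min(size, tensor.numel() * tensor.element_size())
    # device-to-device copy: the tensor never touches host memory
    cuda.memcpy_dtod(dst_ptr, tensor.contiguous().data_ptr(), nbytes)
    mapping.unmap()
    reg.unregister()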
@shagunsodhani
shagunsodhani / Batch Normalization.md
Last active July 25, 2023 18:07
Notes for "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift" paper

The Batch Normalization paper describes a method for addressing several issues that arise when training deep neural networks. It makes normalization a part of the architecture itself and reports significant improvements in the number of iterations required to train the network.

Issues With Training Deep Neural Networks

Internal Covariate Shift

Covariate shift refers to a change in the input distribution of a learning system. In a deep network, the input to each layer is affected by the parameters of all the preceding layers, so even a small change to those parameters gets amplified as it propagates through the network. The resulting change in the input distribution of the network's internal layers is known as internal covariate shift.

It is well established that networks converge faster if their inputs are whitened (i.e., zero mean, unit variance) and uncorrelated; internal covariate shift produces exactly the opposite conditions.
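Batch Normalization counters this by normalizing each feature over the mini-batch and then applying a learned scale and shift, y = gamma * (x - mu) / sqrt(var + eps) + beta. A minimal numpy sketch of the transform (training-mode statistics only; the running averages used at inference are omitted):

import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # x: (batch, features); normalize each feature over the mini-batch
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)  # zero mean, unit variance per feature
    return gamma * x_hat + beta            # learned scale and shift

x = np.random.randn(64, 10) * 3.0 + 5.0    # activations with shifted statistics
y = batch_norm(x, gamma=np.ones(10), beta=np.zeros(10))
print(y.mean(axis=0).round(3), y.std(axis=0).round(3))  # ~0 and ~1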