EXPLOSION
We live in a strange time. Extraordinary events keep happening that undermine the stability of our world. Suicide bombs, waves of refugees, Donald Trump, Vladimir Putin, even Brexit.
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "debian7"
  config.vm.box_url = "https://dl.dropboxusercontent.com/s/xymcvez85i29lym/vagrant-debian-wheezy64.box"
  config.vm.network :forwarded_port, host: 4000, guest: 4000
  config.vm.provision :shell, :path => "bootstrap.sh"
  config.ssh.forward_agent = true
end
# A minimal generator demo: each next() resumes work() up to its next yield.
def work():
    print(0); yield
    print(1); yield
    print(2); yield

# Only the first two steps run: the loop breaks after i == 1, so 2 is never printed.
worker = work()
for i in range(2+1):
    next(worker)
    if i == 1: break
# -*- coding: utf-8 -*-
"""
To use this, drop the file
'Full Results - Stack Overflow Developer Survey - 2015.csv'
from
https://drive.google.com/file/d/0Bzd_CzYvUxE5U1NSWnA2SFVKX00/view
"""
import scipy as sp
from scipy.special import gammaln

def log_marginal(p, n, alpha=2):
    """Log-marginal probability of `p` positive trials and `n` negative trials from a
    beta-binomial model with prior strength `alpha`. See
    http://www.cs.ubc.ca/~murphyk/Teaching/CS340-Fall06/reading/bernoulli.pdf
    for details.
    """
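The body of `log_marginal` is cut off above. Under the beta-binomial model the docstring cites, the marginal likelihood of `p` positives and `n` negatives with a Beta(alpha, alpha) prior is B(p + alpha, n + alpha) / B(alpha, alpha), where B(a, b) = Γ(a)Γ(b)/Γ(a + b). A plausible reconstruction of the missing body (an assumption, not the original code) is:

```python
from scipy.special import gammaln

def log_beta(a, b):
    # log B(a, b) = log Γ(a) + log Γ(b) - log Γ(a + b)
    return gammaln(a) + gammaln(b) - gammaln(a + b)

def log_marginal(p, n, alpha=2):
    # Reconstructed body (assumption): the beta-binomial marginal likelihood
    # B(p + alpha, n + alpha) / B(alpha, alpha), in log space. The binomial
    # coefficient is omitted since it cancels when comparing hypotheses on the same data.
    return log_beta(p + alpha, n + alpha) - log_beta(alpha, alpha)
```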
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sun Feb 5 11:01:52 2017
@author: andyjones
"""
import scipy as sp
import matplotlib.pyplot as plt
import requests
import pandas as pd
from io import BytesIO
import matplotlib as mpl
import matplotlib.pyplot as plt

# Fetch the HadCET daily Central England Temperature file and parse the
# whitespace-separated table: one row per (year, day-of-month), one column per month.
url = 'https://www.metoffice.gov.uk/hadobs/hadcet/cetdl1772on.dat'
raw = pd.read_csv(BytesIO(requests.get(url).content), sep=r'\s+', header=None)
raw.columns = ['year', 'day'] + list(range(1, 13))
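The snippet stops after labelling the columns. As a hedged continuation (not part of the original, and assuming the Met Office daily format of tenths of a degree Celsius with -999 padding for days that do not exist in a month), the grid can be melted into a single daily series:

```python
# Assumptions: -999 marks non-existent days and values are tenths of a degree C;
# check the legend on the HadCET page before relying on either.
daily = raw.melt(id_vars=['year', 'day'], var_name='month', value_name='temp')
daily = daily[daily.temp != -999]
daily['date'] = pd.to_datetime(daily[['year', 'month', 'day']])
series = daily.set_index('date')['temp'].sort_index() / 10.0
```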
| """ | |
| This is a standalone script for demonstrating some memory leaks that're troubling me. It's a torn-down version of | |
| the project I'm currently working on. | |
| To run this, you'll need panda3d, pandas and tqdm. You should be able to install these with | |
| ``` | |
| pip install panda3d pandas tqdm | |
| ``` | |
| You'll **also need to enable memory tracking**. Do that by setting `track-memory-usage 1` in `panda3d.__file__`'s | |
| `etc/Config.prc` file. Setting it anywhere else is too late! (It's a unique setting in that way - thanks rdb!) |
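As a small aid to that instruction, here is a hedged sketch (not part of the original script) for locating that `Config.prc` and appending the setting; it assumes the pip-installed layout where `etc/Config.prc` sits next to `panda3d.__file__`, as the docstring above describes:

```python
# Sketch only: find panda3d's bundled Config.prc and add `track-memory-usage 1` to it.
import os
import panda3d

prc_path = os.path.join(os.path.dirname(panda3d.__file__), 'etc', 'Config.prc')
print(prc_path)

# Append the setting (or edit the file by hand); it has to be set here, before panda3d starts.
with open(prc_path, 'a') as f:
    f.write('\ntrack-memory-usage 1\n')
```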
| """This script should be run on a machine with at least 2 GPUs and an MPS server running. You can launch an MPS daemon with | |
| ``` | |
| nvidia-cuda-mps-control -d | |
| ``` | |
| The script first uses `test_cuda` to verify a CUDA context can be created on each GPU. It then spawns two workers; a | |
| 'good' worker and a 'bad' worker. The workers collaborate through Pytorch's DataDistributedParallel module to calculate | |
| the gradient for a trivial computation. The 'good' worker carries out both the forward and backward pass, while the | |
| bad worker carries out the forward pass and then exits. This seems to lock up the MPS server, and any subsequent | |
| attempts to create CUDA contexts fail by hanging eternally. |
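The worker code itself isn't shown here, so the following is only a hedged sketch of the pattern that docstring describes; the process-group settings, the trivial `nn.Linear` model, and the `worker` helper name are assumptions rather than the original script:

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def worker(rank, world_size):
    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '29500'
    dist.init_process_group('nccl', rank=rank, world_size=world_size)

    # A trivial DDP computation: both workers run the forward pass.
    model = DDP(torch.nn.Linear(1, 1).to(rank), device_ids=[rank])
    loss = model(torch.ones(1, 1, device=rank)).sum()

    if rank == 1:
        return  # the 'bad' worker exits here, before the backward pass

    loss.backward()  # the 'good' worker's backward now stalls, since its peer is gone

if __name__ == '__main__':
    mp.spawn(worker, args=(2,), nprocs=2)
```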