James Bradbury (jekbradbury)

@jblumenau
jblumenau / party_constituency_vote_shares.csv
Created June 9, 2017 02:40
Constituency-level vote share predictions from YouGov MRP GE2017 model
"","code","constituency","Con","Lab","LD","UKIP","Green","SNP","PC","Other","Con_lo","Lab_lo","LD_lo","UKIP_lo","Green_lo","SNP_lo","PC_lo","Other_lo","Con_hi","Lab_hi","LD_hi","UKIP_hi","Green_hi","SNP_hi","PC_hi","Other_hi"
"1","E14000530","Aldershot","51.8","29.1","11.0","5.5","2.6","0.0","0.0","0.0","44.4","22.2","6.7","3.0","1.3","0.0","0.0","0.0","58.4","36.4","16.6","8.5","4.8","0.0","0.0","0.0"
"2","E14000531","Aldridge-Brownhills","62.3","29.3","4.8","0.0","0.0","0.0","0.0","3.5","53.9","21.4","2.4","0.0","0.0","0.0","0.0","0.6","69.3","36.5","8.4","0.0","0.0","0.0","0.0","10.8"
"3","E14000532","Altrincham and Sale West","48.6","35.2","11.1","0.0","1.9","0.0","0.0","3.2","41.3","28.5","6.7","0.0","0.8","0.0","0.0","0.6","56.0","42.0","17.0","0.0","3.8","0.0","0.0","8.3"
"4","E14000533","Amber Valley","51.3","40.7","3.2","0.0","2.2","0.0","0.0","2.7","43.9","33.7","1.5","0.0","0.8","0.0","0.0","0.3","58.0","47.4","5.7","0.0","4.7","0.0","0.0","8.8"
"5","E14000534","Arundel and South Downs","58.7","21.
@jblumenau
jblumenau / party_constituency_vote_shares_may_19.csv
Created June 12, 2017 16:47
party_constituency_vote_shares_may_19
"","code","constituency","Con","Lab","LD","UKIP","Green","SNP","PC","Other","Con_hi","Lab_hi","LD_hi","UKIP_hi","Green_hi","SNP_hi","PC_hi","Other_hi","Con_lo","Lab_lo","LD_lo","UKIP_lo","Green_lo","SNP_lo","PC_lo","Other_lo"
"1","E14000530","Aldershot","55.0","25.5","12.3","4.7","2.5","0.0","0.0","0.0","61.3","32.8","17.1","7.1","4.3","0.0","0.0","0.0","48.7","19.4","8.8","2.7","1.4","0.0","0.0","0.0"
"2","E14000531","Aldridge-Brownhills","62.8","26.8","5.4","0.0","0.0","0.0","0.0","5.0","68.2","31.9","8.3","0.0","0.0","0.0","0.0","13.9","56.9","21.2","3.4","0.0","0.0","0.0","0.0","1.0"
"3","E14000532","Altrincham and Sale West","52.9","33.7","10.0","0.0","2.2","0.0","0.0","1.1","59.2","41.4","14.8","0.0","3.9","0.0","0.0","3.3","47.5","26.7","6.2","0.0","1.2","0.0","0.0","0.3"
"4","E14000533","Amber Valley","55.3","34.5","4.7","0.0","1.8","0.0","0.0","3.7","61.5","42.1","8.1","0.0","3.5","0.0","0.0","9.9","48.6","28.2","2.4","0.0","0.7","0.0","0.0","0.8"
"5","E14000534","Arundel and South Downs","60.9","17.
@JonathanRaiman
JonathanRaiman / plan.py
Last active November 27, 2018 02:25
Dali graph transformation Plan
"""
Micro-dali JIT Plan:
- contains gemm, operator fusion, elementwise/reduction ops.
- supports tensordot
- supports 'jit'
- supports conversion from gemm + im2col to conv2d (NHWC)
- supports 'optimization' passes
- supports 'implementation' registries for specialization
  (e.g. int vs float)
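The "gemm + im2col to conv2d (NHWC)" conversion mentioned above rests on a standard identity: a convolution over NHWC input can be computed by unrolling input patches (im2col) and then performing a single matrix multiply. The NumPy sketch below is purely illustrative of that correspondence and is unrelated to Dali's actual implementation.

# Stride-1, no-padding conv2d over NHWC input expressed as im2col + one GEMM.
import numpy as np

def im2col_nhwc(x, kh, kw):
    # Unroll (N, H, W, C) into (N, H_out, W_out, kh*kw*C) patches.
    n, h, w, c = x.shape
    h_out, w_out = h - kh + 1, w - kw + 1
    cols = np.empty((n, h_out, w_out, kh * kw * c), dtype=x.dtype)
    for i in range(h_out):
        for j in range(w_out):
            cols[:, i, j, :] = x[:, i:i + kh, j:j + kw, :].reshape(n, -1)
    return cols

def conv2d_via_gemm(x, w):
    # x: (N, H, W, C_in), w: (kh, kw, C_in, C_out) -> (N, H_out, W_out, C_out)
    kh, kw, c_in, c_out = w.shape
    cols = im2col_nhwc(x, kh, kw)
    return cols @ w.reshape(kh * kw * c_in, c_out)  # the GEMM

x = np.random.randn(2, 8, 8, 3)
w = np.random.randn(3, 3, 3, 4)
print(conv2d_via_gemm(x, w).shape)  # (2, 6, 6, 4)
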
@MInner
MInner / gpu_profile.py
Created September 12, 2017 16:11
A script to generate a per-line GPU memory usage trace. For more meaningful results, set `CUDA_LAUNCH_BLOCKING=1`.
import datetime
import linecache
import os
import pynvml3
import torch
print_tensor_sizes = True
last_tensor_sizes = set()
gpu_profile_fn = f'{datetime.datetime.now():%d-%b-%y-%H:%M:%S}-gpu_mem_prof.txt'
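A per-line memory trace like this is typically built by installing a Python trace function that samples GPU memory on every executed line and logs the change next to the source line; the preview above only shows the setup (log file name, NVML access via pynvml3, and a flag for dumping live tensor sizes). Below is a stripped-down sketch of that mechanism using only torch's allocator counter; it is an assumption-laden simplification, not the gist's code.

# Sketch: log per-line changes in torch's allocated GPU memory.
import linecache
import sys
import torch

_last_mem = 0

def gpu_trace(frame, event, arg):
    global _last_mem
    if event == "line" and torch.cuda.is_available():
        mem = torch.cuda.memory_allocated() // 1024 ** 2  # MiB
        if mem != _last_mem:
            src = linecache.getline(frame.f_code.co_filename, frame.f_lineno).rstrip()
            print(f"{mem:6d} MiB ({mem - _last_mem:+d}) "
                  f"{frame.f_code.co_filename}:{frame.f_lineno} {src}")
            _last_mem = mem
    return gpu_trace

# Usage: call sys.settrace(gpu_trace) before the code to profile and
# sys.settrace(None) afterwards.
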
import torch
from torch.autograd import Variable

leaves = [Variable(torch.zeros(5, 5), requires_grad=True) for _ in range(10)]
intermediates = [l + i for i, l in enumerate(leaves)]
loss = sum(v * i for i, v in enumerate(intermediates)).sum()

# define a helper for dividing intermediates into groups
def group(l, group_size):
    """Groups l into chunks of size group_size."""
    return [l[i:i + group_size] for i in range(0, len(l), group_size)]
@simonbyrne
simonbyrne / fisher.jl
Created October 4, 2017 05:25
Computing Fisher information via forward-mode automatic differentiation
using Distributions
import ForwardDiff: Dual, value, partials

@generated function get_values(a::NTuple{N}) where {N}
    return ForwardDiff.tupexpr(i -> :(value(a[$i])), N)
end

ForwardDiff.value(p::ForwardDiff.Partials) =
    ForwardDiff.Partials(get_values(p.values))

Status quo

Currently the AbstractArray type hierarchy has three major subtype trees:

  • DenseArray
  • AbstractSparseArray
  • AbstractRange

In addition, we have the StridedArray typealias, which effectively “adds” strided SubArrays and ReshapedArrays as pseudo-subtypes of DenseArrays.

We also have the IndexStyle trait.

@zou3519
zou3519 / hang.py
Created October 19, 2017 02:17
This script gets stuck, but only on some machines...
from torch import nn
from torch.autograd import Variable
import torch
l = nn.Linear(5,5).cuda()
pl = nn.DataParallel(l)
print("Checkpoint 1")
a = Variable(torch.rand(5,5).cuda(), requires_grad=True)
print("Checkpoint 2")
print(pl(a)) # Here it gets stuck
@apaszke
apaszke / Rop.py
Last active January 16, 2023 07:20
import torch

def Rop(y, x, v):
    """Computes an Rop (Jacobian-vector product).

    Arguments:
        y (Variable): output of differentiated function
        x (Variable): differentiated input
        v (Variable): vector to be multiplied with Jacobian from the right
    """
    w = torch.ones_like(y, requires_grad=True)
    # grad(y, x, w) is J^T w; create_graph keeps it differentiable in w,
    # so differentiating it against w with cotangent v yields J v.
    return torch.autograd.grad(torch.autograd.grad(y, x, w, create_graph=True), w, v)
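A small sanity check (not from the gist): for y = W x with a constant matrix W, the Jacobian is W, so Rop(y, x, v) should return W v.

W = torch.randn(3, 3)
x = torch.randn(3, requires_grad=True)
v = torch.randn(3)

y = W @ x
jvp, = Rop(y, x, v)
print(torch.allclose(jvp, W @ v))  # True
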
# To ensure correct alignment (e.g. for an 80-bit type)
struct StrWrap{T}
    value::T
end

function unsafe_reinterpret(T, a::A) where {A}
    if sizeof(T) <= sizeof(A)
        r = Ref(a)
        Base.@gc_preserve r begin
            u = convert(Ptr{T}, Base.unsafe_convert(Ptr{A}, r))