Jacob Reinhold jcreinhold

@jcreinhold
jcreinhold / test_zero_init.py
Created March 26, 2021 19:35
Test how zero init affects weights and bias
#!/usr/bin/env python
from torch import nn
from torch.nn import functional as F
import torch
test_weight_zero = True
test_bias_zero = False
n_steps = 3
############################### 1-layer conv network ###########################
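The preview above only shows the flags, so here is a hedged sketch of what such an experiment might look like: zero-initialize the weight (and optionally the bias) of a 1-layer conv network, then take a few SGD steps and watch whether the parameters move. The network shape, loss, and data below are assumptions for illustration, not the gist's actual code.

```python
#!/usr/bin/env python
"""Hypothetical sketch: does zero-initializing the weight (or bias) of a
1-layer conv network prevent it from learning?"""
import torch
from torch import nn

torch.manual_seed(0)
test_weight_zero = True
test_bias_zero = False
n_steps = 3

# 1-layer conv network (shape assumed for illustration)
conv = nn.Conv2d(1, 1, kernel_size=3, padding=1)
if test_weight_zero:
    nn.init.zeros_(conv.weight)
if test_bias_zero:
    nn.init.zeros_(conv.bias)

opt = torch.optim.SGD(conv.parameters(), lr=0.1)
x = torch.randn(4, 1, 8, 8)
y = torch.randn(4, 1, 8, 8)
for step in range(n_steps):
    opt.zero_grad()
    loss = ((conv(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
    # track how far the zero-initialized parameters have moved
    print(step, conv.weight.abs().sum().item(), conv.bias.abs().sum().item())
```

For a single layer, zero weight init does not stall learning: the gradient with respect to the weight depends on the input and the residual, not on other (zero) weights, so the weight becomes nonzero after the first step. The symmetry problem that motivates random init only bites in deeper networks.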
@jcreinhold
jcreinhold / uncertain_multiclass.py
Last active March 26, 2021 14:33
uncertainty estimation for multiple class classification
from typing import Tuple
import torch
from torch import nn
import torch.nn.functional as F
activation = nn.ReLU
class UncertainLinear(nn.Module):
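The preview cuts off at the class definition, so the following is a hedged completion of what an `UncertainLinear` head could look like: a linear head predicting a mean and a log-variance per class, so logits can be sampled for a Monte Carlo estimate of predictive uncertainty (in the spirit of Kendall & Gal, 2017). The hidden width and method names other than `UncertainLinear` are assumptions.

```python
"""Hypothetical completion: a linear head that predicts a mean and a
log-variance per class; sampling logits gives an MC estimate of
predictive uncertainty for multiclass classification."""
from typing import Tuple

import torch
from torch import nn
import torch.nn.functional as F

activation = nn.ReLU

class UncertainLinear(nn.Module):
    def __init__(self, in_features: int, n_classes: int, hidden: int = 32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_features, hidden), activation())
        self.mu = nn.Linear(hidden, n_classes)       # mean logits
        self.log_var = nn.Linear(hidden, n_classes)  # log-variance of logits

    def forward(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
        h = self.body(x)
        return self.mu(h), self.log_var(h)

    @torch.no_grad()
    def predict(self, x: torch.Tensor, n_samples: int = 10) -> torch.Tensor:
        """MC-average softmax over logits sampled from N(mu, exp(log_var))."""
        mu, log_var = self(x)
        std = (0.5 * log_var).exp()
        samples = torch.stack(
            [F.softmax(mu + std * torch.randn_like(std), dim=-1)
             for _ in range(n_samples)])
        return samples.mean(0)  # averaged class probabilities
```

Averaging softmax samples (rather than softmaxing averaged logits) is what lets the predicted log-variance spread probability mass across classes for uncertain inputs.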
@jcreinhold
jcreinhold / tech_revenue.ipynb
Created March 8, 2021 01:57
Combined revenue for big tech companies through the second half of 2020
@jcreinhold
jcreinhold / tech_revenue.csv
Created March 7, 2021 23:06
Revenue for Alphabet, Amazon, Apple, Facebook, Microsoft
Year_Half Alphabet Amazon Apple Facebook Microsoft
2020-H2 103,071,000,000 221,700,000,000 176,137,000,000 49,542,000,000 80,230,000,000
2014-H1 31,375,000,000 39,081,000,000 83,078,000,000 5,412,000,000 43,785,000,000
2013-H1 26,058,000,000 31,774,000,000 78,926,000,000 3,271,000,000 40,385,000,000
2015-H2 40,004,000,000 61,105,000,000 127,373,000,000 10,342,000,000 44,175,000,000
2017-H1 50,760,000,000 73,669,000,000 98,304,000,000 17,353,000,000 48,817,000,000
2012-H1 22,452,000,000 26,019,000,000 74,209,000,000 2,242,000,000 35,466,000,000
2020-H1 79,456,000,000 164,364,000,000 117,998,000,000 36,424,000,000 73,054,000,000
2012-H2 27,723,000,000 35,074,000,000 90,478,000,000 2,847,000,000 37,464,000,000
2014-H2 34,626,000,000 49,907,000,000 116,722,000,000 7,054,000,000 49,671,000,000
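The companion notebook can't be rendered, but the computation its title suggests is straightforward: sum the five companies' revenue per half-year to get combined big-tech revenue. A minimal sketch, using two rows taken from the CSV above (the column layout is assumed from the header):

```python
"""Sum per-company revenue into combined big-tech revenue per half-year."""
import csv
import io

# two rows copied from the CSV above, for illustration
data = """Year_Half,Alphabet,Amazon,Apple,Facebook,Microsoft
2012-H1,22452000000,26019000000,74209000000,2242000000,35466000000
2020-H1,79456000000,164364000000,117998000000,36424000000,73054000000
"""

combined = {}
for row in csv.DictReader(io.StringIO(data)):
    half = row.pop("Year_Half")
    combined[half] = sum(int(v) for v in row.values())

print(combined)  # {'2012-H1': 160388000000, '2020-H1': 471296000000}
```

Even these two rows show the headline trend: combined half-year revenue for the five companies roughly tripled between 2012-H1 and 2020-H1.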
@jcreinhold
jcreinhold / tiramisu3d_only_flair.py
Created March 6, 2021 17:07
Neural network (3D Tiramisu) for FLAIR-based T2-lesion segmentation
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
3D Tiramisu network for FLAIR-based T2-lesion segmentation
This code is unfortunately a huge mess. However, given CSV files with
the appropriate setup, you can run the command below (starting with
"python -u ...") to train the network used to produce the
segmentation results in the paper:
"A Structural Causal Model for MR Images of Multiple Sclerosis"
@jcreinhold
jcreinhold / it_quiz_stationary.py
Last active November 14, 2020 00:06
Information theory homework quiz solution for stationary distribution
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Below we calculate and print the stationary distribution
for the transition matrix P with parameter p.
Note that A is built from I - P^T with an additional
row of 1's representing the condition that the
stationary distribution must sum to 1.
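The trick described in the docstring can be sketched numerically: stationarity gives (Pᵀ − I)π = 0, which is rank-deficient, so one equation is replaced by the sum-to-1 constraint before solving. The 2-state chain below (a concrete value of p, assumed for illustration) is symmetric, so its stationary distribution is known to be uniform.

```python
"""Solve for the stationary distribution pi of a Markov chain:
(P^T - I) pi = 0, with one row swapped for the constraint sum(pi) = 1."""
import numpy as np

p = 0.3
P = np.array([[1 - p, p],
              [p, 1 - p]])  # symmetric 2-state transition matrix

A = P.T - np.eye(2)   # stationarity condition: (P^T - I) pi = 0
A[-1, :] = 1.0        # replace the last equation with sum(pi) = 1
b = np.array([0.0, 1.0])
pi = np.linalg.solve(A, b)
print(pi)  # symmetric chain -> uniform stationary distribution [0.5 0.5]
```

The row replacement works because the stationarity equations are linearly dependent (each column of Pᵀ − I sums to zero), so dropping one loses nothing, and the ones-row restores full rank.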
@jcreinhold
jcreinhold / gan_toy_examples.ipynb
Created September 20, 2020 18:09
Fitting a toy distribution with a variety of GANs
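Since the notebook itself can't be rendered here, the following is an assumed minimal version of the exercise: fitting a 1D Gaussian with the original (non-saturating) GAN objective. Every architectural choice below (network sizes, target distribution, optimizer settings) is an illustration, not the notebook's actual content.

```python
"""Minimal GAN sketch: fit a 1D Gaussian with the non-saturating objective."""
import torch
from torch import nn

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real = 2.0 + 0.5 * torch.randn(64, 1)  # target distribution: N(2, 0.5^2)
    fake = G(torch.randn(64, 2))           # samples from the generator

    # discriminator step: push real -> 1, fake -> 0
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # generator step (non-saturating): make D label fakes as real
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(float(G(torch.randn(512, 2)).mean()))  # should drift toward 2.0
```

Detaching `fake` in the discriminator step is the key bookkeeping detail: it keeps the discriminator update from backpropagating into the generator.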
@jcreinhold
jcreinhold / reading_memory_dnn.ipynb
Last active August 10, 2021 06:27
Reading memory in a DNN with ST Gumbel-softmax
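The notebook can't be displayed, but the technique named in the title can be sketched: read one row of a memory matrix with a straight-through (hard) Gumbel-softmax, so the discrete slot choice stays differentiable in the backward pass. The memory shape and logits below are assumptions for illustration.

```python
"""Differentiable discrete memory read via straight-through Gumbel-softmax."""
import torch
import torch.nn.functional as F

torch.manual_seed(0)
memory = torch.randn(8, 16)                   # 8 memory slots of dimension 16
logits = torch.randn(1, 8, requires_grad=True)  # unnormalized slot scores

# hard=True: the forward pass is an exact one-hot read, while the backward
# pass uses the soft Gumbel-softmax sample (the "straight-through" trick)
one_hot = F.gumbel_softmax(logits, tau=1.0, hard=True)
read = one_hot @ memory                       # (1, 16): the selected slot

read.sum().backward()
print(one_hot.sum().item())  # exactly one slot selected: 1.0
```

Because the straight-through estimator routes gradients through the soft sample, `logits.grad` is populated even though the forward read was a hard, discrete selection.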
@jcreinhold
jcreinhold / attention.py
Created July 15, 2020 14:53
Grid Attention Block in PyTorch
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
grid attention blocks for gated attention networks
Based on: https://github.com/ozan-oktay/Attention-Gated-Networks
Author: Jacob Reinhold ([email protected])
"""
__all__ = ['GridAttentionBlock2d',
'GridAttentionBlock3d']
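Only the module docstring and `__all__` survive in the preview, so here is a simplified sketch of what a 2D grid attention gate looks like, following the Oktay et al. repository the gist cites: a gating signal `g` weights the spatial positions of the skip features `x`. Channel counts and the exact down/upsampling choices are assumptions.

```python
"""Simplified 2D grid attention gate (after Oktay et al.,
https://github.com/ozan-oktay/Attention-Gated-Networks)."""
import torch
from torch import nn
import torch.nn.functional as F

class GridAttentionBlock2d(nn.Module):
    def __init__(self, in_ch: int, gate_ch: int, inter_ch: int):
        super().__init__()
        self.theta = nn.Conv2d(in_ch, inter_ch, 2, stride=2, bias=False)
        self.phi = nn.Conv2d(gate_ch, inter_ch, 1)
        self.psi = nn.Conv2d(inter_ch, 1, 1)

    def forward(self, x: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        theta_x = self.theta(x)  # downsample skip features to gating resolution
        phi_g = F.interpolate(self.phi(g), size=theta_x.shape[2:],
                              mode='bilinear', align_corners=False)
        # additive attention: compatibility of skip features and gating signal
        att = torch.sigmoid(self.psi(F.relu(theta_x + phi_g)))
        att = F.interpolate(att, size=x.shape[2:],  # back to x's resolution
                            mode='bilinear', align_corners=False)
        return x * att  # gated skip connection

block = GridAttentionBlock2d(in_ch=8, gate_ch=16, inter_ch=4)
out = block(torch.randn(1, 8, 16, 16), torch.randn(1, 16, 8, 8))
print(out.shape)  # torch.Size([1, 8, 16, 16])
```

The 3D variant referenced in `__all__` follows the same pattern with `Conv3d` and trilinear interpolation.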
@jcreinhold
jcreinhold / tiramisu.py
Last active January 28, 2022 17:49
Tiramisu 2D/3D in PyTorch
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
PyTorch implementation of the Tiramisu network architecture [1]
(2D) Implementation based on [2].
Changes from [2] include:
1) removal of bias from conv layers,
2) change zero padding to replication padding,
3) use of GELU for default activation,
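The three changes listed above can be sketched as a single dense-block conv layer (the BN-activation-conv ordering is the usual DenseNet/Tiramisu convention; the function name is assumed):

```python
"""One Tiramisu conv layer with the three listed changes:
no bias, replication padding, GELU activation."""
import torch
from torch import nn

def conv_layer(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.BatchNorm2d(in_ch),
        nn.GELU(),                         # change 3: GELU as default activation
        nn.Conv2d(in_ch, out_ch, 3,
                  bias=False,              # change 1: no bias in conv layers
                  padding=1,
                  padding_mode='replicate'),  # change 2: replication padding
    )

layer = conv_layer(4, 8)
out = layer(torch.randn(1, 4, 16, 16))
print(out.shape)  # torch.Size([1, 8, 16, 16])
```

Dropping the conv bias is harmless here because each conv is preceded by a BatchNorm whose affine shift subsumes it, and replication padding avoids the dark border artifacts zero padding can introduce in dense segmentation outputs.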