A solid Git pull request workflow will keep you from running into issues when contributing to projects of interest. At its core, the idea is simple: keep a local master branch solely as a means of pulling the latest official updates from the project's official Git repo, and create new branches from it to work on your desired changes. Always open PRs from these new branches; once a PR is merged into the official repo, switch back to master, pull the official changes, and check out a brand new branch for the next item you wish to work on.
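As a concrete sketch of that loop (the remote and branch names below are placeholders, not from the original note; it assumes the official repo is configured as an "upstream" remote):
git checkout master
git pull upstream master       # grab the latest official changes
git checkout -b my-feature     # hypothetical topic branch; commit, push, and open the PR from here
git checkout master            # after the PR is merged
git pull upstream master
git checkout -b next-feature   # start the next item from fresh official history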
# NumPy memory experiments
import numpy as np

a = np.random.rand(1, 2, 3)
b = np.asarray(a)  # no copy!
c = np.array(a)  # copy!

# The array interface is a dict whose 'data' key returns a tuple
# containing a pointer to the data's memory address and a
# read-only flag.
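A quick way to confirm the copy behavior above (a small sketch; np.shares_memory and __array_interface__ are standard NumPy APIs):
# Check whether b and c share memory with a.
print(np.shares_memory(a, b))  # True: asarray returned a's existing buffer
print(np.shares_memory(a, c))  # False: array made a fresh copy

# Same conclusion via the array interface: compare data pointers.
print(a.__array_interface__['data'])  # (address, read-only flag)
print(b.__array_interface__['data'])  # same address as a
print(c.__array_interface__['data'])  # different address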
rsync -auzPhv --delete --exclude-from=rsync_exclude.txt SOURCE/ DEST/ -n
-a -> --archive; recursively sync, preserving symbolic links and all file metadata
-u -> --update; skip files that are newer on the receiver; sometimes this is inaccurate (due to Git, I think...)
-z -> --compress; compression
-P -> --progress + --partial; show progress bar and resume interrupted transfers
-h -> --human-readable; human-readable format
-v -> --verbose; verbose output
-n -> --dry-run; dry run; use this to test, and then remove to actually execute the sync
This gist aims to explore interesting scenarios that may be encountered while training machine learning models.
Let's imagine a scenario where the validation accuracy and loss both begin to increase. Intuitively, it seems like this scenario should not happen, since loss and accuracy seem like they would have an inverse relationship. Let's explore this a bit in the context of a binary classification problem in which a model parameterizes a Bernoulli distribution (i.e., it outputs the "probability" of the true class) and is trained with the associated negative log likelihood as the loss function (i.e., the "logistic loss" == "log loss" == "binary cross entropy").
Imagine that when the model predicts a probability of 0.99 for a "true" class, the model is both correct (assuming a decision threshold of 0.5) and has a low loss, since it can't do much better for that example. Now, imagine that the model predicts 0.99 for the wrong class on a few examples: each one counts as only a single misclassification for accuracy, but it incurs a very large log loss. If the model is simultaneously becoming correct on more of the remaining examples, accuracy and loss can both increase at the same time.
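A tiny numeric sketch of this asymmetry (the probabilities here are made up purely for illustration):
import numpy as np

def log_loss(y_true, p):
    """Binary cross entropy for a single predicted probability p of the positive class."""
    return -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

print(log_loss(1, 0.99))  # ~0.01: confident and correct
print(log_loss(1, 0.45))  # ~0.80: wrong at a 0.5 threshold, but a modest loss
print(log_loss(1, 0.01))  # ~4.61: confident and wrong; dominates the mean loss
A handful of very confident mistakes can therefore pull the mean loss upward even while the fraction of correctly classified examples keeps rising.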
import numpy as np

# the function
def f_of_x(X, w):
    n, d = X.shape
    X_dot_w = np.dot(X, w)
    y = np.zeros(n)
    # the inner product randomly goes through a sin or a cos
    cos_flag = np.random.randn(n) < 0.0
    y[cos_flag] = np.cos(X_dot_w[cos_flag])
    y[~cos_flag] = np.sin(X_dot_w[~cos_flag])
    return y
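The tail of the function (choosing sin or cos per example) is filled in from the gist's own comment; a quick hypothetical call to show the expected shapes (dimensions are arbitrary):
X = np.random.rand(100, 5)  # 100 examples, 5 features
w = np.random.rand(5)       # weight vector
y = f_of_x(X, w)
print(y.shape)              # (100,)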
latexmk -pdf -pvc myfile.tex
In neovim, the following command will open up a separate terminal in a small split window to compile the current file:
:sp | resize 5 | term latexmk -pdf -pvc %
- Always include \usepackage[utf8]{inputenc} within every document.
"""Example ImageNet-style resnet training scenario with synthetic data. | |
Author: Mike Dusenberry | |
""" | |
import argparse | |
import numpy as np | |
import tensorflow as tf | |
# args |
"""Example ImageNet-style resnet training scenario with synthetic data. | |
Author: Mike Dusenberry | |
""" | |
import argparse | |
import sys | |
import tensorflow as tf | |
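The original script is truncated after the imports. As a rough sketch only, here is what a synthetic-data ResNet training run could look like; the use of tf.keras.applications.ResNet50, the flag names, and the shapes are my assumptions rather than the original code:
def main():
    parser = argparse.ArgumentParser(description="Synthetic-data ResNet training sketch")
    parser.add_argument("--batch_size", type=int, default=32)  # hypothetical flag
    parser.add_argument("--steps", type=int, default=10)       # hypothetical flag
    args = parser.parse_args(sys.argv[1:])

    # Synthetic ImageNet-shaped data: random images and random integer labels.
    images = tf.random.uniform([args.batch_size, 224, 224, 3])
    labels = tf.random.uniform([args.batch_size], maxval=1000, dtype=tf.int32)
    dataset = (tf.data.Dataset.from_tensor_slices((images, labels))
               .repeat()
               .batch(args.batch_size))

    # ResNet-50 with no pretrained weights and a standard 1000-class head.
    model = tf.keras.applications.ResNet50(weights=None, classes=1000)
    model.compile(optimizer="sgd",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(dataset, steps_per_epoch=args.steps, epochs=1)


if __name__ == "__main__":
    main()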