As configured in my dotfiles.
start new:
tmux
start new with session name:
tmux new -s myname
Below is the Big-O performance of common operations on different Java Collections.
| List                 | Add  | Remove | Get  | Contains | Next | Data Structure |
|----------------------|------|--------|------|----------|------|----------------|
| ArrayList            | O(1) | O(n)   | O(1) | O(n)     | O(1) | Array          |
| LinkedList           | O(1) | O(1)   | O(n) | O(n)     | O(1) | Linked List    |
| CopyOnWriteArrayList | O(n) | O(n)   | O(1) | O(n)     | O(1) | Array          |
Here's a simple implementation of bilinear interpolation on tensors using PyTorch.
I wrote this up since I ended up learning a lot about the options for interpolation in both the numpy and PyTorch ecosystems. Beyond interpolation itself, it's also a nice case study in how PyTorch can magically run very numpy-like code on the GPU (and, by the way, do autodiff for you too).
For interpolation in PyTorch, this open issue calls for more interpolation features. There is now an `nn.functional.grid_sample()` function, but at first it didn't look like what I needed (we'll come back to it later).
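A rough illustration of why it looks mismatched at first: `grid_sample()` expects a batched image of shape `(N, C, H, W)` and a dense grid of sampling locations in normalized `[-1, 1]` coordinates, so sampling at scattered points means shoehorning them in as a degenerate 1 x N grid. This is just a sketch of the interface, not the approach this post takes:

```python
import torch
import torch.nn.functional as F

image = torch.rand(3, 48, 64)        # a (C, H, W) image
points = torch.rand(100, 2) * 2 - 1  # (N, 2) (x, y) coords, normalized to [-1, 1]

# grid_sample wants input (N, C, H, W) and grid (N, H_out, W_out, 2),
# so treat the N points as a single 1 x N "grid" of sample locations.
grid = points.view(1, 1, -1, 2)
out = F.grid_sample(image.unsqueeze(0), grid,
                    mode='bilinear', align_corners=True)  # (1, C, 1, N)
samples = out.view(3, -1).t()        # (N, C) interpolated values
```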
In particular, I wanted to take an image, W x H x C, and sample it many times at different random locations. Note also that this is different from upsampling, which exhaustively samples and also doesn't give us flexibility in choosing the sample locations.
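Here's a minimal sketch of the kind of sampler described above, written with plain tensor indexing so it stays numpy-like. The name `bilinear_sample` and the `(W, H, C)` / `(N, 2)` shapes are my assumptions for illustration, not code from this post:

```python
import torch

def bilinear_sample(image: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
    """Bilinearly sample a (W, H, C) image at N continuous (x, y) points.

    image:  (W, H, C) float tensor
    points: (N, 2) float tensor of (x, y) coordinates in pixel units
    returns an (N, C) tensor of interpolated values
    """
    W, H, _ = image.shape
    x, y = points[:, 0], points[:, 1]

    # Integer corners of the cell containing each point, clamped so the
    # +1 neighbors stay in bounds.
    x0 = x.floor().long().clamp(0, W - 2)
    y0 = y.floor().long().clamp(0, H - 2)
    x1, y1 = x0 + 1, y0 + 1

    # Fractional position of each point inside its cell, shape (N, 1).
    wx = (x - x0.to(x.dtype)).unsqueeze(1)
    wy = (y - y0.to(y.dtype)).unsqueeze(1)

    # The four corner values; advanced indexing gives (N, C) tensors.
    Ia, Ib = image[x0, y0], image[x0, y1]
    Ic, Id = image[x1, y0], image[x1, y1]

    # Interpolate along x at each of the two y rows, then along y.
    top = Ia * (1 - wx) + Ic * wx
    bottom = Ib * (1 - wx) + Id * wx
    return top * (1 - wy) + bottom * wy
```

Sampling, say, 1000 random locations from a 64 x 48 RGB image would then be `bilinear_sample(torch.rand(64, 48, 3), torch.rand(1000, 2) * torch.tensor([63.0, 47.0]))`; because everything is plain tensor arithmetic, moving the inputs to the GPU and backpropagating through the samples both work for free.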
```python
#!/usr/bin/env python
import math

import matplotlib.pyplot as plt
import torch
import torch.nn as nn
from sklearn.datasets import make_moons
from torch import Tensor
from tqdm import tqdm
```