Updated 4/11/2018
Here's my experience installing the NVIDIA CUDA Toolkit 9.0 on a fresh install of Ubuntu Desktop 16.04.4 LTS.
import torch
import torch.nn.functional as F

def top_k_top_p_filtering(logits, top_k=0, top_p=0.0, filter_value=-float('Inf')):
    """ Filter a distribution of logits using top-k and/or nucleus (top-p) filtering
        Args:
            logits: logits distribution shape (vocabulary size)
            top_k > 0: keep only top k tokens with highest probability (top-k filtering).
            top_p > 0.0: keep the top tokens with cumulative probability >= top_p (nucleus filtering).
                Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751)
    """
    assert logits.dim() == 1  # batch size 1 for now - could be updated for more but the code would be less clear
    top_k = min(top_k, logits.size(-1))  # Safety check
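    # --- The snippet above is truncated at this point. Everything below is a hedged sketch of
    # --- how the top-k and nucleus branches are commonly completed, not the original code. ---
    if top_k > 0:
        # Remove all tokens with a probability less than the last token of the top-k
        indices_to_remove = logits < torch.topk(logits, top_k)[0][..., -1, None]
        logits[indices_to_remove] = filter_value
    if top_p > 0.0:
        sorted_logits, sorted_indices = torch.sort(logits, descending=True)
        cumulative_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
        # Remove tokens with cumulative probability above the threshold
        sorted_indices_to_remove = cumulative_probs > top_p
        # Shift the mask right so the first token above the threshold is also kept
        sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone()
        sorted_indices_to_remove[..., 0] = 0
        indices_to_remove = sorted_indices[sorted_indices_to_remove]
        logits[indices_to_remove] = filter_value
    return logits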
$ git clone [email protected]:xxxxx/xxxx.git my-awesome-proj | |
Cloning into 'my-awesome-proj'... | |
ssh: connect to host github.com port 22: Connection timed out | |
fatal: Could not read from remote repository. | |
$ # This should also timeout | |
$ ssh -T [email protected] | |
ssh: connect to host github.com port 22: Connection timed out | |
$ # but this might work |
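$ # (the transcript ends here; the commands below are a hedged sketch of GitHub's documented
$ #  "SSH over the HTTPS port" workaround for blocked port 22, not part of the original capture)
$ ssh -T -p 443 [email protected]
$ # if that authenticates, route all github.com SSH traffic through port 443 by default:
$ printf 'Host github.com\n  Hostname ssh.github.com\n  Port 443\n' >> ~/.ssh/config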
model.zero_grad()                                   # Reset gradients tensors
for i, (inputs, labels) in enumerate(training_set):
    predictions = model(inputs)                     # Forward pass
    loss = loss_function(predictions, labels)       # Compute loss function
    loss = loss / accumulation_steps                # Normalize our loss (if averaged)
    loss.backward()                                 # Backward pass
    if (i+1) % accumulation_steps == 0:             # Wait for several backward steps
        optimizer.step()                            # Now we can do an optimizer step
        model.zero_grad()                           # Reset gradients tensors
        if (i+1) % evaluation_steps == 0:           # Evaluate the model when we...
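            # (the snippet and its trailing comment are cut off above; a hedged guess at the
            #  continuation, with evaluate_model() as a hypothetical helper, would be:)
            evaluate_model()                        # ...have accumulated no gradients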
import torch
from torchvision import datasets

class ImageFolderWithPaths(datasets.ImageFolder):
    """Custom dataset that includes image file paths. Extends
    torchvision.datasets.ImageFolder
    """

    # override the __getitem__ method. this is the method that dataloader calls
    def __getitem__(self, index):
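        # (the method body is missing from the snippet; a hedged sketch of the usual approach,
        #  returning whatever ImageFolder returns plus the file path, would be:)
        original_tuple = super(ImageFolderWithPaths, self).__getitem__(index)
        path = self.imgs[index][0]                  # self.imgs holds (path, class_index) pairs
        return original_tuple + (path,)

# Hypothetical usage: batches become (images, labels, paths). A transform is needed so the
# images collate into tensors; the path and transform below are placeholders.
# dataset = ImageFolderWithPaths("/path/to/images", transform=some_transform)
# loader = torch.utils.data.DataLoader(dataset, batch_size=4)
# images, labels, paths = next(iter(loader))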
# References:
# [1] Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding, Fukui et al., https://arxiv.org/abs/1606.01847
# [2] Compact Bilinear Pooling, Gao et al., https://arxiv.org/abs/1511.06062
# [3] Fast and Scalable Polynomial Kernels via Explicit Feature Maps, Pham and Pagh, https://chbrown.github.io/kdd-2013-usb/kdd/p239.pdf
# [4] Fastfood - Approximating Kernel Expansions in Loglinear Time, Le et al., https://arxiv.org/abs/1408.3060
# [5] Original implementation in Caffe: https://github.com/gy20073/compact_bilinear_pooling
# TODO: migrate to use of new native complex64 types
# TODO: change strided x coo matmul to torch.matmul(): M[sparse_coo] @ M[strided] -> M[strided]
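For context, the count-sketch + FFT trick those references describe can be sketched in a few lines of PyTorch. The function below is purely an illustration under my own assumptions (the name, sketch_dim, and the use of torch.fft are mine); it is not the implementation from [5].

import torch

def compact_bilinear_pooling(x, y, sketch_dim=8192):
    # Count-sketch each 1-D feature vector, multiply in the frequency domain, and invert the
    # FFT: this approximates the flattened outer product of x and y projected down to
    # sketch_dim dimensions (refs [1]-[3]).
    def count_sketch(v, seed):
        g = torch.Generator().manual_seed(seed)
        d = v.size(0)
        h = torch.randint(0, sketch_dim, (d,), generator=g)            # destination bucket per index
        s = (torch.randint(0, 2, (d,), generator=g) * 2 - 1).float()   # random +/-1 signs
        return torch.zeros(sketch_dim).index_add_(0, h, v * s)
    fx = torch.fft.rfft(count_sketch(x, seed=0))
    fy = torch.fft.rfft(count_sketch(y, seed=1))
    return torch.fft.irfft(fx * fy, n=sketch_dim)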
I fell in love with CoffeeScript a couple of years ago. JavaScript had always seemed something of a curiosity to me, and while I was happy to see the meteoric rise of Node.js, coming from a Python background I preferred a cleaner syntax.
In any fast-moving community it is inevitable that things will change, and today we are seeing a big shift toward ES6, the new version of JavaScript. It incorporates a handful of the nicer features from CoffeeScript and is usable today through tools like Babel. Here are some of my thoughts on, and issues with, moving away from CoffeeScript in favor of ES6.
While reading, I suggest keeping a tab open to Babel's Learn ES6 page. The examples there are great.
Holy punctuation, Batman! Say goodbye to your whitespace and hello to parentheses, curly braces, and semicolons again. Even with the advanced ES6 syntax you'll find yourself writing a lot more punctuation.
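To make that concrete, here is a small side-by-side of my own (a hypothetical illustration, not an example from the original post); the CoffeeScript lines appear as comments above their ES6 equivalents.

// CoffeeScript:  square = (x) -> x * x
const square = (x) => x * x;

// CoffeeScript:
//   greet = (name) ->
//     console.log "Hello, #{name}"
const greet = (name) => {
  console.log(`Hello, ${name}`);
};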
/* Solarized Dark
   For use with Jekyll and Pygments
   http://ethanschoonover.com/solarized

   SOLARIZED HEX      ROLE
   --------- -------- ------------------------------------------
   base03    #002b36  background
   base01    #586e75  comments / secondary content
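   ...                                                           */

/* The stylesheet rules themselves are not included above; as a hedged illustration, the two
   roles listed would typically map onto Pygments classes under Jekyll like this (the selectors
   are my assumption, not the original file): */
.highlight    { background-color: #002b36; }   /* base03: background */
.highlight .c { color: #586e75; }              /* base01: comments   */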
# coding=UTF-8
from __future__ import division

import nltk
import re
import requests

# Add your freebase key here
# If you don't have one, register at https://code.google.com/apis/console
FREEBASE_KEY = ""
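For context, a key like this was passed as a query parameter to Freebase's HTTP API. The helper below is a hedged illustration only: the endpoint and parameter names are recalled assumptions about the old, since-retired API, and the function is not part of the original script.

def freebase_search(query):
    # Illustrative only: Google retired the Freebase API, so this endpoint no longer responds.
    resp = requests.get(
        "https://www.googleapis.com/freebase/v1/search",
        params={"query": query, "key": FREEBASE_KEY},
    )
    return resp.json()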