In addition to this README, this torrent contains 4 datasets:
Name | Image size (px) | Number of scenes | Compressed size (B) | Total size (B)
---|---|---|---|---
64.tar.xz | 64x64 | 80K | 9.8G | 19G
128.tar.xz | 128x128 | 20K | 7.1G | 12G
import numpy as np
from sklearn.utils.extmath import softmax
from sklearn.kernel_approximation import RBFSampler
from sklearn_extra.kernel_approximation import Fastfood

seed = 42
rng = np.random.RandomState(seed)
D = 20
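RBFSampler and Fastfood both approximate the RBF kernel with D random features. As a reference point, here is a minimal numpy-only sketch of the random Fourier features construction they are based on (function and variable names here are illustrative, not from the snippet above):

```python
import numpy as np

def rff_features(X, D, gamma, rng):
    """Map X (n, d) to D random Fourier features so that Z @ Z.T
    approximates the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    n, d = X.shape
    # Frequencies sampled from the kernel's Fourier transform: N(0, 2*gamma*I)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)  # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.RandomState(42)
X = rng.randn(5, 3)
Z = rff_features(X, D=2000, gamma=0.5, rng=rng)
approx = Z @ Z.T  # approximate kernel matrix
exact = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1))
```

With a large D the approximation error shrinks like O(1/sqrt(D)); the small D = 20 in the snippet trades accuracy for speed.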
-- Xception model
-- a Torch7 implementation of: https://arxiv.org/abs/1610.02357
-- E. Culurciello, October 2016

require 'nn'

local nClasses = 1000

function nn.SpatialSeparableConvolution(nInputPlane, nOutputPlane, kW, kH)
   local block = nn.Sequential()
   block:add(nn.SpatialConvolutionMap(nn.tables.oneToOne(nInputPlane), kW,kH, 1,1, 1,1))
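The separable convolution above factors a full convolution into a per-channel (depthwise) step followed by a 1x1 channel-mixing (pointwise) step, which is the core idea of Xception. A minimal numpy sketch of that factorization (valid cross-correlation, no padding or bias; the function name is illustrative):

```python
import numpy as np

def separable_conv2d(x, depthwise, pointwise):
    """x: (C, H, W); depthwise: (C, kH, kW), one kernel per input channel;
    pointwise: (C_out, C), the 1x1 channel-mixing weights."""
    C, H, W = x.shape
    _, kH, kW = depthwise.shape
    oh, ow = H - kH + 1, W - kW + 1
    dw = np.empty((C, oh, ow))
    for c in range(C):  # depthwise: channels never mix here
        for i in range(oh):
            for j in range(ow):
                dw[c, i, j] = np.sum(x[c, i:i + kH, j:j + kW] * depthwise[c])
    # pointwise: a 1x1 convolution is just a linear mix over channels
    return np.tensordot(pointwise, dw, axes=([1], [0]))
```

The factorization uses C*kH*kW + C_out*C weights instead of C_out*C*kH*kW, which is where the speed/parameter savings come from.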
'''This script goes along with the blog post
"Building powerful image classification models using very little data"
from blog.keras.io.
It uses data that can be downloaded at:
https://www.kaggle.com/c/dogs-vs-cats/data
In our setup, we:
- created a data/ folder
- created train/ and validation/ subfolders inside data/
- created cats/ and dogs/ subfolders inside train/ and validation/
- put the cat pictures index 0-999 in data/train/cats
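The directory layout those steps describe can be created with a few lines of stdlib Python (the `data/` base path comes from the docstring; nothing else is assumed):

```python
import os

base = "data"
# Recreate the data/{train,validation}/{cats,dogs} layout from the docstring.
for split in ("train", "validation"):
    for label in ("cats", "dogs"):
        os.makedirs(os.path.join(base, split, label), exist_ok=True)
```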
1. In Sublime Text, go to: Tools -> Build System -> New Build System
   and add the following lines:

{
    "cmd": ["python3", "-i", "-u", "$file"],
    "file_regex": "^[ ]*File \"(...*?)\", line ([0-9]*)",
    "selector": "source.python"
}

Then save it with a meaningful name like: python3.sublime-build
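The `file_regex` is what lets Sublime jump to the offending file and line when a traceback is printed. You can check what it captures with Python's `re` module (using the pattern from Sublime's stock Python build, `^[ ]*File \"(...*?)\", line ([0-9]*)`; the traceback line is a made-up example):

```python
import re

file_regex = r'^[ ]*File "(...*?)", line ([0-9]*)'

# A typical line from a Python traceback:
line = '  File "example.py", line 12, in <module>'
m = re.match(file_regex, line)
# group 1 is the filename, group 2 the line number
```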
## VGG16 model for Keras
This is the Keras model of the 16-layer network used by the VGG team in the ILSVRC-2014 competition.
It has been obtained by directly converting the Caffe model provided by the authors.
Details about the network architecture can be found in the following arXiv paper:
Very Deep Convolutional Networks for Large-Scale Image Recognition
K. Simonyan, A. Zisserman
The following recipes are sampled from a trained neural net. You can find the repo to train your own neural net here: https://github.com/karpathy/char-rnn Thanks to Andrej Karpathy for the great code! It's really easy to set up.
The recipes I used for training the char-rnn are from a recipe collection called ffts.com. And here is the actual zipped data (uncompressed ~35 MB) I used for training. The ZIP is also archived at archive.org in case the original link becomes invalid in the future.
class SummedAreaTable(object):
    def __init__(self, size, data):
        """
        Just because I dislike a 2d array / list.
        data should be a list of integers.
        """
        width, height = size
        assert width * height == len(data), "invalid data length and/or data size"
        self.size = size
        self.data = data
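A summed-area table turns arbitrary rectangle sums into O(1) lookups. Here is a minimal sketch of the technique itself, working on a flat row-major list like the one the class above stores (function names here are illustrative, not part of the class):

```python
def build_sat(width, height, data):
    """Flat integral image: sat[y*width + x] = sum of data over [0..x] x [0..y]."""
    sat = [0] * (width * height)
    for y in range(height):
        for x in range(width):
            i = y * width + x
            sat[i] = (data[i]
                      + (sat[i - 1] if x > 0 else 0)           # left
                      + (sat[i - width] if y > 0 else 0)       # above
                      - (sat[i - width - 1] if x > 0 and y > 0 else 0))  # overlap
    return sat

def rect_sum(sat, width, x0, y0, x1, y1):
    """Sum of the inclusive rectangle (x0, y0)-(x1, y1) in O(1)."""
    total = sat[y1 * width + x1]
    if x0 > 0:
        total -= sat[y1 * width + x0 - 1]
    if y0 > 0:
        total -= sat[(y0 - 1) * width + x1]
    if x0 > 0 and y0 > 0:
        total += sat[(y0 - 1) * width + x0 - 1]
    return total
```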
--[[ json.lua
A compact pure-Lua JSON library.
The main functions are: json.stringify, json.parse.

## json.stringify:
This expects the following to be true of any tables being encoded:
* They only have string or number keys. Number keys must be represented as
  strings in JSON; this is part of the JSON spec.
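The same constraint shows up in other JSON libraries, since JSON object keys are always strings. Python's `json` module, for example, silently coerces number keys the same way json.lua's docs describe:

```python
import json

# JSON object keys must be strings, so the int key 1 becomes "1" on encode.
encoded = json.dumps({1: "one", "two": 2})
```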
# By Jake VanderPlas
# License: BSD-style

import matplotlib.pyplot as plt
import numpy as np

def discrete_cmap(N, base_cmap=None):
    """Create an N-bin discrete colormap from the specified input map"""