Soumith Chintala (soumith)
@soumith
soumith / out.log
Created February 12, 2018 21:57 — forked from anonymous/out.log
[WARNING]: No mapping options supplied. 'Naive' options will be used which might fail compilation
[WARNING]: Autotuning results won't be cached. 'cache' option is not specified
[WARNING]: Using naive options for autotuning
template<typename T> inline __device__ T floord(T n, T d) {
  return n < 0 ? - (-n + d - 1)/d : n / d;
}
// Halide type handling
typedef int int32;
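These warnings are what Tensor Comprehensions prints when a kernel is run without explicit mapping options or an autotuner cache. A minimal sketch that would reproduce them, assuming the early-2018 tensor_comprehensions Python API from the TC release examples:

import torch
import tensor_comprehensions as tc

# A single TC kernel definition (matmul), in the style of the TC release examples.
LANG = """
def matmul(float(M, N) A, float(N, K) B) -> (output) {
    output(i, j) +=! A(i, kk) * B(kk, j)
}
"""

matmul = tc.define(LANG, name="matmul")
mat1, mat2 = torch.randn(3, 4).cuda(), torch.randn(4, 5).cuda()

# No mapping options supplied and no autotune cache configured, so TC falls
# back to 'Naive' options and emits the warnings shown in the log above.
out = matmul(mat1, mat2)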
import torch
import torch.nn as nn
from torch.autograd import Variable
import numpy as np
BATCH_SIZE = 1
INPUT_DIM = 1
OUTPUT_DIM = 1
DTYPE = np.float32
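The preview cuts off right after the constants. A hypothetical continuation in the same Variable-era style, purely to show how the names would be used (not the gist's actual body):

model = nn.Linear(INPUT_DIM, OUTPUT_DIM)

# Wrap a numpy batch as a Variable, honoring the declared DTYPE.
x = Variable(torch.from_numpy(np.random.randn(BATCH_SIZE, INPUT_DIM).astype(DTYPE)))
y = model(x)
print(y.size())  # torch.Size([1, 1])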
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable as Var
class TreeDecoder(nn.Module):
    NODE_DICT, NODE_LIST, NODE_STR = 0, 1, 2
    def __init__(self, input_size, max_key, max_ident, max_depth, max_length):
        super(TreeDecoder, self).__init__()
import torch
import torch.nn as nn
import torch.nn.parallel
class DCGAN_D(nn.Module):  # originally nn.Container, which modern PyTorch has removed
    def __init__(self, isize, nz, nc, ndf, ngpu, n_extra_layers=0):
        super(DCGAN_D, self).__init__()
        self.ngpu = ngpu
        assert isize % 16 == 0, "isize has to be a multiple of 16"
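The assert is there because the DCGAN discriminator repeatedly halves the spatial size with stride-2 convolutions until it reaches a 4x4 map; requiring a multiple of 16 keeps every intermediate size an integer in the standard 64x64 configuration. A quick sanity check of that arithmetic:

# Each stride-2 conv halves the feature map: 64 -> 32 -> 16 -> 8 -> 4.
isize = 64
sizes = [isize]
while sizes[-1] > 4:
    assert sizes[-1] % 2 == 0, "size must stay even to halve cleanly"
    sizes.append(sizes[-1] // 2)
print(sizes)  # [64, 32, 16, 8, 4]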
import torch.multiprocessing as mp
from torch.multiprocessing import Semaphore
import sys
if sys.version_info[0] == 3:
    Barrier = mp.Barrier
else:  # version 2
    # from http://stackoverflow.com/a/26703365/117844
    class Barrier:
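The preview stops at the class line. The Stack Overflow answer it links builds a reusable barrier out of semaphores; a sketch along those lines (a reconstruction under that assumption, not necessarily the gist's exact body):

import torch.multiprocessing as mp
from torch.multiprocessing import Semaphore

class Barrier(object):
    """Reusable two-phase barrier for Python 2, where mp has no Barrier."""
    def __init__(self, n):
        self.n = n
        self.count = mp.Value('i', 0)
        self.mutex = Semaphore(1)
        self.turnstile = Semaphore(0)
        self.turnstile2 = Semaphore(0)

    def wait(self):
        # Phase 1: the last process to arrive releases everyone.
        self.mutex.acquire()
        self.count.value += 1
        if self.count.value == self.n:
            for _ in range(self.n):
                self.turnstile.release()
        self.mutex.release()
        self.turnstile.acquire()
        # Phase 2: reset the count so the barrier can be reused.
        self.mutex.acquire()
        self.count.value -= 1
        if self.count.value == 0:
            for _ in range(self.n):
                self.turnstile2.release()
        self.mutex.release()
        self.turnstile2.acquire()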
$ CUDA_VISIBLE_DEVICES=0 th -lcutorch -e "cutorch.test('sigmoid1')"
seed: 1471924845
Running 1 test
1/1 sigmoid1 ............................................................ [WAIT]
torch.CudaTensor
input CPU: 0.31726333498955
input GPU: 0.31726333498955
Result CPU: 0.57865715026855
Result GPU: 0.5786572098732
torch.CudaDoubleTensor
input CPU: 0.31726332521066
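The CPU and GPU float results agree to about seven significant digits, which is ordinary float32 rounding rather than a bug; the value is easy to verify by hand:

import math
x = 0.31726333498955
print(1.0 / (1.0 + math.exp(-x)))  # ~0.5786571..., matching both printed results to float32 precision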
[
  {
    "oC": 32,
    "name": "conv",
    "strideH": 2,
    "kW": 3,
    "iC": 3,
    "padW": 0,
    "strideW": 2,
    "padH": 0,
@soumith
soumith / multiple_learning_rates.lua
Created May 26, 2016 21:35 — forked from farrajota/multiple_learning_rates.lua
Example code for setting different learning rates per layer. Note that :parameters() returns the weights and biases of a given layer as separate, consecutive tensors, so for a network with N layers it returns a table of N*2 tensors, where the i-th and (i+1)-th tensors belong to the same layer.
-- multiple learning rates per network. Optimizes two copies of a model network and checks if the optimization steps (2) and (3) produce the same weights/parameters.
require 'torch'
require 'nn'
require 'optim'
torch.setdefaulttensortype('torch.FloatTensor')
-- (1) Define a model for this example.
local model = nn.Sequential()
model:add(nn.Linear(10,20))
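The same per-layer learning-rate idea maps onto PyTorch's optimizer parameter groups, which avoids indexing the flat N*2 parameter table by hand. A minimal sketch (illustrative, not a translation of the full gist):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 5))

# One parameter group per layer, each with its own learning rate.
optimizer = torch.optim.SGD([
    {'params': model[0].parameters(), 'lr': 0.1},
    {'params': model[2].parameters(), 'lr': 0.01},
])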
-- Torch Android demo script
-- Script: main.lua
-- Copyright (C) 2013 Soumith Chintala
require 'torch'
require 'cunn'
require 'nnx'
require 'dok'
require 'image'