Trained by me with https://github.com/soumith/imagenet-multiGPU.torch; achieves 56.7% top-1 center-crop accuracy on the ImageNet validation set. Tested here: https://github.com/szagoruyko/imagenet-validation.torch
Download link: https://yadi.sk/d/uX6id_yZoU8FC (247 MB)
Load as:

```lua
net = torch.load('./alexnet_torch.t7'):unpack()
```
The input image size is 227×227.
Per-channel mean and std values used for preprocessing are saved with the network:
```
> print(net.transform)
{
  mean :
    {
      1 : 0.48462227599918
      2 : 0.45624044862054
      3 : 0.40588363755159
    }
  std :
    {
      1 : 0.22889466674951
      2 : 0.22446679341259
      3 : 0.22495548344775
    }
}
```
The model can be loaded without CUDA support as well.
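For readers porting the preprocessing outside Torch: normalization is simply subtracting the per-channel mean and dividing by the per-channel std from `net.transform` above. A minimal NumPy sketch (the `preprocess` helper is illustrative, not part of the release; resizing/cropping to 227×227 is assumed to happen beforehand):

```python
import numpy as np

# Per-channel statistics from net.transform (RGB order, values as printed above).
MEAN = np.array([0.48462227599918, 0.45624044862054, 0.40588363755159])
STD = np.array([0.22889466674951, 0.22446679341259, 0.22495548344775])

def preprocess(img):
    """Normalize an HxWx3 float image with values in [0, 1]."""
    img = np.asarray(img, dtype=np.float64)
    return (img - MEAN) / STD

# Example: a mid-gray 227x227 image.
x = np.full((227, 227, 3), 0.5)
y = preprocess(x)
```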
```
nn.Sequential {
  (1): nn.Concat {
    input
      |`-> (1): nn.Sequential {
      |      (1): nn.SpatialConvolution(3 -> 48, 11x11, 4,4, 2,2)
      |      (2): nn.SpatialBatchNormalization
      |      (3): nn.ReLU
      |      (4): nn.SpatialMaxPooling(3,3,2,2)
      |      (5): nn.SpatialConvolution(48 -> 128, 5x5, 1,1, 2,2)
      |      (6): nn.SpatialBatchNormalization
      |      (7): nn.ReLU
      |      (8): nn.SpatialMaxPooling(3,3,2,2)
      |      (9): nn.SpatialConvolution(128 -> 192, 3x3, 1,1, 1,1)
      |      (10): nn.SpatialBatchNormalization
      |      (11): nn.ReLU
      |      (12): nn.SpatialConvolution(192 -> 192, 3x3, 1,1, 1,1)
      |      (13): nn.SpatialBatchNormalization
      |      (14): nn.ReLU
      |      (15): nn.SpatialConvolution(192 -> 128, 3x3, 1,1, 1,1)
      |      (16): nn.SpatialBatchNormalization
      |      (17): nn.ReLU
      |      (18): nn.SpatialMaxPooling(3,3,2,2)
      |    }
      |`-> (2): nn.Sequential {
             (1): nn.SpatialConvolution(3 -> 48, 11x11, 4,4, 2,2)
             (2): nn.SpatialBatchNormalization
             (3): nn.ReLU
             (4): nn.SpatialMaxPooling(3,3,2,2)
             (5): nn.SpatialConvolution(48 -> 128, 5x5, 1,1, 2,2)
             (6): nn.SpatialBatchNormalization
             (7): nn.ReLU
             (8): nn.SpatialMaxPooling(3,3,2,2)
             (9): nn.SpatialConvolution(128 -> 192, 3x3, 1,1, 1,1)
             (10): nn.SpatialBatchNormalization
             (11): nn.ReLU
             (12): nn.SpatialConvolution(192 -> 192, 3x3, 1,1, 1,1)
             (13): nn.SpatialBatchNormalization
             (14): nn.ReLU
             (15): nn.SpatialConvolution(192 -> 128, 3x3, 1,1, 1,1)
             (16): nn.SpatialBatchNormalization
             (17): nn.ReLU
             (18): nn.SpatialMaxPooling(3,3,2,2)
           }
       ... -> output
  }
  (2): nn.Sequential {
    (1): nn.View(9216)
    (2): nn.Dropout(0.500000)
    (3): nn.Linear(9216 -> 4096)
    (4): nn.BatchNormalization
    (5): nn.Threshold
    (6): nn.Dropout(0.500000)
    (7): nn.Linear(4096 -> 4096)
    (8): nn.BatchNormalization
    (9): nn.Threshold
    (10): nn.Linear(4096 -> 1000)
  }
}
```
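To see where `nn.View(9216)` comes from, trace the spatial size of a 227×227 input through one branch of the Concat (a quick sketch using the standard floor-mode output-size formula that Torch's conv/pool layers use):

```python
def out_size(size, kernel, stride, pad):
    # Floor-mode output size of a convolution or pooling layer.
    return (size + 2 * pad - kernel) // stride + 1

size = 227
size = out_size(size, 11, 4, 2)  # 11x11 conv, stride 4, pad 2 -> 56
size = out_size(size, 3, 2, 0)   # 3x3 max pool, stride 2      -> 27
size = out_size(size, 5, 1, 2)   # 5x5 conv, pad 2             -> 27
size = out_size(size, 3, 2, 0)   # 3x3 max pool, stride 2      -> 13
size = out_size(size, 3, 1, 1)   # 3x3 conv, pad 1             -> 13
size = out_size(size, 3, 1, 1)   # 3x3 conv, pad 1             -> 13
size = out_size(size, 3, 1, 1)   # 3x3 conv, pad 1             -> 13
size = out_size(size, 3, 2, 0)   # 3x3 max pool, stride 2      -> 6

# Two Concat branches, 128 output channels each, 6x6 spatial maps:
features = 2 * 128 * size * size
print(features)  # 9216, matching nn.View(9216)
```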