@titu1994
Last active January 23, 2018 17:09
Attempt at implementing a smaller version of "Image Restoration Using Convolutional Auto-encoders with Symmetric Skip Connections" (http://arxiv.org/abs/1606.08921)
from keras.models import Model
from keras.layers import Input
from keras.layers.convolutional import Convolution2D
from keras.layers.convolutional import Deconvolution2D
from keras.layers import merge
import numpy as np
'''
Attempt at implementing a smaller version of "Image Restoration Using Convolutional Auto-encoders with Symmetric Skip Connections"
(http://arxiv.org/abs/1606.08921)
'''
batch_size = 128
channels = 3
height = 33
width = 33
n1 = 64
n2 = 128 # Compiles and fits properly if n1 == n2

# Encoder: two n1-filter convs followed by two n2-filter convs.
init = Input(shape=(channels, height, width))
level1_1 = Convolution2D(n1, 3, 3, activation='relu', border_mode='same')(init)
level2_1 = Convolution2D(n1, 3, 3, activation='relu', border_mode='same')(level1_1)
level3_1 = Convolution2D(n2, 3, 3, activation='relu', border_mode='same')(level2_1)
level4_1 = Convolution2D(n2, 3, 3, activation='relu', border_mode='same')(level3_1)

# Decoder: mirror the encoder with deconvolutions, summing in the
# corresponding encoder feature maps (the symmetric skip connections).
level4_2 = Deconvolution2D(n2, 3, 3, activation='relu', output_shape=(None, n2, height, width), border_mode='same')(level4_1)
level3_2 = Deconvolution2D(n2, 3, 3, activation='relu', output_shape=(None, n2, height, width), border_mode='same')(level4_2)
level3 = merge([level3_1, level3_2], mode='sum')  # skip connection

level2_2 = Deconvolution2D(n1, 3, 3, activation='relu', output_shape=(None, n1, height, width), border_mode='same')(level3)
level1_2 = Deconvolution2D(n1, 3, 3, activation='relu', output_shape=(None, n1, height, width), border_mode='same')(level2_2)
level1 = merge([level1_1, level1_2], mode='sum')  # skip connection

# Reconstruct the image with a final 5x5 convolution.
decoded = Convolution2D(channels, 5, 5, activation='linear', border_mode='same')(level1)

model = Model(init, decoded)
model.compile(optimizer='adam', loss='mse')

# Dummy random data, just to exercise the training loop.
np.random.seed(1)
dataX = np.random.random((200, channels, height, width))
dataY = np.random.random((200, channels, height, width))

model.fit(dataX, dataY, batch_size=batch_size, nb_epoch=10)  # Crashes here if n1 != n2
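
For reference, here is the same network sketched in the Keras 2 API (an untested port, not part of the original gist, assuming channels-first data to match the Theano layout above). Conv2DTranspose infers its output shape, so the manual output_shape bookkeeping of Deconvolution2D goes away, and merge(mode='sum') becomes the Add layer:

from keras.models import Model
from keras.layers import Input, Add, Conv2D, Conv2DTranspose

channels, height, width = 3, 33, 33
n1, n2 = 64, 128

init = Input(shape=(channels, height, width))

# Encoder
level1_1 = Conv2D(n1, (3, 3), activation='relu', padding='same', data_format='channels_first')(init)
level2_1 = Conv2D(n1, (3, 3), activation='relu', padding='same', data_format='channels_first')(level1_1)
level3_1 = Conv2D(n2, (3, 3), activation='relu', padding='same', data_format='channels_first')(level2_1)
level4_1 = Conv2D(n2, (3, 3), activation='relu', padding='same', data_format='channels_first')(level3_1)

# Decoder with symmetric skip connections (Add replaces merge(mode='sum'))
level4_2 = Conv2DTranspose(n2, (3, 3), activation='relu', padding='same', data_format='channels_first')(level4_1)
level3_2 = Conv2DTranspose(n2, (3, 3), activation='relu', padding='same', data_format='channels_first')(level4_2)
level3 = Add()([level3_1, level3_2])

level2_2 = Conv2DTranspose(n1, (3, 3), activation='relu', padding='same', data_format='channels_first')(level3)
level1_2 = Conv2DTranspose(n1, (3, 3), activation='relu', padding='same', data_format='channels_first')(level2_2)
level1 = Add()([level1_1, level1_2])

decoded = Conv2D(channels, (5, 5), activation='linear', padding='same', data_format='channels_first')(level1)

model = Model(init, decoded)
model.compile(optimizer='adam', loss='mse')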
titu1994 commented Aug 1, 2016

Output

Using Theano backend.
Using gpu device 0: GeForce GTX 980M (CNMeM is enabled with initial size: 80.0% of memory, cuDNN 5103)
Epoch 1/10
Traceback (most recent call last):
File "D:\Users\Yue\Anaconda3\lib\site-packages\theano\compile\function_module.py", line 866, in call
self.fn() if output_subset is None else
ValueError: GpuDnnConv images and kernel must have the same stack size

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:/Users/Yue/PycharmProjects/ImageSuperResolution/conv_auto_encoder.py", line 44, in

File "D:\Users\Yue\Anaconda3\lib\site-packages\keras\engine\training.py", line 1107, in fit
callback_metrics=callback_metrics)
File "D:\Users\Yue\Anaconda3\lib\site-packages\keras\engine\training.py", line 825, in _fit_loop
outs = f(ins_batch)
File "D:\Users\Yue\Anaconda3\lib\site-packages\keras\backend\theano_backend.py", line 643, in call
return self.function(*inputs)
File "D:\Users\Yue\Anaconda3\lib\site-packages\theano\compile\function_module.py", line 879, in call
storage_map=getattr(self.fn, 'storage_map', None))
File "D:\Users\Yue\Anaconda3\lib\site-packages\theano\gof\link.py", line 325, in raise_with_op
reraise(exc_type, exc_value, exc_trace)
File "D:\Users\Yue\Anaconda3\lib\site-packages\six.py", line 685, in reraise
raise value.with_traceback(tb)
File "D:\Users\Yue\Anaconda3\lib\site-packages\theano\compile\function_module.py", line 866, in call
self.fn() if output_subset is None else
ValueError: GpuDnnConv images and kernel must have the same stack size

Apply node that caused the error: GpuDnnConvGradW{algo='none', inplace=True}(GpuContiguous.0, GpuContiguous.0, GpuAllocEmpty.0, GpuDnnConvDesc{border_mode='half', subsample=(1, 1), conv_mode='conv', precision='float32'}.0, Constant{1.0}, Constant{0.0})
Toposort index: 321
Inputs types: [CudaNdarrayType(float32, 4D), CudaNdarrayType(float32, 4D), CudaNdarrayType(float32, 4D), <theano.gof.type.CDataType object at 0x0000000061DAD9E8>, Scalar(float32), Scalar(float32)]
Inputs shapes: [(128, 128, 33, 33), (128, 128, 33, 33), (128, 64, 3, 3), 'No shapes', (), ()]
Inputs strides: [(139392, 1089, 33, 1), (139392, 1089, 33, 1), (576, 9, 3, 1), 'No strides', (), ()]
Inputs values: ['not shown', 'not shown', 'not shown', <capsule object NULL at 0x00000000001369F0>, 1.0, 0.0]
Inputs name: ('image', 'grad', 'output', 'descriptor', 'alpha', 'beta')

Outputs clients: [[GpuDimShuffle{1,0,2,3}(GpuDnnConvGradW{algo='none', inplace=True}.0)]]

HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done with by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'.
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.

Process finished with exit code 1
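
A possible workaround, offered as an untested sketch rather than a confirmed fix: every layer here uses stride 1 with 'same' padding, so each Deconvolution2D produces exactly the output shape a plain convolution would (with stride 1 a transposed convolution is just a convolution with a flipped kernel, and the weights are learned either way). Rebuilding the decoder from Convolution2D keeps all the shapes and skip connections intact while avoiding the GpuDnnConvGradW node that crashes when n1 != n2:

# Untested workaround sketch: swap each decoder Deconvolution2D in the
# script above for a Convolution2D with the same filter count. With
# subsample=(1, 1) and border_mode='same' the output shapes are identical.
level4_2 = Convolution2D(n2, 3, 3, activation='relu', border_mode='same')(level4_1)
level3_2 = Convolution2D(n2, 3, 3, activation='relu', border_mode='same')(level4_2)
level3 = merge([level3_1, level3_2], mode='sum')

level2_2 = Convolution2D(n1, 3, 3, activation='relu', border_mode='same')(level3)
level1_2 = Convolution2D(n1, 3, 3, activation='relu', border_mode='same')(level2_2)
level1 = merge([level1_1, level1_2], mode='sum')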
