import tensorflow as tf
from math import sqrt

def put_kernels_on_grid(kernel, pad=1):
    '''Visualize conv. filters as an image (mostly for the 1st layer).
    Arranges filters into a grid, with some padding between adjacent filters.
    Args:
      kernel: tensor of shape [Y, X, NumChannels, NumKernels]
      pad:    number of black pixels around each filter (between them)
    Returns:
      Tensor of shape [1, (Y+2*pad)*grid_Y, (X+2*pad)*grid_X, NumChannels].
    '''

    # get shape of the grid. NumKernels == grid_Y * grid_X
    def factorization(n):
        for i in range(int(sqrt(float(n))), 0, -1):
            if n % i == 0:
                if i == 1:
                    print('Who would enter a prime number of filters')
                return (i, int(n / i))

    (grid_Y, grid_X) = factorization(kernel.get_shape()[3].value)
    print('grid: %d = (%d, %d)' % (kernel.get_shape()[3].value, grid_Y, grid_X))

    # normalize to [0, 1] so that the constant padding below shows up as black
    x_min = tf.reduce_min(kernel)
    x_max = tf.reduce_max(kernel)
    kernel = (kernel - x_min) / (x_max - x_min)

    # pad X and Y
    x = tf.pad(kernel, tf.constant([[pad, pad], [pad, pad], [0, 0], [0, 0]]), mode='CONSTANT')

    # X and Y dimensions, w.r.t. padding
    Y = kernel.get_shape()[0] + 2 * pad
    X = kernel.get_shape()[1] + 2 * pad

    channels = kernel.get_shape()[2]

    # put NumKernels to the 1st dimension
    x = tf.transpose(x, (3, 0, 1, 2))
    # organize grid on Y axis
    x = tf.reshape(x, tf.stack([grid_X, Y * grid_Y, X, channels]))

    # switch X and Y axes
    x = tf.transpose(x, (0, 2, 1, 3))
    # organize grid on X axis
    x = tf.reshape(x, tf.stack([1, X * grid_X, Y * grid_Y, channels]))

    # back to normal order (not combining with the next step for clarity)
    x = tf.transpose(x, (2, 1, 3, 0))

    # to tf.summary.image order [batch_size, height, width, channels],
    # where in this case batch_size == 1
    x = tf.transpose(x, (3, 0, 1, 2))

    # scaling to [0, 255] is not necessary for tensorboard
    return x

#
# ... and somewhere inside "def train():" after calling "inference()"
#

# Visualize conv1 kernels
with tf.variable_scope('conv1'):
    tf.get_variable_scope().reuse_variables()
    weights = tf.get_variable('weights')
    grid = put_kernels_on_grid(weights)
    tf.summary.image('conv1/kernels', grid, max_outputs=1)
Thanks for your code, @kitovyj! Makes sense that the background should be black. I'll update the gist when I have a chance to test it.
Guys, awesome work! I can confirm that @kitovyj's code works. Actually, the original version on line 1 contains a typo (a tuple in the arguments).
Is there a way to show the original image right next to an image after the convolution layer?
Updated:
- grayscale or color filters (thanks to @kitovyj)
- optimal grid shape is computed automatically
How do I show it in TensorBoard? What is the command?
@JiteshPshah Start TensorBoard as usual (see the TF tutorials), then go to the Images tab. It should be there.
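In case a concrete example helps, here is a minimal sketch of the plumbing in TF 1.x (the session sess and the logs/ directory are just placeholders):

# assuming TF 1.x imported as tf, the graph (including the 'conv1/kernels'
# image summary) already built, and sess an active tf.Session
merged = tf.summary.merge_all()
writer = tf.summary.FileWriter('logs/', sess.graph)
summary_str = sess.run(merged)
writer.add_summary(summary_str, global_step=0)
writer.close()
# then run: tensorboard --logdir logs/
# and open the Images tab in the browser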
@kukuruza The title says "visualize convolutional features (conv1)", but the code visualizes kernels, which are not features. Forgive me for being picky about the details.
@yaxingwang True.
This doesn't enforce that the last dimension (the number of channels) be 1, 3, or 4, which means it can cause errors.
This was extremely useful, guys.
But is there any way I can view other conv layers in my generator & discriminator? When I try, I get this error:
InvalidArgumentError (see above for traceback): Tensor must be 4-D with last dim 1, 3, or 4, not [1,112,224,256]
[[Node: generator/conv_tranpose1/Conv_filter_2 = ImageSummary[T=DT_FLOAT, bad_color=Tensor<type: uint8 shape: [4] values: 255 0 0...>, max_images=1, _device="/job:localhost/replica:0/task:0/cpu:0"](generator/conv_tranpose1/Conv_filter_2/tag, generator/conv_tranpose1/transpose_3)]]
Hope I could help improve this snippet.
@Mithun-Aggarwal I ran into the same problem. Do you have a solution?
@Mithun-Aggarwal, @2016gary Sorry, I haven't checked the comments in a while.
It does not make much sense to visualize conv layer 2+, except maybe for checking for dead filters.
Besides, humans can visualize 3 channels as RGB, but will not be able to interpret 64 channels.
Now, if you're really into it, here's what you can do. Let's say you have 32 input channels and 64 output channels on the 2nd layer. Instead of having one grid with 64 color squares, make 64 grids of 32 grayscale squares each, so that each grid corresponds to one filter. Again, it would be hard to interpret, but easy to spot dead filters.
I may implement that some time in the future.
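For what it's worth, a rough sketch of that idea in TF 1.x (the scope name conv2, the variable name weights, and the [5, 5, 32, 64] shape are assumed placeholders, not part of the gist):

# one grayscale grid per output filter of a hypothetical conv2 layer
with tf.variable_scope('conv2', reuse=True):
    kernel = tf.get_variable('weights')                       # e.g. shape [5, 5, 32, 64]
num_filters = kernel.get_shape()[3].value
for f in range(num_filters):
    # the 32 input channels of filter f become 32 grayscale tiles in one grid
    one_filter = tf.expand_dims(kernel[:, :, :, f], axis=2)   # [5, 5, 1, 32]
    grid = put_kernels_on_grid(one_filter)
    tf.summary.image('conv2/filter%02d' % f, grid, max_outputs=1)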
The code by @kitovyj should use tf.stack instead of tf.pack.
raise ValueError("Variable %s does not exist, or was not created with "
                 "tf.get_variable(). Did you mean to set "
                 "reuse=tf.AUTO_REUSE in VarScope?" % name)

ValueError: Variable conv1/weights does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=tf.AUTO_REUSE in VarScope?
I got this error. What should I do?
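In case it helps: that ValueError usually means the kernel was never created with tf.get_variable under the name conv1/weights in the first place (for example it was created with tf.Variable, or under a different scope or name). A small sketch of the pattern that works, where the shape and initializer are only examples:

# creation side (e.g. inside inference()): the kernel must come from tf.get_variable
with tf.variable_scope('conv1'):
    kernel = tf.get_variable('weights', shape=[5, 5, 3, 64],
                             initializer=tf.truncated_normal_initializer(stddev=0.1))

# visualization side: same scope and variable names, with reuse enabled
with tf.variable_scope('conv1', reuse=True):
    weights = tf.get_variable('weights')
    grid = put_kernels_on_grid(weights)
    tf.summary.image('conv1/kernels', grid, max_outputs=1)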
I modified your code a little to make it compatible with an arbitrary number of channels and to get rid of the variable background color (normalization is done before the padding):