""" | |
scale_images.py | |
Function to scale any image to the pixel values of [-1, 1] for GAN input. | |
Author: liuhh02 https://machinelearningtutorials.weebly.com/ | |
""" | |
from PIL import Image | |
import numpy as np | |
from os import listdir | |
def normalize(arr): | |
''' Function to scale an input array to [-1, 1] ''' | |
arr_min = arr.min() | |
arr_max = arr.max() | |
# Check the original min and max values | |
print('Min: %.3f, Max: %.3f' % (arr_min, arr_max)) | |
arr_range = arr_max - arr_min | |
scaled = np.array((arr-arr_min) / float(arr_range), dtype='f') | |
arr_new = -1 + (scaled * 2) | |
# Make sure min value is -1 and max value is 1 | |
print('Min: %.3f, Max: %.3f' % (arr_new.min(), arr_new.max())) | |
return arr_new | |
# path to folder containing images | |
path = './directory/to/image/folder/' | |
# loop through all files in the directory | |
for filename in listdir(path): | |
# load image | |
image = Image.open(path + filename) | |
# convert to numpy array | |
image = np.array(image) | |
# scale to [-1,1] | |
image = normalize(image) |
So an update for people who might come looking here. Got the PyTorch pix2pix implementation to work with a few changes. I ended up doing all image pre-processing and post-processing outside the PyTorch implementation so that I just inserted my [-1,1] normalized images and got out [-1,1] normalized output images. You can probably do this inside the pix2pix code if you want.
In util.py, the function tensor2im transforms a tensor to an image array, BUT it also rescales your image, since it assumes your images only have values from 0-255, which is usually not the case for tifs. So change the line
image_numpy = (np.transpose(image_numpy, (1, 2, 0)) + 1) / 2.0 * 255.0
to
image_numpy = np.transpose(image_numpy, (1, 2, 0))
OR put in your own rescaling/post-processing. Also remove the lines
#if image_numpy.shape[0] == 1: # grayscale to RGB
# image_numpy = np.tile(image_numpy, (3, 1, 1))
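To see concretely why that line needs to change, here is a small numpy-only sketch (no torch needed) comparing the original rescaling line with the transpose-only version, run on fake network output that is already in [-1, 1]:

```python
import numpy as np

# A fake 1-channel, 4x4 network output already in [-1, 1] (CHW layout).
image_numpy = np.linspace(-1.0, 1.0, 16, dtype=np.float32).reshape(1, 4, 4)

# Original pix2pix line: rescales to [0, 255], clobbering data that was
# never meant to live in the 0-255 range.
rescaled = (np.transpose(image_numpy, (1, 2, 0)) + 1) / 2.0 * 255.0

# Changed line: only reorders CHW -> HWC, values stay in [-1, 1].
passthrough = np.transpose(image_numpy, (1, 2, 0))

print(rescaled.min(), rescaled.max())        # 0.0 255.0
print(passthrough.min(), passthrough.max())  # -1.0 1.0
```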
I did the rescaling afterwards in a separate .py script which does the reverse of the normalization from @liuhh02
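For reference, a minimal sketch of such a reversal script: it assumes you recorded each image's original min and max at normalization time (the names denormalize, orig_min and orig_max are my own, not from the gist):

```python
import numpy as np

def denormalize(arr, orig_min, orig_max):
    """Reverse of normalize(): map [-1, 1] back to [orig_min, orig_max].

    orig_min/orig_max must be the values recorded when the image was
    normalized -- without them the scaling cannot be undone.
    """
    scaled = (arr + 1) / 2.0                          # [-1, 1] -> [0, 1]
    return scaled * (orig_max - orig_min) + orig_min  # -> original range

# Round trip on dummy data
original = np.array([10.0, 55.0, 100.0])
norm = -1 + ((original - original.min()) / (original.max() - original.min())) * 2
restored = denormalize(norm, original.min(), original.max())
print(restored)  # [ 10.  55. 100.]
```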
In the save_image function, change the lines marked # Original to the ones marked # Changed:
# Original:
# image_pil = Image.fromarray(image_numpy)
# h, w, _ = image_numpy.shape
# Changed
image_numpy = np.squeeze(image_numpy) #Remove empty dimension
image_pil = Image.fromarray(image_numpy, mode='F')
h, w = image_numpy.shape
image_path = image_path.replace(".png", ".tif") # Replace .png ending with .tif ending
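If you want to sanity-check the mode='F' save path outside pix2pix, here is a small standalone sketch (the file name output.tif is just an example) showing that a float32 array round-trips through a TIFF without losing its [-1, 1] values:

```python
import numpy as np
from PIL import Image

# A single-channel float array; values outside 0-255 are fine for a
# 32-bit float TIFF.
image_numpy = np.random.uniform(-1, 1, (8, 8)).astype(np.float32)
image_numpy = np.squeeze(image_numpy)       # remove any empty dimension
image_pil = Image.fromarray(image_numpy, mode='F')
image_pil.save('output.tif')                # .tif keeps float precision

# Reading it back preserves the [-1, 1] values exactly.
restored = np.array(Image.open('output.tif'))
print(np.allclose(restored, image_numpy))  # True
```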
In aligned_dataset.py, your input image is put into a tensor and several transforms are applied to it. Here you only want to change the way the image is loaded, since you don't want it converted to RGB.
AB = Image.open(AB_path).convert('RGB')
Changed to
AB = Image.open(AB_path)
Lastly, in base_dataset.py you want to change the function get_transform. You want to remove the if grayscale: block. The GitHub page for transforms.Grayscale is currently down while writing this, but if I remember correctly it tries to scale a [0, 255] range image to [0, 1] grayscale. And if you have already normalized to [-1, 1], this transformation just reduces all your values to 0.
#if grayscale:
# transform_list.append(transforms.Grayscale(1))
You also want to make changes to the if convert: block. If your images are already normalized, you need to remove the normalization here, OR insert the normalization code from @liuhh02.
if convert:
transform_list += [transforms.ToTensor()]
if grayscale:
transform_list += [transforms.Normalize((0.5,), (0.5,))]
else:
transform_list += [transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]
Should be changed to
if convert:
transform_list += [transforms.ToTensor()]
I think these were all the changes I made.
Best,
Johan
@JohanRaniseth Amazing work! Thank you for sharing it here :)
Hello @liuhh02, would it be possible to identify in your code which parameters I would need to change should I require a different scale of values other than [-1, 1]?
Sure! This line of code scales the values to [0, 1]:
scaled = np.array((arr-arr_min) / float(arr_range), dtype='f')
So you may modify
arr_new = -1 + (scaled * 2)
accordingly to scale it to the range you require.
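As a concrete sketch (the helper name normalize_to is mine, not from the gist), the general form for a target range [a, b] is a + scaled * (b - a):

```python
import numpy as np

def normalize_to(arr, a, b):
    """Scale arr linearly so its min maps to a and its max maps to b."""
    scaled = (arr - arr.min()) / float(arr.max() - arr.min())  # -> [0, 1]
    return a + scaled * (b - a)                                # -> [a, b]

arr = np.array([3.0, 7.0, 11.0])
print(normalize_to(arr, -1, 1))   # [-1.  0.  1.]
print(normalize_to(arr, 0, 255))  # [  0.  127.5 255. ]
```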
@liuhh02 Nice approach, thank you for your input
Thanks to @liuhh02 and @JohanRaniseth for the work. For anyone visiting in the future: you have to normalize based on the min and max of your whole (training) dataset, not every image individually like in the code provided above (see #950). If you normalize individually, you will lose information and be unable to reverse the process later.
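A minimal sketch of what that looks like in practice (the helper normalize_dataset is my own naming): compute one global min/max over the whole training set, then scale every image with those same constants so the mapping is identical, and reversible, for all images:

```python
import numpy as np

def normalize_dataset(images):
    """Scale a list of arrays to [-1, 1] using the GLOBAL min/max.

    Returns the scaled images plus (global_min, global_max) so the
    mapping can be reversed after inference.
    """
    global_min = min(img.min() for img in images)
    global_max = max(img.max() for img in images)
    rng = float(global_max - global_min)
    scaled = [-1 + ((img - global_min) / rng) * 2 for img in images]
    return scaled, global_min, global_max

# Two images with different individual ranges share one mapping:
imgs = [np.array([0.0, 50.0]), np.array([50.0, 100.0])]
scaled, gmin, gmax = normalize_dataset(imgs)
print(scaled[0])  # [-1.  0.]
print(scaled[1])  # [ 0.  1.]
```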
I also had to make another change to the tensor2im function in util.py by changing this line to just return image_numpy.
@JohanRaniseth Thanks for pointing it out!
I didn't manage to get pix2pix to work out-of-the-box using the PyTorch source code. Instead, I referred to Jason Brownlee's code for a Keras implementation of pix2pix here. The code worked wonderfully for me, with only minor changes like changing the image_shape under define_generator from (256, 256, 3) to (256, 256, 1). Aside from that, you also need to change the scaling of the image under the function load_real_samples, from
X1 = (X1 - 127.5) / 127.5
to
X1 = normalize(X1)
(using the function given in this gist). Give it a try and tell me how it goes!
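For anyone adapting this, a rough sketch of what the modified load_real_samples could look like (assuming, as in Brownlee's tutorial, a compressed .npz file with source images under 'arr_0' and target images under 'arr_1'):

```python
import numpy as np

def normalize(arr):
    """Per-image [-1, 1] scaling, as in scale_images.py above."""
    arr_min, arr_max = arr.min(), arr.max()
    scaled = (arr - arr_min) / float(arr_max - arr_min)
    return -1 + scaled * 2

def load_real_samples(filename):
    """Sketch of the loader with the fixed 127.5 scaling swapped out.

    Note: this still normalizes per loaded array; use dataset-wide
    min/max constants if you follow the advice above.
    """
    data = np.load(filename)
    X1, X2 = data['arr_0'], data['arr_1']
    return [normalize(X1), normalize(X2)]
```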