""" | |
scale_images.py | |
Function to scale any image to the pixel values of [-1, 1] for GAN input. | |
Author: liuhh02 https://machinelearningtutorials.weebly.com/ | |
""" | |
from PIL import Image | |
import numpy as np | |
from os import listdir | |
def normalize(arr): | |
''' Function to scale an input array to [-1, 1] ''' | |
arr_min = arr.min() | |
arr_max = arr.max() | |
# Check the original min and max values | |
print('Min: %.3f, Max: %.3f' % (arr_min, arr_max)) | |
arr_range = arr_max - arr_min | |
scaled = np.array((arr-arr_min) / float(arr_range), dtype='f') | |
arr_new = -1 + (scaled * 2) | |
# Make sure min value is -1 and max value is 1 | |
print('Min: %.3f, Max: %.3f' % (arr_new.min(), arr_new.max())) | |
return arr_new | |
# path to folder containing images | |
path = './directory/to/image/folder/' | |
# loop through all files in the directory | |
for filename in listdir(path): | |
# load image | |
image = Image.open(path + filename) | |
# convert to numpy array | |
image = np.array(image) | |
# scale to [-1,1] | |
image = normalize(image) |
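Note that the script above prints but never stores the per-image min and max, which you need to undo the scaling later. A minimal sketch of the reverse mapping, assuming you recorded those values yourself (the `denormalize` name and signature are mine, not from the gist):

```python
import numpy as np

def denormalize(arr, orig_min, orig_max):
    """Map a [-1, 1] array back to its original value range.

    orig_min / orig_max must be the values observed before
    normalization -- the gist script does not save them, so
    you have to record them yourself.
    """
    scaled = (arr + 1) / 2.0                     # back to [0, 1]
    return scaled * (orig_max - orig_min) + orig_min

# round trip on synthetic data
orig = np.array([0.0, 500.0, 1000.0])
norm = -1 + 2 * (orig - orig.min()) / (orig.max() - orig.min())
restored = denormalize(norm, orig.min(), orig.max())  # recovers 0, 500, 1000
```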
@JohanRaniseth Amazing work! Thank you for sharing it here :)
Hello @liuhh02, would it be possible to identify in your code what parameters I would need to change should I require a different scale of values other than [-1 1].
Sure! This line of code scales the values to [0, 1]:
scaled = np.array((arr - arr_min) / float(arr_range), dtype='f')
and this line shifts that [0, 1] range to [-1, 1]:
arr_new = -1 + (scaled * 2)
So you may modify the second line accordingly to scale it to the range you require.
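The two lines above generalize to any target range. A hypothetical helper (the `rescale` name and its parameters are mine, not part of the gist) that maps an array to an arbitrary [new_min, new_max]:

```python
import numpy as np

def rescale(arr, new_min=-1.0, new_max=1.0):
    """Linearly map arr from its own [min, max] to [new_min, new_max]."""
    arr_min, arr_max = arr.min(), arr.max()
    scaled = (arr - arr_min) / float(arr_max - arr_min)   # [0, 1]
    return new_min + scaled * (new_max - new_min)

arr = np.array([10.0, 20.0, 30.0])
a = rescale(arr)               # maps to -1, 0, 1
b = rescale(arr, 0.0, 255.0)   # maps to 0, 127.5, 255
```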
@liuhh02 Nice approach, thank you for your input
Thanks to @liuhh02 and @JohanRaniseth for the work. For anyone visiting in the future: you have to normalize based on the min and max of your whole (training) dataset, not every image individually like in the code provided above (see #950). If you normalize individually, you will lose information and be unable to reverse the process later.
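A two-pass sketch of that dataset-wide idea, using the same PIL/NumPy stack as the gist. The temp-folder setup only exists to make the sketch self-contained; with a real training set you would point it at your image directory instead:

```python
import tempfile
from os import listdir
from os.path import join

import numpy as np
from PIL import Image

# Throwaway folder with two fake single-band float tifs
# standing in for a real training set.
path = tempfile.mkdtemp()
Image.fromarray(np.array([[0.0, 200.0]], dtype='float32')).save(join(path, 'a.tif'))
Image.fromarray(np.array([[100.0, 1000.0]], dtype='float32')).save(join(path, 'b.tif'))

# Pass 1: find the global min/max over the whole (training) set
global_min, global_max = np.inf, -np.inf
for filename in listdir(path):
    arr = np.array(Image.open(join(path, filename)))
    global_min = min(global_min, arr.min())
    global_max = max(global_max, arr.max())

# Pass 2: normalize every image with the SAME constants, so one
# pair of numbers inverts the mapping for the entire dataset
normalized = {}
for filename in listdir(path):
    arr = np.array(Image.open(join(path, filename)))
    normalized[filename] = -1 + 2 * (arr - global_min) / (global_max - global_min)
```

With per-dataset constants, only the image containing the global extremes actually touches -1 and 1; every other image lands strictly inside the range, which is exactly what keeps the mapping reversible.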
I also had to make another change to the tensor2im function in util.py by changing this line to just return image_numpy.
So an update for people who might come looking here. Got the PyTorch pix2pix implementation to work with a few changes. I ended up doing all image pre-processing and post-processing outside the PyTorch implementation so that I just inserted my [-1,1] normalized images and got out [-1,1] normalized output images. You can probably do this inside the pix2pix code if you want.
In util.py, the function tensor2im transforms a tensor to an image array, BUT it also rescales your image, since it assumes your images only have values from 0-255, which is usually not the case for tifs. So change the line
image_numpy = (np.transpose(image_numpy, (1, 2, 0)) + 1) / 2.0 * 255.0
to
image_numpy = np.transpose(image_numpy, (1, 2, 0))
OR put in your own rescaling/post-processing. Also remove the lines that follow it; I did the rescaling afterwards in a separate .py script which does the reverse of the normalization from @liuhh02. In the save_image function, change #Original to #Changed.

In aligned_dataset.py, your input image is put into a tensor and several transforms are applied to it. Here you only want to change the way the image is loaded, since you don't want it to be converted into RGB. So
AB = Image.open(AB_path).convert('RGB')
should be changed to
AB = Image.open(AB_path)

Lastly, in base_dataset.py you want to change the function get_transform. You want to remove the if grayscale: block. The GitHub page for transforms.Grayscale is currently down while writing this, but if I remember correctly it tries to scale a [0, 255] range image to [0, 1] grayscale, and if you have already normalized to [-1, 1] this transformation just reduces all your values to 0. You also want to make changes to the if convert: block: if your images are already normalized you need to remove the normalization here, OR insert the normalization code from @liuhh02.
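To see numerically why the unmodified tensor2im line destroys [-1, 1] tif data, compare the original and changed mappings on a toy channel-first array (this reproduces only the one affected line, not the whole pix2pix function):

```python
import numpy as np

# fake 1x2x2 network output, already normalized to [-1, 1]
image_numpy = np.array([[[-1.0, 0.0], [0.5, 1.0]]], dtype='float32')

# original pix2pix line: assumes the target range is 0-255
orig = (np.transpose(image_numpy, (1, 2, 0)) + 1) / 2.0 * 255.0

# changed line: keep the values as they are, only reorder the
# axes from (channels, H, W) to (H, W, channels)
changed = np.transpose(image_numpy, (1, 2, 0))
```

The original line stretches the values out to 0-255 (so a later inverse based on your own dataset constants would be wrong), while the changed line leaves them in [-1, 1] for your separate post-processing script.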
I think these were all the changes I made.
Best,
Johan