import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

# This script assumes that generator, discriminator, the combined GAN model,
# training_images, and noise_shape (the latent-vector length, e.g. 100) have
# all been defined earlier.

epochs = 300
batch_size = 64
loss_from_discriminator_model = []  # Collects the discriminator loss per batch
loss_from_generator_model = []      # Collects the generator loss per batch
with tf.device('/gpu:0'):
    for epoch in range(epochs):
        print(f"Currently training on Epoch {epoch+1}")

        # Loop over each full batch in the dataset.
        # Floor division (//) returns the floored quotient for both integer
        # and floating-point operands, so this yields the number of complete
        # batches that fit in the dataset.
        for i in range(training_images.shape[0] // batch_size):
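            # For example: 7 / 2 -> 3.5, while 7 // 2 -> 3 and 7.0 // 2 -> 3.0
            # (the quotient is floored; the result type follows the operands).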
            if i % 100 == 0:
                print(f"\tCurrently training on batch number {i} of {len(training_images)//batch_size}")

            # Start by sampling a batch of noise vectors from a uniform
            # distribution; the generator receives this random seed as input
            # and uses it to produce an image.
            noise = np.random.uniform(-1, 1, size=[batch_size, noise_shape])
            ''' Generate a batch of fake images with the generator network.
            The difference between predict() and predict_on_batch() shows up
            when the data you pass as x is larger than one batch:
            predict() goes through all the data batch by batch, so it
            internally splits the input and feeds one batch at a time.
            predict_on_batch() assumes the data you pass in is exactly one
            batch and feeds it to the network directly, without splitting.
            In short, predict() carries extra bookkeeping to process a
            collection of batches correctly, while predict_on_batch() is a
            lightweight alternative meant for a single batch.
            '''
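            # Shape contract under this script's assumptions (noise_shape ==
            # 100 and 64x64 RGB outputs, as the plotting code further below
            # suggests):
            #   noise.shape                             -> (batch_size, 100)
            #   generator.predict_on_batch(noise).shape -> (batch_size, 64, 64, 3)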
            gen_image = generator.predict_on_batch(noise)
            # We first sample random noise from a uniform distribution and
            # then get the generator's predictions on it. The noise variable
            # is the code equivalent of the latent vector z discussed earlier.

            # Now take the real x_train data by slicing a batch of real
            # images from the full set of training images.
            train_dataset = training_images[i*batch_size:(i+1)*batch_size]
            # Create labels. First train the discriminator network on real
            # images with real labels (all ones):
            train_labels_real = np.ones(shape=(batch_size, 1))
            discriminator.trainable = True
            d_loss_real = discriminator.train_on_batch(train_dataset, train_labels_real)

            # Then train it on the fake images with fake labels (all zeros):
            train_labels_fake = np.zeros(shape=(batch_size, 1))
            d_loss_fake = discriminator.train_on_batch(gen_image, train_labels_fake)
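            # Note: train_on_batch returns a scalar loss when the model was
            # compiled with a single loss and no extra metrics (otherwise a
            # list); the epoch summary formatting below assumes the scalar
            # case.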
            # Prepare inputs for training the whole adversarial network:
            noise = np.random.uniform(-1, 1, size=[batch_size, noise_shape])

            # An image-label vector with all values equal to 1, used to fool
            # the discriminator network:
            train_label_fake_for_gen_training = np.ones(shape=(batch_size, 1))
            discriminator.trainable = False
            ''' Now train the generator.
            To train the generator network, we train the adversarial model.
            Training the adversarial model updates the generator network only
            and keeps the discriminator network frozen; we don't train the
            discriminator here, as we have already trained it above.
            '''
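            # A minimal sketch of how the combined GAN model used below is
            # typically assembled earlier in the script (an assumption; the
            # actual definition is not shown in this gist):
            #
            #   discriminator.trainable = False   # freeze D inside the stack
            #   GAN = tf.keras.models.Sequential([generator, discriminator])
            #   GAN.compile(loss='binary_crossentropy', optimizer='adam')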
            g_loss = GAN.train_on_batch(noise, train_label_fake_for_gen_training)
            ''' In short, what happens above is:
            I train the adversarial model on the batch of noise vectors and
            real labels, where "real labels" is a vector with all values
            equal to 1. Providing this all-ones vector trains the generator
            to fool the discriminator network: the generator receives
            feedback from the discriminator network and improves itself
            accordingly.
            '''
            loss_from_discriminator_model.append(d_loss_real + d_loss_fake)
            loss_from_generator_model.append(g_loss)
        ''' A passive way to evaluate the training process: every 50 epochs,
        generate fake images and manually inspect their quality.
        These images help you decide whether to continue the training or to
        stop it early: stop if the quality of the generated images is already
        good, otherwise keep training until the model improves.
        '''
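        # Note: the evaluation noise below is drawn from N(0, 1) while the
        # training noise above is uniform in [-1, 1]; matching the two
        # distributions is usually preferable. Also, if the generator ends in
        # tanh, rescaling with (x_fake + 1) / 2 before imshow keeps pixel
        # values in [0, 1] (an assumption -- match however training_images
        # were normalized).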
        if epoch % 50 == 0:
            samples = 10
            x_fake = generator.predict(np.random.normal(loc=0, scale=1, size=(samples, 100)))
            for k in range(samples):
                plt.subplot(2, 5, k+1)
                plt.imshow(x_fake[k].reshape(64, 64, 3))
                plt.xticks([])
                plt.yticks([])
            plt.tight_layout()
            plt.show()

        print('Epoch: %d, Loss: D_real = %.3f, D_fake = %.3f, G = %.3f' % (epoch+1, d_loss_real, d_loss_fake, g_loss))
print('Training completed with all epochs')
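
# Optional follow-up: visualize the per-batch losses collected above
# (a sketch; uses only names defined in this script):
plt.figure(figsize=(8, 4))
plt.plot(loss_from_discriminator_model, label='Discriminator loss')
plt.plot(loss_from_generator_model, label='Generator loss')
plt.xlabel('Training step (batch)')
plt.ylabel('Loss')
plt.legend()
plt.show()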