Training one Autoencoder at a time in a single graph
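The two training ops below assume the full stacked autoencoder has already been assembled in a single graph. For context, here is a minimal sketch of that prelude; the layer sizes, ELU activation, He initialization, and L2 scale are hypothetical choices, not taken from the gist:

import tensorflow as tf  # TensorFlow 1.x graph-mode API

n_inputs = 28 * 28        # hypothetical: MNIST-sized inputs
n_hidden1 = 300
n_hidden2 = 150           # codings layer
n_hidden3 = n_hidden1
n_outputs = n_inputs
learning_rate = 0.01
l2_reg = 0.0001

regularizer = tf.contrib.layers.l2_regularizer(l2_reg)
initializer = tf.contrib.layers.variance_scaling_initializer()  # He initialization
activation = tf.nn.elu

X = tf.placeholder(tf.float32, shape=[None, n_inputs])

weights1 = tf.Variable(initializer([n_inputs, n_hidden1]), name="weights1")
weights2 = tf.Variable(initializer([n_hidden1, n_hidden2]), name="weights2")
weights3 = tf.Variable(initializer([n_hidden2, n_hidden3]), name="weights3")
weights4 = tf.Variable(initializer([n_hidden3, n_outputs]), name="weights4")

biases1 = tf.Variable(tf.zeros(n_hidden1), name="biases1")
biases2 = tf.Variable(tf.zeros(n_hidden2), name="biases2")
biases3 = tf.Variable(tf.zeros(n_hidden3), name="biases3")
biases4 = tf.Variable(tf.zeros(n_outputs), name="biases4")

hidden1 = activation(tf.matmul(X, weights1) + biases1)
hidden2 = activation(tf.matmul(hidden1, weights2) + biases2)
hidden3 = activation(tf.matmul(hidden2, weights3) + biases3)
outputs = tf.matmul(hidden3, weights4) + biases4

The gist's snippet then builds one training op per phase, each optimizing only part of the graph: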
optimizer = tf.train.AdamOptimizer(learning_rate)

with tf.name_scope("phase1"):
    # Phase 1: train the outer layers (hidden1 and the output layer) to
    # reconstruct the inputs directly, skipping the inner layers.
    phase1_outputs = tf.matmul(hidden1, weights4) + biases4  # bypass hidden2 and hidden3
    phase1_reconstruction_loss = tf.reduce_mean(tf.square(phase1_outputs - X))
    phase1_reg_loss = regularizer(weights1) + regularizer(weights4)
    phase1_loss = phase1_reconstruction_loss + phase1_reg_loss
    phase1_training_op = optimizer.minimize(phase1_loss)

with tf.name_scope("phase2"):
    # Phase 2: train the inner layers (hidden2 and hidden3) to reconstruct
    # the output of hidden1; var_list keeps the outer layers frozen.
    phase2_reconstruction_loss = tf.reduce_mean(tf.square(hidden3 - hidden1))
    phase2_reg_loss = regularizer(weights2) + regularizer(weights3)
    phase2_loss = phase2_reconstruction_loss + phase2_reg_loss
    train_vars = [weights2, biases2, weights3, biases3]
    phase2_training_op = optimizer.minimize(phase2_loss, var_list=train_vars)  # freeze hidden1
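
With both ops in the graph, the two phases can be run back to back in a single session. A hedged sketch of such a loop; the random stand-in data, epoch count, and batch size are placeholders, not from the gist:

import numpy as np

init = tf.global_variables_initializer()
n_epochs = 4        # hypothetical epoch count
batch_size = 150    # hypothetical batch size
X_train = np.random.rand(10000, n_inputs).astype(np.float32)  # stand-in data

with tf.Session() as sess:
    init.run()
    # Run phase 1 to convergence first, then phase 2.
    for phase_op in (phase1_training_op, phase2_training_op):
        for epoch in range(n_epochs):
            np.random.shuffle(X_train)
            for start in range(0, len(X_train), batch_size):
                X_batch = X_train[start:start + batch_size]
                sess.run(phase_op, feed_dict={X: X_batch})

Because phase 2 never updates weights1 or biases1, hidden1's activations over the training set stay fixed during that phase; computing them once and feeding the cached values directly (feed_dict={hidden1: cached_batch}) would avoid recomputing hidden1 on every phase-2 step.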