Implement sample_noise
Answer
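One possible implementation (a sketch): `torch.rand` samples uniformly from [0, 1), so rescale to get noise in [-1, 1].

```python
import torch

def sample_noise(batch_size, dim):
    # Uniform noise in [-1, 1] with shape (batch_size, dim);
    # torch.rand samples from [0, 1), so shift and scale.
    return 2 * torch.rand(batch_size, dim) - 1
```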
Look at Unflatten
Implement Discriminator
The Architecture
- Fully connected layer with input size 784 and output size 256
- LeakyReLU with alpha 0.01
- Fully connected layer with input size 256 and output size 256
- LeakyReLU with alpha 0.01
- Fully connected layer with input size 256 and output size 1
Answer
Implement Generator
The Architecture
- Fully connected layer from noise_dim to 1024
- ReLU
- Fully connected layer with input size 1024 and output size 1024
- ReLU
- Fully connected layer with input size 1024 and output size 784
- Tanh (to clip the image to the range [-1, 1])
Answer
Understand BCE Loss
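For a raw score s and a binary target y, BCE-with-logits can be written in the numerically stable form max(s, 0) - s*y + log(1 + exp(-|s|)), which avoids overflow for large |s|. A small scalar sketch:

```python
import math

def bce_with_logits(score, target):
    # Binary cross-entropy on a raw score s with target y in {0, 1}:
    # equivalent to -y*log(sigmoid(s)) - (1-y)*log(1 - sigmoid(s)),
    # rewritten as max(s, 0) - s*y + log(1 + exp(-|s|)) for stability.
    s, y = score, target
    return max(s, 0) - s * y + math.log(1 + math.exp(-abs(s)))
```

For example, an undecided discriminator (s = 0) incurs loss log 2 ≈ 0.693 regardless of the target.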
Implement Generator Loss
You can use BCE Loss
Answer
Implement Discriminator Loss
You can use BCE Loss
Answer
Implement get_optimizer
Answer
Understand Training Loop
Least Squares GAN
Least Squares GAN is a newer, more stable alternative to the original GAN loss function.
We’ll be implementing Equation (9) from the paper.
Note: when plugging in for D(x) and D(G(z)), use the raw scores output by the discriminator (scores_real and scores_fake).
Implement ls_discriminator_loss
Answer
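A sketch of the discriminator half of Equation (9): the discriminator wants real scores near 1 and fake scores near 0, penalized with squared error.

```python
import torch

def ls_discriminator_loss(scores_real, scores_fake):
    # Equation (9), discriminator side:
    # 1/2 E[(D(x) - 1)^2] + 1/2 E[D(G(z))^2]
    return (0.5 * ((scores_real - 1) ** 2).mean()
            + 0.5 * (scores_fake ** 2).mean())
```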
Implement ls_generator_loss
Answer
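A sketch of the generator half of Equation (9): the generator wants its fakes to score near 1.

```python
import torch

def ls_generator_loss(scores_fake):
    # Equation (9), generator side: 1/2 E[(D(G(z)) - 1)^2]
    return 0.5 * ((scores_fake - 1) ** 2).mean()
```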
Implement build_dc_classifier
Use a discriminator inspired by the TensorFlow MNIST tutorial, which is pretty dang efficient
- Reshape into image tensor (Use Unflatten!)
- Conv2D: 32 Filters, 5x5, Stride 1
- Leaky ReLU(alpha=0.01)
- Max Pool 2x2, Stride 2
- Conv2D: 64 Filters, 5x5, Stride 1
- Leaky ReLU(alpha=0.01)
- Max Pool 2x2, Stride 2
- Flatten
- Fully Connected with output size 4 x 4 x 64
- Leaky ReLU(alpha=0.01)
- Fully Connected with output size 1
Answer
Similarly, implement Generator
Architecture
- Fully connected with output size 1024
- ReLU
- BatchNorm
- Fully connected with output size 7 x 7 x 128
- ReLU
- BatchNorm
- Reshape into Image Tensor of shape 7, 7, 128
- Conv2D^T (Transpose): 64 filters of 4x4, stride 2, ‘same’ padding (use padding=1)
- ReLU
- BatchNorm
- Conv2D^T (Transpose): 1 filter of 4x4, stride 2, ‘same’ padding (use padding=1)
- TanH
- You should now have a 28x28x1 image; reshape it back into a 784-dimensional vector
Answer