The aim of this assignment is to discover GANs, understand how they are implemented, and then explore one specific GAN architecture that allows us to perform image-to-image translation (which corresponds to the picture that you can see above this text!).
Before starting the exploration of the world of GANs, here is what students should do and hand in for this assignment:
* In the "tutorial" parts of this assignment, which focus on explaining new concepts, you will find <font color='red'>**questions**</font> that aim to test your understanding of those concepts.
* In some of the code cells, you will have to complete the code; a "TO DO" comment explains what you should implement.
# Part1: DC-GAN
In this part, we aim to learn and understand the basic concepts of **Generative Adversarial Networks** through a DCGAN, and to generate new celebrities with the learned network after showing it pictures of real celebrities. For this purpose, please study the tutorial here: https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html
## Work to do
Now we want to generate handwritten digits using the MNIST dataset. It is available within the torchvision package (https://pytorch.org/vision/stable/generated/torchvision.datasets.MNIST.html#torchvision.datasets.MNIST).
Please re-train the DCGAN and display some automatically generated handwritten digits.
"""
# TO DO: your code here to adapt the code from the tutorial to experiment on the MNIST dataset
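# A possible starting point (a sketch, not the official solution): the main changes
# with respect to the DCGAN tutorial are the dataset and the number of channels
# (MNIST is grayscale, so nc = 1 in both the Generator and the Discriminator).
# The names below (image_size, batch_size, nc) follow the tutorial's conventions
# and are assumptions about how your own code is organized.
import torch
import torchvision.datasets as dset
import torchvision.transforms as transforms

image_size = 64   # the tutorial's networks expect 64x64 inputs
batch_size = 128
nc = 1            # 1 channel instead of 3

dataset = dset.MNIST(
    root="./data",
    train=True,
    download=True,
    transform=transforms.Compose([
        transforms.Resize(image_size),
        transforms.ToTensor(),
        transforms.Normalize((0.5,), (0.5,)),
    ]),
)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=True)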
"""# Part2: Conditional GAN (cGAN)
Let's take the example of the dataset described in the next picture.

We have a picture of a map (from Google Maps) and we want to create an image of what the satellite view may look like.
As we are not only trying to generate a random picture but to learn a mapping from one picture to another, we cannot use the standard GAN architecture. We will therefore use a cGAN.
A cGAN is a supervised GAN that aims at mapping a label picture to a real one, or a real picture to a label one. As you can see in the diagram below, the discriminator takes a pair of images as input and tries to predict whether the pair was generated or not. The generator does not only generate an image from noise: it also uses an input image (label or real) to generate another one (real or label).

### Generator
In the cGAN architecture, the generator chosen is a U-Net.
A U-Net takes as input an image, and outputs another image.
It can be divided into 2 subparts: an encoder and a decoder.
* The encoder takes the input image and reduces its dimension to encode the main features into a vector.
* The decoder takes this vector and maps the stored features back into an image.
A U-Net architecture differs from a classic encoder-decoder in that every layer of the decoder takes as input both the previous decoded output and the output of the encoder layer at the same level. This allows the decoder to use the low-frequency information encoded during the contracting path as well as the high-frequency details of the original picture.
The encoder takes as input a colored picture (3 channels: RGB) and passes it through a series of convolution layers to encode the features of the picture. The decoder then reconstructs an image using transposed convolution layers; each of these layers takes as input the previous decoded vector AND the encoded features of the same level.
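To make the skip connection concrete, here is a minimal sketch of a decoder ("up") block that concatenates the encoder output of the same level with the upsampled features. It is illustrative only: the class name and the exact layer choices are assumptions, not the notebook's actual implementation.
```
import torch
import torch.nn as nn

class UpBlock(nn.Module):
    # Illustrative decoder block: upsample, then concatenate with the skip connection.
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_channels, out_channels, kernel_size=4, stride=2, padding=1)
        self.conv = nn.Sequential(
            nn.Conv2d(out_channels * 2, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)                   # upsample the decoded features
        x = torch.cat([x, skip], dim=1)  # concatenate with the encoder output of the same level
        return self.conv(x)
```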
Now, let's create our cGAN to generate facades from a template image. For this purpose, we will use the "Facade" dataset available at http://cmp.felk.cvut.cz/~tylecr1/facade/.
Let's first create a few classes describing the layers we will use in the U-Net.
"""
# Importing all the libraries needed
import matplotlib.pyplot as plt
import imageio
import glob
import random
import os
import numpy as np
import math
import itertools
import time
import datetime
import cv2

from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset, DataLoader
import torchvision.transforms as transforms
from torchvision.utils import save_image, make_grid
from torchvision import datasets
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch
# code adapted from https://github.com/milesial/Pytorch-UNet/blob/master/unet/unet_parts.py
        # At this stage x8 is our encoded vector, we will now decode it
        x = self.up7(x8, x7)
        x = self.up6(x, x6)
        x = self.up5(x, x5)
        x = self.up4(x, x4)
        x = self.up3(x, x3)
        x = self.up2(x, x2)
        x = self.up1(x, x1)
        x = self.outc(x)
        return x
# We take images that have 3 channels (RGB) as input and output an image that also has 3 channels (RGB)
generator = U_Net(3, 3)

# Check that the architecture is as expected
generator
"""You should now have a working U-Net.
<font color='red'>**Question 1**</font>
Knowing the input and output images will be 256x256, what will be the dimension of the encoded vector x8 ?
<font color='red'>**Question 2**</font>
As you can see, the U-Net has an encoder-decoder architecture with skip connections. Explain why it works better than a traditional encoder-decoder.
### Discriminator
In the cGAN architecture, the chosen discriminator is a PatchGAN: a convolutional discriminator that produces a map of the input pictures in which each pixel represents a patch of size NxN of the input.
The size N is given by the depth of the net, according to the following table:
| Number of layers | N |
| ---- | ---- |
| 1 | 16 |
| 2 | 34 |
| 3 | 70 |
| 4 | 142 |
| 5 | 286 |
| 6 | 574 |
Here, the "number of layers" means the number of layers with `kernel=(4,4)`, `padding=(1,1)` and `stride=(2,2)`. These layers are always followed by 2 layers with `kernel=(4,4)`, `padding=(1,1)` and `stride=(1,1)`.
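The values of N in this table are simply the receptive field of one output pixel. If you are curious where they come from, here is a small illustrative computation that walks backwards through the layers:
```
def patch_size(n_strided_layers, kernel=4, stride=2):
    # receptive field of one PatchGAN output pixel, computed backwards
    rf = 1
    for _ in range(2):                   # the two final stride-1 layers
        rf = rf * 1 + (kernel - 1)
    for _ in range(n_strided_layers):    # the stride-2 layers
        rf = rf * stride + (kernel - stride)
    return rf

print([patch_size(n) for n in range(1, 7)])  # [16, 34, 70, 142, 286, 574]
```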
In our case we are going to create a 70x70 PatchGAN, so the architecture will be as follows:
```
1. C64 - K4, P1, S2
2. C128 - K4, P1, S2
3. C256 - K4, P1, S2
4. C512 - K4, P1, S1
5. C1 - K4, P1, S1 (output)
```
where Ck denotes a convolution block with k filters, Kk a kernel of size k, Pk a padding of size k, and Sk a stride of k.
*Note:* For the first layer, we do not use batchnorm.
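As an illustration of this notation (a sketch only, not necessarily the expected implementation), a block such as `C128 - K4, P1, S2` taking 64 input channels could be written in PyTorch as:
```
import torch.nn as nn

nn.Sequential(
    nn.Conv2d(64, 128, kernel_size=4, padding=1, stride=2),
    nn.BatchNorm2d(128),                # omitted for the first (C64) block, as noted above
    nn.LeakyReLU(0.2, inplace=True),    # 0.2 is the usual PatchGAN slope; treat it as an assumption
)
```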
<font color='red'>**Question 3**</font>
Knowing that the input images will be 256x256 with 3 channels each, how many parameters are there to learn?
"""
class PatchGAN(nn.Module):
    def __init__(self, n_channels, n_classes):
        super(PatchGAN, self).__init__()
        # TODO:
        # create the first 4 layers, named conv1 to conv4
        self.conv1 =
        self.conv2 =
        self.conv3 =
        self.conv4 =
        # output layer
        self.out = out_block(512, n_classes)

    def forward(self, x1, x2):
        x = torch.cat([x2, x1], dim=1)
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.conv3(x)
        x = self.conv4(x)
        x = self.out(x)
        return x
# We have 6 input channels as we concatenate 2 images (with 3 channels each)
discriminator = PatchGAN(6, 1)
discriminator
"""You should now have a working discriminator.
### Loss functions
As we have seen in the choice of the various architectures for this GAN, the issue is to map both low and high frequencies.
To tackle this problem, this GAN relies on the architecture to map the high frequencies (U-Net + PatchGAN) and on the loss function to learn the low-frequency features. The global loss function is indeed made of 2 parts:
* the first part, which maps high frequencies, tries to optimize the mean squared error of the GAN;
* the second part, which maps low frequencies, minimizes the $\mathcal{L}_1$ norm between the generated picture and the real one.
So the loss can be defined as $$ G^* = \arg\ \underset{G}{\min}\ \underset{D}{\max}\ \mathcal{L}_{cGAN}(G,D) + \lambda\, \mathcal{L}_{1}(G)$$
"""
# Loss functions
criterion_GAN = torch.nn.MSELoss()
criterion_pixelwise = torch.nn.L1Loss()

# Loss weight of the L1 pixel-wise loss between the translated image and the real image
lambda_pixel = 100
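# Illustration (with dummy tensors) of how these two criteria are typically combined
# into the generator loss. In the real training loop, fake_B would come from the
# generator and pred_fake from the discriminator; the names and the 30x30 patch map
# size are assumptions used only for this example.
_fake_B = torch.rand(1, 3, 256, 256)     # generated image
_real_B = torch.rand(1, 3, 256, 256)     # target image
_pred_fake = torch.rand(1, 1, 30, 30)    # PatchGAN output on the generated pair
_valid = torch.ones(1, 1, 30, 30)        # "real" label map the generator tries to reach
_loss_G = criterion_GAN(_pred_fake, _valid) + lambda_pixel * criterion_pixelwise(_fake_B, _real_B)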
"""### Training and evaluating models"""
# parameters
epoch = 0                 # epoch to start training from
n_epoch = 200             # number of epochs of training
batch_size = 10           # size of the batches
lr = 0.0002               # adam: learning rate
b1 = 0.5                  # adam: decay of first order momentum of gradient
b2 = 0.999                # adam: decay of second order momentum of gradient
decay_epoch = 100         # epoch from which to start lr decay
img_height = 256          # size of image height
img_width = 256           # size of image width
channels = 3              # number of image channels
sample_interval = 500     # interval between sampling of images from generators
checkpoint_interval = -1  # interval between model checkpoints

cuda = True if torch.cuda.is_available() else False  # do you have cuda?
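# Typical optimizer setup with the hyper-parameters above (a sketch, not necessarily
# how the training section of the notebook defines them); `generator` and
# `discriminator` are the networks built earlier.
optimizer_G = torch.optim.Adam(generator.parameters(), lr=lr, betas=(b1, b2))
optimizer_D = torch.optim.Adam(discriminator.parameters(), lr=lr, betas=(b1, b2))
Tensor = torch.cuda.FloatTensor if cuda else torch.FloatTensor  # used below to move batches to the GPU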
"""Download the dataset."""
import urllib.request
from tqdm import tqdm
import os
import zipfile

def download_hook(t):
    """Wraps tqdm instance.

    Don't forget to close() or __exit__() the tqdm instance once you're done
    with it (easiest using `with` syntax).
    """
print('There isn\'t a training available with this number of epochs')
load_model(epoch=200)
# switching mode
generator.eval()

# show a sample evaluation image on the training set
image, mask = next(iter(dataloader))
output = generator(mask.type(Tensor))
output = output.view(16, 3, 256, 256)
output = output.cpu().detach()
for i in range(8):
    image_plot = reverse_transform(image[i])
    output_plot = reverse_transform(output[i])
    mask_plot = reverse_transform(mask[i])
    plot2x3Array(mask_plot, image_plot, output_plot)

# show a sample evaluation image on the validation dataset
image, mask = next(iter(val_dataloader))
output = generator(mask.type(Tensor))
output = output.view(8, 3, 256, 256)
output = output.cpu().detach()
for i in range(8):
    image_plot = reverse_transform(image[i])
    output_plot = reverse_transform(output[i])
    mask_plot = reverse_transform(mask[i])
    plot2x3Array(mask_plot, image_plot, output_plot)
"""<font color='red'>**Question 4**</font>
Compare the results obtained after 100 and 200 epochs of training.
"""
# TO DO: your code here to load and evaluate a few samples
# from a model trained for 100 epochs
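# A possible sketch (left commented out; it reuses the helpers defined above and
# should be adapted to your own code):
# load_model(epoch=100)
# generator.eval()
# image, mask = next(iter(val_dataloader))
# output = generator(mask.type(Tensor)).cpu().detach()
# for i in range(8):
#     plot2x3Array(reverse_transform(mask[i]),
#                  reverse_transform(image[i]),
#                  reverse_transform(output[i]))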
# And finally:
if cuda:
    torch.cuda.empty_cache()
"""# How to submit your Work ?
Your work should be uploaded to the Moodle section "Devoir 2 - GAN et Conditional GAN" within 3 weeks. It can be either a notebook containing your code and a description of your work, experiments and results, or a ".zip" file containing your report in PDF format (describing your work, experiments and results) together with your code (".py" Python files).
We recommend using the notebook (.ipynb), but the Python script (.py) is also provided if that is more convenient for you.
- **Student**: Pierre Muller
- **Teacher**: Quentin Gallouédec
## Purpose
The aim of this assignment is to discover GANs, understand how they are implemented, and then explore one specific GAN architecture that allows us to perform image-to-image translation (which corresponds to the picture that you can see above this text!). In **[BE2_GAN_and_cGAN.ipynb](BE2_GAN_and_cGAN.ipynb)**, we first use a DCGAN model to generate handwritten digits from the MNIST dataset. Then, we use a cGAN (a supervised GAN aiming at mapping a label picture to a real one, or a real picture to a label one) and its generator/discriminator architecture to generate images from the **["Facade"](http://cmp.felk.cvut.cz/~tylecr1/facade/)** dataset.
Check out the Jupyter notebook to learn about this project in detail: **[BE2_GAN_and_cGAN.ipynb](BE2_GAN_and_cGAN.ipynb)**.
# How to submit your work?
This work must be done individually. The expected output is a repository named gan-cgan on https://gitlab.ec-lyon.fr. It must contain your notebook (or Python files) and a README.md file that briefly explains the successive steps of the project. The last commit is due before 11:59 pm on Wednesday, March 29, 2023; subsequent commits will not be considered.