# GAN & cGAN tutorial
MSO 3.4 Machine Learning

[![](https://img.shields.io/badge/License-MIT-blue.svg)](https://rfarssi.mit-license.org/)
[![Language](https://img.shields.io/badge/language-Python-red.svg)](https://www.python.org/)

We recommend using the notebook (.ipynb), but the Python script (.py) is also provided if you find it more convenient.
# How to submit your work?
This work must be done individually. The expected output is a repository named gan-cgan on https://gitlab.ec-lyon.fr. It must contain your notebook (or Python files) and a README.md file that briefly explains the successive steps of the project. The last commit is due before 11:59 pm on Wednesday, March 29, 2023. Subsequent commits will not be considered.
# Description
The aim of this assignment is to discover GANs, understand how they are implemented, and then explore one specific GAN architecture that allows us to perform image-to-image translation. The assignment consists of two parts:
- The first part focuses on understanding the fundamental concepts of Generative Adversarial Networks (GANs) through a DCGAN example.
- In the second part, we will implement and train a conditional GAN (cGAN) to generate facades.
# Part 1: DCGAN
In this section, we will dive deeper into Generative Adversarial Networks and explore how they can be used to generate new images. We will be using a DCGAN (Deep Convolutional Generative Adversarial Network) to generate new images of handwritten digits using the [MNIST](https://pytorch.org/vision/stable/generated/torchvision.datasets.MNIST.html) dataset.
To complete this task, we retrained the DCGAN and generated samples of handwritten digits by following the PyTorch [DCGAN tutorial](https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html).
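For reference, below is a minimal sketch of the two networks used in this part (adapted from the tutorial's architecture; the 64×64 image size, single channel `nc = 1`, and the hyperparameter values are assumptions, not necessarily the exact configuration used here):

```python
import torch
import torch.nn as nn

# Hyperparameters (assumed values): latent size, generator/discriminator feature
# maps, and number of image channels (1 for MNIST, resized to 64x64).
nz, ngf, ndf, nc = 100, 64, 64, 1

# Generator: latent vector (nz, 1, 1) -> image (nc, 64, 64)
generator = nn.Sequential(
    nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False), nn.BatchNorm2d(ngf * 8), nn.ReLU(True),
    nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False), nn.BatchNorm2d(ngf * 4), nn.ReLU(True),
    nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False), nn.BatchNorm2d(ngf * 2), nn.ReLU(True),
    nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False), nn.BatchNorm2d(ngf), nn.ReLU(True),
    nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False), nn.Tanh(),
)

# Discriminator: image (nc, 64, 64) -> probability that the image is real
discriminator = nn.Sequential(
    nn.Conv2d(nc, ndf, 4, 2, 1, bias=False), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False), nn.BatchNorm2d(ndf * 2), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False), nn.BatchNorm2d(ndf * 4), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False), nn.BatchNorm2d(ndf * 8), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False), nn.Sigmoid(),
)

fake = generator(torch.randn(16, nz, 1, 1))   # 16 generated 64x64 digits
score = discriminator(fake).view(-1)          # one real/fake score per image
```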
## Losses during training of the generator and discriminator
![DCGAN_loss](results/loss_g_d.png)
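These curves are produced by the standard adversarial training loop from the tutorial. As a rough sketch (reusing `generator`, `discriminator`, and `nz` from the snippet above, and assuming a `dataloader` that yields MNIST batches resized to 64×64), the two losses are computed as follows:

```python
import torch
import torch.nn as nn
import torch.optim as optim

criterion = nn.BCELoss()
opt_g = optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))

for real, _ in dataloader:                    # assumed: MNIST batches resized to 64x64
    b = real.size(0)
    ones, zeros = torch.ones(b), torch.zeros(b)

    # Discriminator step: push real images towards label 1, generated images towards 0.
    opt_d.zero_grad()
    loss_d_real = criterion(discriminator(real).view(-1), ones)
    fake = generator(torch.randn(b, nz, 1, 1))
    loss_d_fake = criterion(discriminator(fake.detach()).view(-1), zeros)
    loss_d = loss_d_real + loss_d_fake        # discriminator loss
    loss_d.backward()
    opt_d.step()

    # Generator step: try to make the discriminator classify fakes as real.
    opt_g.zero_grad()
    loss_g = criterion(discriminator(fake).view(-1), ones)   # generator loss
    loss_g.backward()
    opt_g.step()
```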
## Visual Comparison between Real and Generated Images
![DCGAN_output](results/output_mnist.png)
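A comparison grid like the one above can be produced with `torchvision.utils.make_grid`; here is a small sketch (reusing `generator` and `nz` from above, and assuming `real` holds a batch of real MNIST images):

```python
import torch
import matplotlib.pyplot as plt
import torchvision.utils as vutils

# Plot a batch of real MNIST digits next to a batch of generated ones.
with torch.no_grad():
    fake = generator(torch.randn(64, nz, 1, 1)).cpu()

fig, axes = plt.subplots(1, 2, figsize=(10, 5))
for ax, batch, title in zip(axes, [real[:64].cpu(), fake], ["Real images", "Generated images"]):
    grid = vutils.make_grid(batch, padding=2, normalize=True)
    ax.imshow(grid.permute(1, 2, 0).numpy())
    ax.set_title(title)
    ax.axis("off")
plt.show()
```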
# Part 2: Conditional GAN (cGAN)
This section involves training a cGAN on building facade images from the [CMP Facade Database](https://cmp.felk.cvut.cz/~tylecr1/facade/): the generator takes a segmentation mask from the database and produces a realistic facade image, while the discriminator learns to distinguish real facades from generated ones. Here are some of the images obtained with this process.
- **100 epochs**
![100epochs](results/100epochs.png)
- **200 epochs**
![200epochs](results/200epochs.png)
Training on the dataset for a larger number of epochs is expected to further improve the quality of the generated images.
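For context, the conditional setup differs from Part 1 mainly in that both networks see the input mask: the discriminator scores (mask, image) pairs, and the generator is typically also penalized with an L1 reconstruction term, as in pix2pix. The sketch below shows one such training step; `gen`, `disc`, `mask`, and `real` are placeholder names, and the L1 weight is an assumption rather than the exact value used here:

```python
import torch
import torch.nn as nn

# One pix2pix-style cGAN training step. `gen` maps a facade mask to an image,
# `disc` outputs a raw (logit) score for a (mask, image) pair concatenated on
# the channel axis, and `mask`/`real` are an aligned batch from the dataset.
bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()
LAMBDA_L1 = 100.0   # weight of the L1 reconstruction term (assumed, as in pix2pix)

def cgan_step(gen, disc, opt_g, opt_d, mask, real):
    fake = gen(mask)

    # Discriminator step: real (mask, facade) pairs -> 1, generated pairs -> 0.
    opt_d.zero_grad()
    d_real = disc(torch.cat([mask, real], dim=1))
    d_fake = disc(torch.cat([mask, fake.detach()], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    loss_d.backward()
    opt_d.step()

    # Generator step: fool the discriminator while staying close to the target facade.
    opt_g.zero_grad()
    d_fake_for_g = disc(torch.cat([mask, fake], dim=1))
    loss_g = bce(d_fake_for_g, torch.ones_like(d_fake_for_g)) + LAMBDA_L1 * l1(fake, real)
    loss_g.backward()
    opt_g.step()
    return loss_g.item(), loss_d.item()
```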
# About the Project
## Project status
[![Project Status: Active – The project has reached a stable, usable state and is being actively developed.](https://www.repostatus.org/badges/latest/active.svg)](https://gitlab.ec-lyon.fr/rfarssi/mso3_4-be2_cgan)
## Contribution
Any contributions that improve the quality of this project are welcome [here](https://gitlab.ec-lyon.fr/rfarssi/mso3_4-be2_cgan/-/issues).
## References
- [GAN & cGAN tutorial](https://gitlab.ec-lyon.fr/edelland/mso3_4-be2_cgan) by _Emmanuel Dellandrea_
## License
[![](https://img.shields.io/badge/License-MIT-blue.svg)](https://rfarssi.mit-license.org/) This project is released under the MIT License.
## Author
[© Rahma FARSSI](https://gitlab.ec-lyon.fr/rfarssi)
<a href="mailto:rahma.farssi@master.ec-lyon.fr?body=Hi%20Rahma">📧</a>