MIT License
Copyright (c) 2022 Samer KHEDHRI
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
# GAN & cGAN tutorial.
We recommand to use the notebook (.ipynb) but the Python script (.py) is also provided if more convenient for you.
Project developed by :
* Samer Khedhri
# How to submit your Work ?
this project is about Generative Adversarial Networks (GANs) suggested in research papers
GANs, or Generative Adversarial Networks, are a type of artificial neural network. The network consists of two main components: a generator and a discriminator.
This work must be done individually. The expected output is a repository named gan-cgan on https://gitlab.ec-lyon.fr. It must contain your notebook (or python files) and a README.md file that explains briefly the successive steps of the project. The last commit is due before 11:59 pm on Wednesday, March 29, 2023. Subsequent commits will not be considered.
\ No newline at end of file
The generator is responsible for creating synthetic data, such as images, sounds, or even text, that resembles the real data. The discriminator, on the other hand, is responsible for determining whether the generated data is real or fake.
The two components are trained together in a "game-like" manner, where the generator is trying to create more realistic data, and the discriminator is trying to correctly classify the data as real or fake. This competition between the two components results in the generator improving over time and producing more realistic data.
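To show how this adversarial game translates into code, here is a minimal sketch of one training step in PyTorch. The `generator`, `discriminator`, optimizers, and the BCE-based losses are illustrative assumptions; the notebook's exact implementation may differ.

```python
import torch
import torch.nn as nn

# Minimal sketch of one adversarial training step (BCE-based GAN).
# Assumption: discriminator(x) returns a probability in (0, 1), shape (batch, 1).
def train_step(generator, discriminator, real_batch, opt_g, opt_d, latent_dim=100):
    criterion = nn.BCELoss()
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1) Discriminator step: label real data as real, generated data as fake.
    z = torch.randn(batch_size, latent_dim)
    fake_batch = generator(z).detach()  # detach: don't update G during D's step
    loss_d = (criterion(discriminator(real_batch), real_labels)
              + criterion(discriminator(fake_batch), fake_labels))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # 2) Generator step: try to fool the discriminator into labelling fakes as real.
    z = torch.randn(batch_size, latent_dim)
    loss_g = criterion(discriminator(generator(z)), real_labels)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

    return loss_d.item(), loss_g.item()
```

In practice these two updates alternate on every batch, which is what produces the paired generator/discriminator loss curves shown later in this README.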
## Getting started
This project is implemented as part of the MSO_3_4 Apprentissage automatique (machine learning) practical work. <br>
It mainly consists of two parts, both of which are in the same file (notebook). <br>
Make sure that you use Python 3.10 to avoid any compatibility problems.<br>
## Objective
Discover GANs, understand how they are implemented, and then explore one specific GAN architecture that allows us to perform image-to-image translation.
## Data
In this project, we used two datasets: <br>
**MNIST Dataset:** The Modified National Institute of Standards and Technology database is a large collection of handwritten digits that is commonly used for training various image processing systems. It is also widely used for training and testing in the field of machine learning.<br>
![MNIST dataset](images/MINST.jpg)
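For reference, MNIST can be loaded directly with torchvision; this is a minimal sketch (the normalization to [-1, 1] assumes a Tanh-output generator, as is common for DCGANs, and is not necessarily what the notebook does):

```python
import torch
from torchvision import datasets, transforms

# Load MNIST, normalizing pixel values to [-1, 1] to match a Tanh generator output.
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5,), (0.5,)),
])
train_set = datasets.MNIST(root="data", train=True, download=True, transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)
```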
**CMP Facade Dataset:** A dataset of facade images assembled at the Center for Machine Perception. It includes 606 rectified, manually annotated images of facades from various sources; the facades come from different cities around the world and cover diverse architectural styles.<br>
![facade dataset](images/facade_data.jpg)
## DC-GAN
In this part, we dive into an introduction to DCGANs through an example. We train a generative adversarial network (GAN) to generate new handwritten digits after showing it pictures of many real digits from the MNIST dataset.<br>
A DCGAN is a direct extension of the GAN, except that it explicitly uses convolutional and convolutional-transpose layers in the discriminator and generator, respectively.
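To make the "convolutional-transpose" point concrete, here is a minimal sketch of a DCGAN-style generator for 28x28 MNIST digits. The layer sizes and channel counts are illustrative assumptions, not necessarily those used in the notebook.

```python
import torch
import torch.nn as nn

# Illustrative DCGAN generator: upsamples a latent vector into a 28x28
# grayscale image using transposed convolutions.
class Generator(nn.Module):
    def __init__(self, latent_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            # latent_dim x 1 x 1 -> 128 x 7 x 7
            nn.ConvTranspose2d(latent_dim, 128, kernel_size=7, stride=1, padding=0),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            # 128 x 7 x 7 -> 64 x 14 x 14
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            # 64 x 14 x 14 -> 1 x 28 x 28
            nn.ConvTranspose2d(64, 1, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),  # outputs in [-1, 1], matching normalized training images
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

# Example: generate a batch of 16 fake digits from random noise.
g = Generator()
fake = g(torch.randn(16, 100, 1, 1))
print(fake.shape)  # torch.Size([16, 1, 28, 28])
```

Stacking transposed convolutions like this is what lets the generator turn a low-dimensional noise vector into a full-resolution image.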
After implementing and testing the model, I obtained the following result plots:
* The loss:
![Loss](images/loss.png)
* The generated images:
![fake images](images/real_fake.png)
## Conditional GAN (cGAN)
cGAN stands for Conditional Generative Adversarial Network. It is a type of deep learning model that can generate images, audio, and other forms of data.
The cGAN architecture is based on the Generative Adversarial Network (GAN) architecture, which consists of two neural networks: a generator and a discriminator.
In a cGAN, the generator is conditioned on some input data, which allows it to generate data that is tailored to a specific input.
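To make the conditioning concrete in the image-to-image setting, here is a minimal sketch of a pix2pix-style discriminator, conditioned by concatenating the input image with the candidate output along the channel dimension. Channel counts and layer sizes are illustrative assumptions, not necessarily those of the notebook.

```python
import torch
import torch.nn as nn

# Sketch of cGAN conditioning for image-to-image translation: the discriminator
# sees the condition (input image) concatenated with either the real target or
# the generator's output, and scores local patches as real or fake.
class PatchDiscriminator(nn.Module):
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            # condition (3 ch) + candidate image (3 ch) = 6 input channels
            nn.Conv2d(in_channels * 2, 64, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, kernel_size=4, stride=1, padding=1),  # per-patch scores
        )

    def forward(self, condition: torch.Tensor, candidate: torch.Tensor) -> torch.Tensor:
        # The conditioning is the channel-wise concatenation itself.
        return self.net(torch.cat([condition, candidate], dim=1))

d = PatchDiscriminator()
cond = torch.randn(1, 3, 256, 256)  # e.g. a facade label map
img = torch.randn(1, 3, 256, 256)   # real photo or generated output
print(d(cond, img).shape)           # a map of real/fake scores, one per patch
```

The generator is conditioned in the same spirit: instead of noise alone, it takes the input image (e.g. a facade label map) and learns to translate it into the corresponding photo.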
After implementing the generator and the discriminator, testing the model on the validation dataset gave the following results:
* Training loss of the generator and discriminator:
![Training loss](images/loss_2.png)
* Model trained for 100 epochs:
![image generated](images/100_val.png)
* Model trained for 200 epochs:
![image generated](images/200_val.png)
All the code and results are provided in the notebook BE_GAN_and_cGAN.ipynb.
## Libraries
To test this work, you need to install the following Python libraries:
* torch
* numpy
* matplotlib
* IPython
* imageio
* opencv-python
## License
MIT