From 350c6b566f19e301aa4dd7e4ea1af13470293a43 Mon Sep 17 00:00:00 2001
From: Samer-kh <samer.elkhidri@ensi-uma.tn>
Date: Wed, 22 Mar 2023 17:26:21 +0100
Subject: [PATCH] correcting README

---
 README.md | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/README.md b/README.md
index f1ccd01..e64ed2a 100644
--- a/README.md
+++ b/README.md
@@ -25,11 +25,11 @@ Discover GANs, understand how they are implemented and then explore one specific
 
 In this project, we used two datasets : <br>
 **MNIST Dataset :** The Modified National Institute of Standards and Technology database is a large dataset of handwritten digits that is commonly used for training various image processing systems. The database is also widely used for training and testing in the field of machine learning.<br>
-
+
 
 **CMP Facade Dataset :** A dataset of facade images assembled at the Center for Machine Perception; it includes 606 manually annotated, rectified images of facades from various sources. The facades come from different cities around the world and represent diverse architectural styles.<br>
-
+
 
 
 ## DC-GAN
@@ -41,11 +41,11 @@ After implementing and testing the model, I obtained the following result plots
 
 * The Loss :
-
+
 
 * The images generated :
-
+
 
 
 # Conditional GAN (cGAN) :
@@ -55,12 +55,12 @@ In cGAN, the generator is conditioned on some input data, which allows it to gen
 
 After implementing the generator and the discriminator, testing the model on the validation dataset gave the following results:
 
-* training loss of the generator and discriminator : 
-
+* training loss of the generator and discriminator :
+
 * Model trained for 100 epochs :
-
+
 * Model trained for 200 epochs :
-
+
 
 All the code and results are provided in the notebook BE_GAN_and_cGAN.ipynb
 
 # Libraries
--
GitLab