diff --git a/README.md b/README.md
index f1ccd0138b0d50026a9276c4895a19448f57346d..e64ed2a732767b5b315a36ea413c706cf1ece1d7 100644
--- a/README.md
+++ b/README.md
@@ -25,11 +25,11 @@ Discover GANs, understand how they are implemented and then explore one specific
 For this project, we used two datasets: <br>
 **MNIST Dataset:** The Modified National Institute of Standards and Technology database is a large dataset of handwritten digits that is commonly used for training various image processing systems. The database is also widely used for training and testing in the field of machine learning.<br>
-
+
 **CMP Facade Dataset:** A dataset of facade images assembled at the Center for Machine Perception, comprising 606 rectified facade images from various sources, all manually annotated. The facades come from different cities around the world and span diverse architectural styles.<br>
-
+
 ## DC-GAN
@@ -41,11 +41,11 @@ After implementing and testing the model, I obtained the following result plots
 * The loss:
-
+
 * The images generated:
-
+
 # Conditional GAN (cGAN)
@@ -55,12 +55,12 @@ In cGAN, the generator is conditioned on some input data, which allows it to gen
 After implementing the generator and the discriminator, testing the model on the validation dataset gave the following results:
-* training loss of the generator and discriminator :
+* Training loss of the generator and discriminator:
+
 * Model trained for 100 epochs:
-
+
 * Model trained for 200 epochs:
-
+
 All the code and results are provided in the notebook BE_GAN_and_cGAN.ipynb
 # Libraries
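
The README above notes that in a cGAN the generator is conditioned on extra input (here, the class label). A minimal NumPy sketch of that conditioning idea — not the notebook's actual implementation; the layer sizes, weights, and function names below are illustrative assumptions — concatenates the noise vector with a one-hot label before the generator's forward pass, so each sample is steered toward a chosen digit class:

```python
import numpy as np

def one_hot(labels, num_classes):
    """Encode integer class labels as one-hot row vectors."""
    out = np.zeros((len(labels), num_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

def cgan_generator_forward(z, labels, W1, W2, num_classes=10):
    """Toy cGAN generator: condition on the label by concatenating it
    with the noise vector, then apply a two-layer tanh MLP."""
    x = np.concatenate([z, one_hot(labels, num_classes)], axis=1)
    h = np.tanh(x @ W1)      # hidden layer
    return np.tanh(h @ W2)   # fake "image", values in [-1, 1]

rng = np.random.default_rng(0)
z = rng.standard_normal((4, 100))            # batch of 4 noise vectors
labels = np.array([0, 3, 3, 7])              # desired digit classes
W1 = rng.standard_normal((110, 128)) * 0.1   # 100 noise dims + 10 label dims
W2 = rng.standard_normal((128, 784)) * 0.1   # 28x28 flattened output
fake = cgan_generator_forward(z, labels, W1, W2)
print(fake.shape)  # (4, 784)
```

In a real implementation (e.g. the notebook's PyTorch/Keras model) the same concatenation trick is applied in both the generator and the discriminator, which is what lets the trained generator produce a digit of a requested class.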