From d57b1ee8fa77d728d2a818dca0ea8c20ba55452b Mon Sep 17 00:00:00 2001
From: Benyahia Mohammed Oussama <mohammed.benyahia@etu.ec-lyon.fr>
Date: Wed, 2 Apr 2025 09:39:44 +0000
Subject: [PATCH] Replace BE2_GAN_and_Diffusion.ipynb

---
 BE2_GAN_and_Diffusion.ipynb | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/BE2_GAN_and_Diffusion.ipynb b/BE2_GAN_and_Diffusion.ipynb
index 3d20895..9c085c6 100644
--- a/BE2_GAN_and_Diffusion.ipynb
+++ b/BE2_GAN_and_Diffusion.ipynb
@@ -4015,8 +4015,8 @@
         "id": "Rg48ebo6zJ28"
       },
       "source": [
-        "## **1. UNet in cGAN Generator**\n",
-        "This UNet is used as the generator in a Conditional GAN (cGAN), typically for image-to-image translation tasks. The key characteristics are:\n",
+        "## **1. UNet Model**\n",
+        "This UNet is typically used for image-to-image translation tasks. Its key characteristics are:\n",
         "\n",
         "- **Encoder-Decoder Structure**: Uses downsampling (down1 to down7) with **Conv2D + BatchNorm + LeakyReLU** layers and upsampling (up7 to up1) with **ConvTranspose2D + BatchNorm + ReLU**.\n",
         "- **Skip Connections**: Each downsampling layer has a corresponding upsampling layer that concatenates feature maps (e.g., `up6` receives outputs from `down6`).\n",
@@ -4033,7 +4033,7 @@
         "- **GroupNorm Instead of BatchNorm**: More stable for diffusion-based models.\n",
         "\n",
         "## **Table of Differences**\n",
-        "| Feature | cGAN UNet | Diffusion UNet (DDPM) |\n",
+        "| Feature | UNet Model | Diffusion UNet2DModel (DDPM) |\n",
         "|---------|-------------------------|-----------------------|\n",
         "| **Task** | Image-to-image translation | Image denoising (diffusion) |\n",
         "| **Downsampling** | Strided Conv2D + BatchNorm + LeakyReLU | ResNet Blocks + GroupNorm + SiLU |\n",
@@ -4044,8 +4044,8 @@
         "| **Time Embedding** | No | Yes |\n",
         "\n",
         "## **Conclusion**\n",
-        "- The **cGAN UNet** is optimized for generative tasks where the goal is to generate images conditioned on inputs.\n",
-        "- The **Diffusion UNet** is optimized for noise removal, requiring time embeddings and ResNet-style feature extraction."
+        "- The **UNet model** is optimized for generative tasks, producing images conditioned on input images.\n",
+        "- The **Diffusion UNet2DModel** is optimized for noise removal, requiring time embeddings and ResNet-style feature extraction."
       ]
     },
     {
@@ -5239,13 +5239,13 @@
       "source": [
         "### Comparison of Noise Prediction Models for Image Denoising\n",
         "\n",
-        "This section compares the **UNet Diffusion model** and the **UNet cGAN model** in terms of their performance for image denoising. Both models leverage the UNet architecture, but they differ significantly in their approach.\n",
+        "This section compares the **Diffusion UNet2DModel** and the **UNet model** in terms of their image-denoising performance. Both models leverage the UNet architecture, but they differ significantly in their approach.\n",
         "\n",
-        "- **UNet Diffusion Model**: This model operates iteratively, gradually denoising the image over multiple steps. While it provides high-quality results, it is computationally expensive due to the repeated noise addition and removal process, making it slower for real-time applications.\n",
+        "- **Diffusion UNet2DModel**: This model operates iteratively, gradually denoising the image over multiple steps. While it provides high-quality results, it is computationally expensive due to the repeated noise addition and removal process, making it slower for real-time applications.\n",
         "  \n",
-        "- **UNet cGAN Model**: In contrast, the **UNet cGAN model** operates in a single step, using adversarial training to generate denoised images. This allows for faster inference, making it more suitable for real-time applications. However, the **UNet cGAN model** struggles to predict the noise effectively, which leads to incomplete denoising. As a result, the denoised images are often not fully cleaned and may still contain visible noise, resulting in unrecognizable content.\n",
+        "- **UNet Model**: In contrast, the **UNet model** operates in a single step, using adversarial training to produce denoised images. This allows much faster inference, making it better suited to real-time applications. However, it struggles to predict the noise accurately, so denoising is incomplete: the outputs often retain visible noise, and their content can be unrecognizable.\n",
         "\n",
-        "**Conclusion**: The **UNet Diffusion model** excels in denoising quality due to its iterative process, which allows for more accurate and efficient noise removal. However, it is computationally expensive and less suited for real-time applications. On the other hand, the **UNet cGAN model** is more efficient in terms of speed, making it suitable for time-sensitive tasks, but it fails to effectively predict and remove noise. This leads to suboptimal denoising, where the images are not adequately cleaned, and the content remains partially distorted. This explains the difference in results: the **UNet Diffusion model** produces cleaner, artifact-free images, while the **UNet cGAN model** struggles with noise removal, leaving visible artifacts in the output."
+        "**Conclusion**: The **Diffusion UNet2DModel** excels in denoising quality because its iterative process removes noise more accurately, but it is computationally expensive and less suited to real-time applications. The **UNet model** is much faster, making it suitable for time-sensitive tasks, but it fails to predict and remove the noise effectively, so its outputs remain partially distorted. This explains the difference in results: the **Diffusion UNet2DModel** produces clean, artifact-free images, while the **UNet model** leaves visible artifacts in the output."
       ]
     },
     {
-- 
GitLab
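
The patched cell describes the UNet's building blocks: strided Conv2D + BatchNorm + LeakyReLU for downsampling, ConvTranspose2D + BatchNorm + ReLU for upsampling, and skip connections that concatenate encoder features into the decoder. A minimal sketch of that block pattern in PyTorch, assuming kernel size 4 with stride 2 and the illustrative class names `Down`/`Up` (none of which appear in the patch itself):

```python
import torch
import torch.nn as nn

class Down(nn.Module):
    """Downsampling block: strided Conv2d + BatchNorm + LeakyReLU (halves H and W)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(0.2),
        )

    def forward(self, x):
        return self.block(x)

class Up(nn.Module):
    """Upsampling block: ConvTranspose2d + BatchNorm + ReLU (doubles H and W),
    then concatenates the matching encoder feature map (skip connection)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(),
        )

    def forward(self, x, skip):
        # Skip connection: concatenate along the channel dimension
        return torch.cat([self.block(x), skip], dim=1)

x = torch.randn(1, 3, 64, 64)
d1 = Down(3, 64)(x)        # (1, 64, 32, 32)
d2 = Down(64, 128)(d1)     # (1, 128, 16, 16)
u1 = Up(128, 64)(d2, d1)   # (1, 128, 32, 32) after concat with d1
```

The channel doubling after each `Up` (decoder output channels plus skip channels) is why the decoder convolutions in such UNets take twice the channels their encoder counterparts produce.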