diff --git a/README.md b/README.md
index 3ccb7af5d586b931ea5d9def658b5e346f2dd9bb..a4f945f2940c99961cf9f56dd7bbc7f725348148 100644
--- a/README.md
+++ b/README.md
@@ -194,6 +194,17 @@ We will train the diffusion model on the MNIST dataset using the **diffusers** l
 | **Skip Connections** | Yes | Yes |
 | **Time Embedding** | No | Yes |
 
+## Results
+
+Here are the visual results from the U-Net models:
+
+**Diffusion U-Net2D**
+
+
+**Conditional GAN U-Net (cGAN)**
+
+
+
 ### Conclusion
 
 In this section, we have outlined the architecture and training process for a Diffusion model using a U-Net. This model is trained to perform image denoising, progressively refining noisy images into clean ones. We compared it with the U-Net used in cGANs, highlighting the key differences and how they are tailored to their respective tasks.