Commit 27903bcb authored by Benyahia Mohammed Oussama

Edit README.md
- [DCGAN Tutorial](https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html)
- [MNIST Dataset](https://pytorch.org/vision/stable/generated/torchvision.datasets.MNIST.html#torchvision.datasets.MNIST)
---
## Part 2: Conditional GAN (cGAN) with U-Net
### **Generator**
![architecture Unet](images/unet_architecture.png)
### **Question:**
Knowing that the input and output images have a shape of 256×256 with 3 channels, what will be the dimension of the feature map "x8"?
**Answer:** The dimension of the feature map x8 is **[numBatch, 512, 32, 32]**.
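The spatial size after each downsampling step can be checked with the standard convolution output-size formula. A minimal sketch (kernel 4, stride 2, padding 1 are assumed here, the usual pix2pix encoder choices, not necessarily this project's exact settings):

```python
def conv_out(n, kernel=4, stride=2, pad=1):
    # Standard convolution output-size formula: floor((n + 2p - k) / s) + 1
    return (n + 2 * pad - kernel) // stride + 1

size = 256
for _ in range(3):  # three stride-2 downsamplings: 256 -> 128 -> 64 -> 32
    size = conv_out(size)
print(size)  # 32
```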
### **Question:**
Why are skip connections important in the U-Net architecture?
For this project, we use a **70×70 PatchGAN**.
![patchGAN](images/patchGAN.png)
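The "70×70" refers to the discriminator's receptive field: each output value judges a 70×70 patch of the input. This can be checked by walking the layers from output back to input (a sketch; the strides 2, 2, 2, 1, 1 for conv1 through out are the usual pix2pix choices and are assumed here):

```python
# (kernel, stride) for conv1..conv4 and the output layer
layers = [(4, 2), (4, 2), (4, 2), (4, 1), (4, 1)]

# Walk backwards: each layer expands the receptive field by (rf - 1) * stride + kernel
rf = 1
for kernel, stride in reversed(layers):
    rf = (rf - 1) * stride + kernel
print(rf)  # 70
```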
### **Question:**
How many learnable parameters does this neural network have?
1. **conv1:**
- Input channels: 6
- Output channels: 64
- Kernel size: 4×4
- Weights: 4 × 4 × 6 × 64 = **6144**
- Biases: **64**
- Parameters in conv1: **6144 + 64 = 6208**
2. **conv2:**
- Weights: 4 × 4 × 64 × 128 = **131072**
- Biases: **128**
- BatchNorm: (scale + shift) for 128 channels: 2 × 128 = **256**
- Parameters in conv2: **131072 + 128 + 256 = 131456**
3. **conv3:**
- Weights: 4 × 4 × 128 × 256 = **524288**
- Biases: **256**
- BatchNorm: (scale + shift) for 256 channels: 2 × 256 = **512**
- Parameters in conv3: **524288 + 256 + 512 = 525056**
4. **conv4:**
- Weights: 4 × 4 × 256 × 512 = **2097152**
- Biases: **512**
- BatchNorm: (scale + shift) for 512 channels: 2 × 512 = **1024**
- Parameters in conv4: **2097152 + 512 + 1024 = 2098688**
5. **out:**
- Weights: 4 × 4 × 512 × 1 = **8192**
- Biases: **1**
- Parameters in out: **8192 + 1 = 8193**
**Total Learnable Parameters:**

**6,208 + 131,456 + 525,056 + 2,098,688 + 8,193 = 2,769,601**
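The totals above can be verified in a few lines of plain Python. This sketch mirrors the per-layer breakdown (channel sizes taken from the list above, kernel size 4 throughout):

```python
def layer_params(c_in, c_out, k=4, batchnorm=False):
    # Conv weights + biases; BatchNorm adds one scale and one shift per channel
    params = k * k * c_in * c_out + c_out
    if batchnorm:
        params += 2 * c_out
    return params

counts = [
    layer_params(6, 64),                     # conv1
    layer_params(64, 128, batchnorm=True),   # conv2
    layer_params(128, 256, batchnorm=True),  # conv3
    layer_params(256, 512, batchnorm=True),  # conv4
    layer_params(512, 1),                    # out
]
print(counts)       # [6208, 131456, 525056, 2098688, 8193]
print(sum(counts))  # 2769601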
---
### **Results Comparison: 100 vs. 200 Epochs**
- **Overfitting Issue:** Generalization is poor beyond 100 epochs.
- **Limited Dataset Size (378 Images):** Restricts model’s diversity and quality.
#### **Example image of training set at 100 and 200 epochs:**
![Example image for training set at 100 and 200 epochs](images/facades_trainingset_100_200_.png)
#### **Example images of evaluation set at 100 and 200 epochs:**
![Example images of evaluation set at 100 and 200 epochs](images/facades_valset_100_200_.png)
## Part 3: Diffusion Models