diff --git a/TD2_Deep_Learning.ipynb b/TD2_Deep_Learning.ipynb
index afff4d1c59351cc4335b705fa65916f1e0f98f8b..13294ff64f3b0f558293f18c84537757e1f5b3ee 100644
--- a/TD2_Deep_Learning.ipynb
+++ b/TD2_Deep_Learning.ipynb
@@ -692,6 +692,14 @@
         ")"
       ]
     },
+    {
+      "cell_type": "markdown",
+      "id": "df2b0014",
+      "metadata": {},
+      "source": [
+        "The accuracy is a lot better that the one of the neural network implemented in TD1 that was berely above 20%."
+      ]
+    },
     {
       "cell_type": "markdown",
       "id": "944991a2",
@@ -1062,6 +1070,41 @@
         ")"
       ]
     },
+    {
+      "cell_type": "markdown",
+      "id": "02200b5e",
+      "metadata": {},
+      "source": [
+        "The new model has a accuracy of 73% which is better that the previous model. It may be because of the dropout that allows the model to be more adapatable to new data."
+      ]
+    },
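+    {
+      "cell_type": "markdown",
+      "id": "3f7c21aa",
+      "metadata": {},
+      "source": [
+        "A minimal sketch of how a dropout layer is typically placed in a classifier head (the layer sizes below are illustrative, not the exact model used above): `nn.Dropout` randomly zeroes activations during training, which prevents the network from relying on any single neuron and usually improves generalization."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "id": "9d4e0b5c",
+      "metadata": {},
+      "outputs": [],
+      "source": [
+        "import torch.nn as nn\n",
+        "\n",
+        "# Illustrative classifier head with dropout between the fully connected layers.\n",
+        "# Dropout is only active in train() mode and is disabled by model.eval().\n",
+        "classifier = nn.Sequential(\n",
+        "    nn.Linear(512, 256),\n",
+        "    nn.ReLU(),\n",
+        "    nn.Dropout(p=0.5),\n",
+        "    nn.Linear(256, 10)\n",
+        ")"
+      ]
+    },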
     {
       "cell_type": "markdown",
       "id": "bc381cf4",
@@ -1297,7 +1340,7 @@
       "id": "c37a1007",
       "metadata": {},
       "source": [
-        "The compararison between the test accuracy of each class and the overall test accurcay shows that there is not a significant impact of the use of a quantized model on the prediction. The quantized model is \"only\" **0.07%** (nb of correct prediction of the original model-nb of correct prediction of the quantized model/nb of sample) less accurate than the original model but it is 2330.946/659.806 = **3,5** smaller."
+        "The compararison between the test accuracy of each class and the overall test accurcay shows that there is not a significant impact of the use of a quantized model on the prediction. The quantized model is only **0.07%** (nb of correct prediction of the original model-nb of correct prediction of the quantized model/nb of sample) less accurate than the original model but it is 2330.946/659.806 = **3,5** smaller !"
       ]
     },
     {
@@ -1770,7 +1813,7 @@
       "id": "f435974d",
       "metadata": {},
       "source": [
-        "The quantized also predict the correct labels and is much faster than the original VGG model. It is also 3,03 (553439.178/182540.454) smaller.\n",
+        "The quantized also predict the correct labels and is much faster than the original VGG model. It is also **3,03** (553439.178/182540.454) smaller.\n",
         "\n",
         "Time of computation:\n",
         "- Original VGG: 6 min 21s\n",