diff --git a/TD2 Deep Learning.ipynb b/TD2 Deep Learning.ipynb
index 5cc9e62627637e04b53d2a9253d17ef026355198..e598dc65c5d83157f8ef11f884ee6604d61c8e62 100644
--- a/TD2 Deep Learning.ipynb	
+++ b/TD2 Deep Learning.ipynb	
@@ -1592,9 +1592,9 @@
    "id": "b475c943",
    "metadata": {},
    "source": [
-    "<span style=\"color:green\"> This model with quantization also works for red wine and for dog</span>\n",
+    "<span style=\"color:red\"> The quantized model also performs well on both the red wine and dog images.</span>\n",
     "\n",
-    "<span style=\"color:green\"> Finally, we try this exercise with the pretrained model ***Inception V3*** :</span>"
+    "<span style=\"color:red\"> Finally, we repeat the exercise with the pretrained ***Inception V3*** model.</span>"
    ]
   },
   {
@@ -1691,7 +1691,7 @@
    "id": "f5207c68",
    "metadata": {},
    "source": [
-    "<span style=\"color:green\"> Inception V3 model also works fine !</span>"
+    "<span style=\"color:red\"> The Inception V3 model also performs well.</span>"
    ]
   },
   {
@@ -2145,7 +2145,7 @@
     "Modify the code and add an \"eval_model\" function to allow\n",
     "the evaluation of the model on a test set (different from the learning and validation sets used during the learning phase). Study the results obtained.\n",
     "\n",
-    "<span style=\"color:green\"> We can see that with using images from the dataset, we got a 100% of accuracy. Futhermore, we can try to evaluate the model with foreign image of ants and bees.</span>\n",
+    "<span style=\"color:red\"> Using images from the dataset, we achieve 100% accuracy. We can also evaluate the model on external images of ants and bees.</span>\n",
     "\n",
     "Now modify the code to replace the current classification layer with a set of two layers using a \"relu\" activation function for the middle layer, and the \"dropout\" mechanism for both layers. Renew the experiments and study the results obtained."
    ]
@@ -2469,7 +2469,7 @@
    "id": "163800c2",
    "metadata": {},
    "source": [
-    "<span style=\"color:green\"> We already have a perfect accuracy.</span>"
+    "<span style=\"color:red\"> The model already achieves perfect accuracy.</span>"
    ]
   },
   {
@@ -2808,7 +2808,7 @@
    "id": "efdf7c2c",
    "metadata": {},
    "source": [
-    "<span style=\"color:green\"> Here we can see the the quantization seems to be useless. in fact, it doesn't have a great impact on the size of the model. Moreover, we lose a lot of precision in the process. </span>\n"
+    "<span style=\"color:red\"> Here, quantization appears ineffective: it barely reduces the model's size, and it causes a notable loss of accuracy. </span>\n"
    ]
   },
   {