diff --git a/README.md b/README.md
index 0df47aac02c5acb42f1516fcee4202807c5408c9..cc072b60493a03529fec1faa3e3e41af552bba40 100644
--- a/README.md
+++ b/README.md
@@ -5,9 +5,10 @@ This project aims to implement an image classification program using two success
 ## CIFAR-10 Dataset
 
 The CIFAR-10 dataset is a commonly used database in computer vision for image classification. It consists of 60,000 color images of 32x32 pixels, distributed across 10 distinct classes, representing different objects or animals.
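In code terms, each image is a 32x32x3 array that classifiers such as KNN or a plain MLP consume as one flat feature vector (a small numpy sketch, not part of the project code):

```python
import numpy as np

# A CIFAR-10 image is 32x32 pixels with 3 color channels; flattening it
# yields the 3072-dimensional feature vector the classifiers operate on.
image = np.zeros((32, 32, 3), dtype=np.uint8)
flat = image.reshape(-1)
assert flat.shape == (3072,)  # 32 * 32 * 3 features per image
```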
-
-![Semantic description of image](cifar.PNG)
 ### CIFAR-10 Dataset Classes:
+<img align="right" width="240" height="220" src="cifar.PNG">
+
+
 
 1. airplane
 2. automobile
@@ -48,26 +49,20 @@ A Python file named knn.py was created, including the following functions:
 ### Performance Study
 The effectiveness of the KNN algorithm was evaluated based on the number of neighbors (k) for `split_factor=0.9`.
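A self-contained sketch of what such a k-sweep computes (toy two-class data and plain numpy, not the project's `knn.py`):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the real data: two well-separated Gaussian blobs of
# 8-dimensional "images" (real CIFAR-10 images flatten to 3072 features).
X_train = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(5, 1, (50, 8))])
y_train = np.array([0] * 50 + [1] * 50)
X_test = np.vstack([rng.normal(0, 1, (10, 8)), rng.normal(5, 1, (10, 8))])
y_test = np.array([0] * 10 + [1] * 10)

def knn_accuracy(k):
    # Pairwise Euclidean distances between test and train points.
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    # Majority vote among the k nearest training labels.
    nearest = np.argsort(d, axis=1)[:, :k]
    votes = y_train[nearest]
    pred = (votes.mean(axis=1) > 0.5).astype(int)
    return (pred == y_test).mean()

# Sweep k from 1 to 20, as in the performance study.
accuracies = {k: knn_accuracy(k) for k in range(1, 21)}
```

On the real, high-dimensional CIFAR-10 vectors the same sweep produces the much lower accuracies discussed in the analysis below.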
 
-### Running the KNN Code
-
-1. Run the script to split data
-```bash
-import read_cifar as rc
-X, y = rc.read_cifar('data') 
-# Split the Dataset
-X_train, y_train, X_test, y_test = rc.split_dataset(X, y, split=0.9) 
-```
-2. Function to run knn
-```bash
-import knn
-knn.plot_KNN(X_train, y_train, X_test, y_test) 
-```
-3. Running the ANN Code
-
-```bash
-import mlp
-mlp.plot_ANN(X_train,y_train,X_test,y_test)
-```
+### Running the Code
+To execute the models, run the following commands in a terminal:
+```bash
+# Ensure requirements are installed before running KNN or MLP
+pip install -r requirements.txt
+```
+
+1. KNN model:
+```bash
+python knn.py
+```
+
+2. MLP model:
+```bash
+python mlp.py
+```
+
 ## Results :
 ### Generating the Graph
 1. Results using KNN:
@@ -84,18 +79,18 @@ A graph showing the accuracy variation with the number of epochs was generated u
 ![Semantic description of image](Results/mlp.png)
 
 ## Analysis of KNN Results
-Unfortunately, the performance of the KNN algorithm was disappointing, with accuracy ranging between 0.33 and 0.34 for different values of k (up to k=20). Several reasons may explain these mixed results:
+Unfortunately, the performance of the KNN algorithm was disappointing, with accuracy ranging between 33% and 36% for different values of k (up to k=20). Several reasons may explain these mixed results:
 
-1. **High Dimensionality of Data**: CIFAR-10 dataset images are 32x32 pixels, resulting in high-dimensional data. This can make Euclidean distance less discriminative, affecting KNN's performance.
+1. **High Dimensionality of Data**: CIFAR-10 dataset images are 32x32 pixels, resulting in high-dimensional data. This can make Euclidean distance less discriminative, affecting KNN's performance.
 
-2. **Scale Sensitivity**: KNN is sensitive to different feature scales. Pixels in an image can have different values, and KNN may be influenced by these disparities.
+2. **Scale Sensitivity**: KNN is sensitive to different feature scales. Pixels in an image can have different values, and KNN may be influenced by these disparities.
 
-3. **Choice of k**: The choice of the number of neighbors (k) can significantly influence results. An inappropriate k value can lead to underestimation or overestimation of the model's complexity.
+3. **Choice of k**: The choice of the number of neighbors (k) can significantly influence results. An inappropriate k value can lead to underestimation or overestimation of the model's complexity.
 
-4. **Lack of Feature Abstraction**: KNN directly uses pixels as features. More advanced feature extraction techniques could improve performance
+4. **Lack of Feature Abstraction**: KNN directly uses pixels as features. More advanced feature extraction techniques could improve performance.
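As a toy illustration of point 2, the nearest neighbour of a point can change entirely once features are rescaled (a numpy sketch, not project code):

```python
import numpy as np

# Three toy samples with two features on very different scales.
a = np.array([10.0, 200.0])
b = np.array([55.0, 205.0])
c = np.array([12.0, 100.0])

# On raw values the second feature's large range dominates the distance,
# so b (close in feature 2) looks nearer to a than c does.
raw_ab = np.linalg.norm(a - b)   # ~ 45.3
raw_ac = np.linalg.norm(a - c)   # ~ 100.0

# Min-max scale each feature to [0, 1] so both features count equally.
X = np.stack([a, b, c])
Xs = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
scaled_ab = np.linalg.norm(Xs[0] - Xs[1])
scaled_ac = np.linalg.norm(Xs[0] - Xs[2])
# After scaling, c becomes the nearer neighbour of a.
```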
 
  ## Analysis of ANN Results
-The deep learning algorithm (ANN) used for our dataset has relatively low performance, with test set accuracy plateauing around 0.098 over 100 epochs.
+The artificial neural network (ANN) used on our dataset performs relatively poorly, with test-set accuracy plateauing around 15% over 100 epochs.
 
 These results suggest that adjustments to certain aspects of the model, such as complexity, hyperparameters, or weight initialization, may be necessary to improve its ability to generalize to new data. Further exploration of these aspects could be beneficial in optimizing model performance.
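One of the adjustments mentioned above, the weight-initialization scale, can be illustrated with a small numpy sketch (toy tanh layers, names hypothetical, not the project's `mlp.py`):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0, 1, (256, 100))  # a batch of 256 inputs, 100 features

def final_activation_std(scale, layers=10, width=100):
    # Push the batch through `layers` tanh layers whose weights are drawn
    # with the given standard deviation, and report the spread of the
    # final activations.
    h = x
    for _ in range(layers):
        W = rng.normal(0, scale, (h.shape[1], width))
        h = np.tanh(h @ W)
    return h.std()

# With std-1 weights the pre-activations explode and tanh saturates at +/-1;
# Xavier-style scaling (1/sqrt(fan_in)) keeps activations in a useful range.
naive = final_activation_std(scale=1.0)
xavier = final_activation_std(scale=1.0 / np.sqrt(100))
```

Saturated activations produce near-zero gradients, which is one plausible reason a network can plateau at low accuracy instead of learning.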
 
@@ -106,4 +101,4 @@ These results suggest that adjustments to certain aspects of the model, such as
 Sara EL ALIMI
 
 ## Licence
-Ce projet est sous licence MIT.
+This project is licensed under the MIT License.
\ No newline at end of file