diff --git a/README.md b/README.md
index 675138947a63b194b83c2d1dc0b22e2252c9fcc9..4e7907417b54b5764b862482dbc10097e16b0e22 100644
--- a/README.md
+++ b/README.md
@@ -2,20 +2,24 @@
 
 ## REINFORCE
 
-The file reinforce_cartpole.py is composed of an agent (Neural Network) and the training of a model for the CartPole problem.
+The file [reinforce_cartpole.py](https://gitlab.ec-lyon.fr/mghelfi/reinforcement-learning/-/blob/main/reinforce_cartpole.py) contains the agent (a neural network policy) and the REINFORCE training loop for the CartPole problem.
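+
+As a rough sketch of what this involves (the actual layer sizes, optimizer, and update in the file may differ), the agent is a small policy network and training maximizes the log-probability of the taken actions weighted by the episode returns:
+
+```python
+import torch
+import torch.nn as nn
+
+# Hypothetical policy network: 4 CartPole observations -> 2 action probabilities.
+# The real architecture in reinforce_cartpole.py may differ.
+class Policy(nn.Module):
+    def __init__(self):
+        super().__init__()
+        self.net = nn.Sequential(
+            nn.Linear(4, 128), nn.ReLU(),
+            nn.Linear(128, 2), nn.Softmax(dim=-1),
+        )
+
+    def forward(self, obs):
+        return self.net(obs)
+
+# REINFORCE loss for one episode: -sum_t log pi(a_t | s_t) * G_t,
+# where G_t is the (discounted) return from step t.
+def reinforce_loss(log_probs, returns):
+    return -(torch.stack(log_probs) * torch.as_tensor(returns)).sum()
+```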
 
 
 The evolution of the total reward over the training episodes is plotted in image.png:
 
+<p align="center">
+  <img src="image.png" width="350" alt="Evolution of the total reward per episode">
+</p>
+
 
 ## Stable-Baselines3
 
-The file a2c_sb3_cartpole.py contains a model to solve the CartPole problem using an Advantage Actor-Critic (A2C) algorithm with the Stable-Baselines3 library.
+The file [a2c_sb3_cartpole.py](https://gitlab.ec-lyon.fr/mghelfi/reinforcement-learning/-/blob/main/a2c_sb3_cartpole.py) solves the CartPole problem with an Advantage Actor-Critic (A2C) agent from the Stable-Baselines3 library.
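+
+For reference, a minimal Stable-Baselines3 training sketch looks like this (the timestep budget and policy type here are assumptions, not necessarily what the script uses):
+
+```python
+from stable_baselines3 import A2C
+
+# Train an A2C agent on CartPole; SB3 builds the environment from its id.
+model = A2C("MlpPolicy", "CartPole-v1", verbose=1)
+model.learn(total_timesteps=100_000)  # assumed training budget
+model.save("a2c_cartpole")
+```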
 
 ## Hugging Face Hub
 
-I uploaded my model on huggingface :
-https://huggingface.co/manonghelfi/a2c_cartpole/tree/main
+I uploaded my model to the Hugging Face Hub [here](https://huggingface.co/manonghelfi/a2c_cartpole/tree/main).
+
 
 The model was pushed with the following Python commands:
 
@@ -32,7 +36,7 @@ push_to_hub(
 This requires authenticating first with `huggingface-cli login`.
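+
+To reuse the uploaded checkpoint, it can be downloaded back with `huggingface_sb3` (the filename below is an assumption about how the model was saved):
+
+```python
+from huggingface_sb3 import load_from_hub
+from stable_baselines3 import A2C
+
+# Download the checkpoint from the Hub and load it back into SB3.
+checkpoint = load_from_hub(
+    repo_id="manonghelfi/a2c_cartpole",
+    filename="a2c_cartpole.zip",  # assumed filename of the saved checkpoint
+)
+model = A2C.load(checkpoint)
+```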
 
 ## Weights & Biases
-The run of the model is here : https://wandb.ai/ghelfi/cartpole-training/runs/06exlpbm
+The training run is logged on Weights & Biases [here](https://wandb.ai/ghelfi/cartpole-training/runs/06exlpbm).
 
 The logging was done with the code below:
 ```