From 36065efdb84c423243d348d75b03086a4875dfc6 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Quentin=20GALLOU=C3=89DEC?= <gallouedec.quentin@gmail.com>
Date: Thu, 27 Oct 2022 17:05:33 +0200
Subject: [PATCH] Format

---
 README.md | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 69ca442..27a0e3d 100644
--- a/README.md
+++ b/README.md
@@ -171,13 +171,15 @@ one_hot(labels=[1 2 0]) = [[0 1 0]
 We also need the last activation layer of the network to be a softmax layer.
 
 11. Write the function `one_hot` taking an (n)-D array as a parameter and returning the corresponding (n+1)-D one-hot matrix (a sketch is given after this list).
-12.   Write the function `learn_once_cross_entropy` taking as parameters:
+12. Write the function `learn_once_cross_entropy` taking as parameters:
       - `w1`, `b1`, `w2` and `b2` the weights and biases of the network,
       - `data` a matrix of shape (`batch_size` x `d_in`),
       - `labels_train` a vector of size `batch_size`, and
       - `learning_rate` the learning rate,
 
-    that perform one gradient descent step using a binary cross-entropy loss. We admit that $`\frac{\partial C}{\partial Z^{(2)}} = A^{(2)} - Y`$, where $`Y`$ is a one-hot vector encoding the label. The function must return:
+    that performs one gradient descent step using a binary cross-entropy loss (see the sketch after this list).
+    We take as given that $`\frac{\partial C}{\partial Z^{(2)}} = A^{(2)} - Y`$, where $`Y`$ is a one-hot vector encoding the label.
+    The function must return:
       - `w1`, `b1`, `w2` and `b2` the updated weights and biases of the network,
      - `loss` the loss, for monitoring purposes.
 13. Write the function `train_mlp` taking as parameters:
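
A minimal NumPy sketch of what `one_hot` could look like (the name and behavior follow the statement above; the implementation is one possible choice and assumes integer class labels starting at 0):

```python
import numpy as np

def one_hot(labels):
    # Assumption: `labels` is an (n)-D array of integer class indices in [0, num_classes - 1].
    labels = np.asarray(labels)
    num_classes = labels.max() + 1
    # Indexing the identity matrix by the labels yields the (n+1)-D one-hot matrix.
    return np.eye(num_classes)[labels]
```

For example, `one_hot(np.array([1, 2, 0]))` returns `[[0, 1, 0], [0, 0, 1], [1, 0, 0]]`, matching the example shown in the hunk context above.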
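
And a hedged sketch of `learn_once_cross_entropy`, assuming a sigmoid hidden layer, a softmax output layer, the `one_hot` helper above, and the argument names from the statement (a possible implementation, not the course's reference one):

```python
import numpy as np

def learn_once_cross_entropy(w1, b1, w2, b2, data, labels_train, learning_rate):
    n = data.shape[0]
    y = one_hot(labels_train)  # (batch_size x d_out) one-hot targets

    # Forward pass: sigmoid hidden layer, softmax output layer.
    z1 = data @ w1 + b1
    a1 = 1.0 / (1.0 + np.exp(-z1))
    z2 = a1 @ w2 + b2
    e = np.exp(z2 - z2.max(axis=1, keepdims=True))  # shift for numerical stability
    a2 = e / e.sum(axis=1, keepdims=True)

    # Cross-entropy loss, averaged over the batch (for monitoring).
    loss = -np.mean(np.sum(y * np.log(a2 + 1e-12), axis=1))

    # Backward pass, using the admitted identity dC/dZ2 = A2 - Y
    # (divided by n because the loss is averaged over the batch).
    dz2 = (a2 - y) / n
    dw2 = a1.T @ dz2
    db2 = dz2.sum(axis=0)
    dz1 = (dz2 @ w2.T) * a1 * (1.0 - a1)  # sigmoid derivative
    dw1 = data.T @ dz1
    db1 = dz1.sum(axis=0)

    # One gradient descent step.
    w1 = w1 - learning_rate * dw1
    b1 = b1 - learning_rate * db1
    w2 = w2 - learning_rate * dw2
    b2 = b2 - learning_rate * db2

    return w1, b1, w2, b2, loss
```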
-- 
GitLab