From ff85123939f98c471a4fce79590812f68013bae8 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Quentin=20GALLOU=C3=89DEC?= <gallouedec.quentin@gmail.com>
Date: Thu, 27 Oct 2022 16:38:21 +0200
Subject: [PATCH] Better format

---
 README.md | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 514c68e..69ca442 100644
--- a/README.md
+++ b/README.md
@@ -177,9 +177,7 @@ We also need that the last activation layer of the network to be a softmax layer
   - `labels_train` a vector of size `batch_size`, and
   - `learning_rate` the learning rate,
 
-  that perform one gradient descent step using a binary cross-entropy loss.
-  We admit that $`\frac{\partial C}{\partial Z^{(2)}} = A^{(2)} - Y`$, where $`Y`$ is a one-hot vector encoding the label.
-  The function must return:
+  that perform one gradient descent step using a binary cross-entropy loss. We admit that $`\frac{\partial C}{\partial Z^{(2)}} = A^{(2)} - Y`$, where $`Y`$ is a one-hot vector encoding the label. The function must return:
   - `w1`, `b1`, `w2` and `b2` the updated weights and biases of the network,
   - `loss` the loss, for monitoring purpose.
 13. Write the function `train_mlp` taking as parameters:
--
GitLab
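
For reference, the README text reflowed by this patch specifies a function that performs one gradient-descent step on a two-layer MLP, starting from the admitted gradient dC/dZ(2) = A(2) - Y. Below is a minimal NumPy sketch of such a step, assuming a sigmoid hidden layer and a softmax output layer; the function name `gradient_step`, the `one_hot` helper, and the layer shapes are illustrative assumptions, not taken from this patch. Note that with a softmax output and one-hot targets, A(2) - Y is the gradient of the standard (categorical) cross-entropy, which is what the sketch implements, even though the README calls the loss binary cross-entropy.

    import numpy as np

    def one_hot(labels, n_classes):
        # Hypothetical helper: encode integer labels as one-hot rows (Y).
        y = np.zeros((labels.shape[0], n_classes))
        y[np.arange(labels.shape[0]), labels] = 1.0
        return y

    def gradient_step(w1, b1, w2, b2, data, labels_train, learning_rate):
        # Forward pass (assumed architecture): sigmoid hidden layer, softmax output.
        z1 = data @ w1 + b1
        a1 = 1.0 / (1.0 + np.exp(-z1))                  # hidden activations A(1)
        z2 = a1 @ w2 + b2
        e = np.exp(z2 - z2.max(axis=1, keepdims=True))  # numerically stable softmax
        a2 = e / e.sum(axis=1, keepdims=True)           # predictions A(2)

        y = one_hot(labels_train, w2.shape[1])
        batch_size = data.shape[0]

        # Cross-entropy loss, returned for monitoring.
        loss = -np.mean(np.sum(y * np.log(a2 + 1e-12), axis=1))

        # Backward pass, starting from the admitted dC/dZ(2) = A(2) - Y.
        dz2 = (a2 - y) / batch_size
        dw2 = a1.T @ dz2
        db2 = dz2.sum(axis=0)
        dz1 = (dz2 @ w2.T) * a1 * (1.0 - a1)            # sigmoid derivative
        dw1 = data.T @ dz1
        db1 = dz1.sum(axis=0)

        # One gradient-descent step.
        w1 = w1 - learning_rate * dw1
        b1 = b1 - learning_rate * db1
        w2 = w2 - learning_rate * dw2
        b2 = b2 - learning_rate * db2
        return w1, b1, w2, b2, loss

A training loop such as the `train_mlp` function mentioned in item 13 would simply call this step repeatedly, collecting `loss` at each iteration for monitoring.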