Using all these equations, I have coded some methods in the `mlp.py` file to train …
Thus, for `split_factor=0.9`, `d_h=64`, `learning_rate=0.1` and `num_epoch=100`, we obtain the following curves:
![mlp_split_0.1](/results/mlp_1.png)
>Here we observe that the accuracy increases epoch after epoch, though slowly. At the end, we reach about 27% accuracy on both the train and test sets, which means the algorithm is neither underfitted nor overfitted. Both the loss and the train accuracy seem quite stable at the end, which implies that the algorithm has finished learning.
Nonetheless, the accuracy is still very low and the algorithm can easily diverge because of exponential values causing overflows. To counter this phenomenon, I chose to initialize the weights as small as possible while still random. I have also introduced some `np.clip` calls and used an epsilon to avoid overflows and division by zero, respectively.
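As a rough illustration of these tricks, here is a minimal sketch, assuming a simple two-layer MLP with hidden size `d_h`; the names and constants are illustrative, not the exact ones from `mlp.py`:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h, d_out = 3072, 64, 10   # illustrative layer sizes
eps = 1e-12                        # small constant to avoid log(0) and division by zero

# Tiny random initialization keeps the pre-activations small at the start,
# which limits the risk of exploding exponentials in the softmax.
w1 = 1e-3 * rng.standard_normal((d_in, d_h))
w2 = 1e-3 * rng.standard_normal((d_h, d_out))

def softmax(z):
    # Clip the logits before exponentiating so np.exp cannot overflow.
    z = np.clip(z, -500, 500)
    z = z - z.max(axis=1, keepdims=True)          # classic max-subtraction trick
    e = np.exp(z)
    return e / (e.sum(axis=1, keepdims=True) + eps)

def cross_entropy(probs, targets_one_hot):
    # eps inside the log prevents -inf when a predicted probability is 0.
    return -np.mean(np.sum(targets_one_hot * np.log(probs + eps), axis=1))
```

Clipping the logits and subtracting the row-wise maximum both bound the argument of `np.exp`, which is what prevents the overflows mentioned above.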
![lr_comparaison](/results/learning_rate_comparaison.png)
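The comparison above could be reproduced with a sweep along these lines; `train_mlp` is a hypothetical helper standing in for the actual training method of `mlp.py`, and the learning-rate values are placeholders:

```python
import matplotlib.pyplot as plt

# Hypothetical sweep: train one MLP per learning rate and plot the
# test-accuracy curves on the same axes for comparison.
learning_rates = [0.01, 0.1, 1.0]  # placeholder values
for lr in learning_rates:
    # train_mlp is assumed to return the per-epoch test accuracies.
    test_acc_history = train_mlp(split_factor=0.9, d_h=64,
                                 learning_rate=lr, num_epoch=100)
    plt.plot(test_acc_history, label=f"learning_rate={lr}")
plt.xlabel("epoch")
plt.ylabel("test accuracy")
plt.legend()
plt.savefig("results/learning_rate_comparaison.png")
```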