Commit 90e24bc1 authored by Bessac Sophie

Update readme

parent da8b003d
@@ -15,21 +15,7 @@ For the entire datasets (5 train batches and 1 test batch), we have the following
Each batch is unpickled. All batches are concatenated into a matrix `data` and a list `labels`, which are then shuffled and split to create the training and test datasets.
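A minimal sketch of that pipeline, assuming the standard CIFAR-10 batch files; the function names (`unpickle`, `read_cifar`, `split_dataset`) and the split ratio are illustrative assumptions, not necessarily the repository's:

```python
import pickle
import numpy as np

def unpickle(file):
    # CIFAR-10 batches are pickled dicts keyed by b"data" and b"labels"
    with open(file, "rb") as f:
        return pickle.load(f, encoding="bytes")

def read_cifar(directory):
    # Concatenate the 5 train batches and the test batch
    names = [f"data_batch_{i}" for i in range(1, 6)] + ["test_batch"]
    batches = [unpickle(f"{directory}/{name}") for name in names]
    data = np.concatenate([b[b"data"] for b in batches]).astype(np.float32)
    labels = np.concatenate([np.asarray(b[b"labels"]) for b in batches])
    return data, labels

def split_dataset(data, labels, split=0.9, seed=0):
    # Shuffle, then cut into train and test sets
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    cut = int(split * len(data))
    return data[idx[:cut]], labels[idx[:cut]], data[idx[cut:]], labels[idx[cut:]]
```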
### k-nearest neighbours
We develop a k-nearest-neighbours (kNN) model to predict the image labels.
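A compact sketch of such a classifier using Euclidean distances and a majority vote, assuming integer class labels; `knn_predict` and `evaluate_knn` are hypothetical names chosen for illustration:

```python
import numpy as np

def knn_predict(data_train, labels_train, data_test, k):
    # Pairwise squared Euclidean distances: ||a||^2 - 2 a.b + ||b||^2
    dists = (
        np.sum(data_test**2, axis=1, keepdims=True)
        - 2 * data_test @ data_train.T
        + np.sum(data_train**2, axis=1)
    )
    # Majority vote among the labels of the k nearest training points
    nearest = np.argsort(dists, axis=1)[:, :k]
    votes = labels_train[nearest]
    return np.array([np.bincount(v).argmax() for v in votes])

def evaluate_knn(data_train, labels_train, data_test, labels_test, k):
    preds = knn_predict(data_train, labels_train, data_test, k)
    return np.mean(preds == labels_test)  # classification accuracy
```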
### Features
### Background
## Installation
## Requirements
## Usage
Use examples liberally, and show the expected output if you can. It's helpful to have inline the smallest example of usage that you can demonstrate, while providing links to more sophisticated examples if they are too long to reasonably include in the README.
## Author
Sophie BESSAC
## License
For open source projects, say how it is licensed.
### mlp
We use the binary cross-entropy loss to train our model.
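With the softmax outputs `a2` computed in the code below and integer class labels, the loss takes this shape; the one-hot encoding shown here is an assumption about the implementation, not taken from the commit:

```python
import numpy as np

def cross_entropy_loss(a2, labels_train):
    # One-hot encode the integer labels
    n, c = a2.shape
    one_hot = np.zeros((n, c))
    one_hot[np.arange(n), labels_train] = 1
    # Mean cross-entropy between softmax outputs and targets
    return -np.mean(np.sum(one_hot * np.log(a2 + 1e-12), axis=1))
```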
@@ -142,6 +142,7 @@ def learn_once_cross_entropy(w1, b1, w2, b2, data, labels_train, learning_rate):
# Forward pass
a0 = data # the data are the input of the first layer
z1 = np.matmul(a0, w1) + b1 # input of the hidden layer
z1 = np.clip(z1, -1000, 1000) # clip extreme pre-activations to stabilise the sigmoid
a1 = 1 / (1 + np.exp(-z1)) # output of the hidden layer (sigmoid activation function)
z2 = np.matmul(a1, w2) + b2 # input of the output layer
a2 = np.exp(z2) / np.sum(np.exp(z2), axis=1, keepdims=True) # output of the output layer (softmax activation function)
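One caveat: the clip stabilises the hidden layer, but `np.exp(z2)` in the softmax can still overflow for large logits. A common fix, sketched here rather than taken from the commit, is to subtract the row-wise maximum before exponentiating, which leaves the softmax unchanged:

```python
import numpy as np

def stable_softmax(z2):
    # Softmax is shift-invariant, so exp(z - max(z)) avoids overflow
    shifted = z2 - np.max(z2, axis=1, keepdims=True)
    e = np.exp(shifted)
    return e / np.sum(e, axis=1, keepdims=True)
```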
@@ -230,6 +231,7 @@ def train_mlp(w1, b1, w2, b2, data_train, labels_train, learning_rate, num_epoch
# Forward pass
a0 = data_train # the data are the input of the first layer
z1 = np.matmul(a0, w1) + b1 # input of the hidden layer
z1 = np.clip(z1, -1000, 1000) # clip extreme pre-activations to stabilise the sigmoid
a1 = 1 / (1 + np.exp(-z1)) # output of the hidden layer (sigmoid activation function)
z2 = np.matmul(a1, w2) + b2 # input of the output layer
a2 = np.exp(z2) / np.sum(np.exp(z2), axis=1, keepdims=True) # output of the output layer (softmax activation function)
@@ -267,6 +269,7 @@ def test_mlp(w1, b1, w2, b2, data_test, labels_test):
# Forward pass
a0 = data_test # the data are the input of the first layer
z1 = np.matmul(a0, w1) + b1 # input of the hidden layer
z1 = np.clip(z1, -1000, 1000) # clip extreme pre-activations to stabilise the sigmoid
a1 = 1 / (1 + np.exp(-z1)) # output of the hidden layer (sigmoid activation function)
z2 = np.matmul(a1, w2) + b2 # input of the output layer
a2 = np.exp(z2) / np.sum(np.exp(z2), axis=1, keepdims=True) # output of the output layer (softmax, consistent with the training forward pass)
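As an alternative to clipping `z1` in all three forward passes, a numerically stable sigmoid such as `scipy.special.expit` saturates cleanly at extreme inputs; this is a suggestion, not what the commit does:

```python
import numpy as np
from scipy.special import expit

z1 = np.array([-2000.0, 0.0, 2000.0])
print(expit(z1))  # [0.  0.5 1. ] -- no overflow warnings, no clipping needed
```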