diff --git a/TD3 Vision Transformer.ipynb b/TD3 Vision Transformer.ipynb index 8fd731a942eddf504d2bf71b6b6563297676b1eb..604b41b30cba813210627a238532e4de7e8c607e 100644 --- a/TD3 Vision Transformer.ipynb +++ b/TD3 Vision Transformer.ipynb @@ -1,586 +1,714 @@ { - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# TD3: Vision Transformer (ViT)\n", - "\n", - "In this TD, you must modify this notebook to complete the code (**# TO DO comments**) and complete the **proposed experiments**. To do this,\n", - "\n", - "1. Fork this repository\n", - "2. Clone your forked repository on your local computer\n", - "3. Add your code and answer the questions\n", - "4. Commit and push regularly\n", - "\n", - "**The last commit is due on Wednesday, January 8, 2025**. Later commits will not be taken into account.\n", - "\n", - "As the computation is heavy, particularly during training, we encourage you to use a GPU. If your laptob is not equiped, you may use one of these remote jupyter servers, where you can select the execution on GPU :\n", - "\n", - "1) [jupyter.mi90.ec-lyon.fr](https://jupyter.mi90.ec-lyon.fr/)\n", - "\n", - "This server is accessible within the campus network. If outside, you need to use a VPN. Before executing the notebook, select the kernel \"Python PyTorch\" to run it on GPU and have access to PyTorch module.\n", - "\n", - "2) [Google Colaboratory](https://colab.research.google.com/)\n", - "\n", - "Before executing the notebook, select the execution on GPU : \"Exécution\" Menu -> \"Modifier le type d'exécution\" and select \"T4 GPU\". " - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Goal of the TD\n", - "\n", - "Transformers have been introduced by [Vaswani et al. in 2017](https://arxiv.org/abs/1706.03762) in the context of NLP (Natural Language Processing), and particulary for Machine Translation.\n", - "\n", - "Its great success has led to its adaptation to various applications, including image classification. In this trend, [Dosovitskiy et al. in 2020](https://arxiv.org/abs/2010.11929) have proposed Vision Transformers (ViT) that we will study and implement from scratch in this TD.\n", - "\n", - "The principle is illustrated in the following picture from this paper.\n", - "\n", - "\n", - "\n", - "First, an input image is “cut” into sub-images equally sized.\n", - "\n", - "Each such sub-image goes through a linear embedding. From then, each sub-image becomes a one-dimensional vector.\n", - "\n", - "A positional embedding is then added to these vectors (tokens). The positional embedding allows the network to know where each sub-image is positioned originally in the image. Without this information, the network would not be able to know where each such image would be placed, leading to potentially wrong predictions.\n", - "\n", - "These tokens are then passed, together with a special classification token, to the transformer encoders blocks, were each is composed of : A Layer Normalization (LN), followed by a Multi-head Self Attention (MSA) and a residual connection. Then a second LN, a Multi-Layer Perceptron (MLP), and again a residual connection. 
These blocks are connected back-to-back.\n", - "\n", - "Finally, a classification MLP head is used for the final classification only on the special classification token, which by the end of this process has global information about the picture.\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Implementation of the ViT model" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "First, we import the required modules." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "wEmbaOA4Okuo", - "outputId": "2bf953f2-2a18-44f3-c537-db8c6d58d4ee" - }, - "outputs": [], - "source": [ - "# Import modules\n", - "import numpy as np\n", - "import torch\n", - "import torch.nn as nn\n", - "from torch.nn import CrossEntropyLoss\n", - "from torch.optim import Adam\n", - "from torch.utils.data import DataLoader\n", - "from torchvision.datasets.mnist import MNIST\n", - "from torchvision.transforms import ToTensor" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "For this first experiment, we will use the MNIST dataset that contains 28x28 binary pixels images of hand-written digits ([0–9])." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Load data\n", - "transform = ToTensor()\n", - "\n", - "train_set = MNIST(\n", - " root=\"datasets\", train=True, download=True, transform=transform\n", - ")\n", - "test_set = MNIST(\n", - " root=\"datasets\", train=False, download=True, transform=transform\n", - ")\n", - "\n", - "train_loader = DataLoader(train_set, shuffle=True, batch_size=128)\n", - "test_loader = DataLoader(test_set, shuffle=False, batch_size=128)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### \"Patchification\"\n", - "The transformer encoder was originally developed with sequence data in mind, such as English sentences. However, as an image is not a sequence, we need to “sequencify” an image. To do this, we break it into multiple sub-images and map each sub-image to a vector.\n", - "\n", - "We do so by simply reshaping our input, which has size (N, C, H, W), where N is the batch size, C the number of channels and (H,W) the image dimension. In the case of MNIST, dimensions are (N, 1, 28, 28). The target dimension is (N, #Patches, Patch dimensionality), where the dimensionality of a patch is adjusted accordingly.\n", - "\n", - "In this example, we break each (1, 28, 28) into 7x7 patches (hence, each of size 4x4). That is, we are going to obtain 7x7=49 sub-images out of a single image.\n", - "\n", - "Thus, we reshape input (N, 1, 28, 28) to (N, PxP, C x H/P x W/P) = (N, 49, 16)\n", - "\n", - "Notice that, while each patch is a picture of size 1x4x4, we flatten it to a 16-dimensional vector. Also, in this case, we only had a single color channel. If we had multiple color channels, those would also have been flattened into the vector." 
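The same (N, 1, 28, 28) -> (N, 49, 16) reshaping described above can also be done without Python loops. The sketch below is only for reference (the helper name `patchify_vectorized` is ours, not part of the TD) and can serve as a cross-check of the loop-based `patchify` defined next; it assumes square images whose side is divisible by `n_patches`.

```python
import torch

def patchify_vectorized(images: torch.Tensor, n_patches: int) -> torch.Tensor:
    # (N, C, H, W) -> (N, n_patches**2, C * (H // n_patches) * (W // n_patches))
    n, c, h, w = images.shape
    p = h // n_patches
    x = images.reshape(n, c, n_patches, p, n_patches, p)   # split H and W into blocks
    x = x.permute(0, 2, 4, 1, 3, 5)                        # (N, nP, nP, C, p, p), patch-major order
    return x.reshape(n, n_patches * n_patches, c * p * p)  # flatten each patch

# Example: torch.randn(128, 1, 28, 28) -> shape (128, 49, 16)
```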
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "fxhHKKDFOoHp" - }, - "outputs": [], - "source": [ - "def patchify(images, n_patches):\n", - " n, c, h, w = images.shape\n", - "\n", - " assert h == w, \"Patchify method is implemented for square images only\"\n", - "\n", - " patches = torch.zeros(n, n_patches**2, h * w * c // n_patches**2)\n", - " patch_size = h // n_patches\n", - "\n", - " for idx, image in enumerate(images):\n", - " for i in range(n_patches):\n", - " for j in range(n_patches):\n", - " patch = image[\n", - " :,\n", - " i * patch_size : (i + 1) * patch_size,\n", - " j * patch_size : (j + 1) * patch_size,\n", - " ]\n", - " patches[idx, i * n_patches + j] = patch.flatten()\n", - " return patches" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Linear embedding\n", - "\n", - "Now that we have our flattened patches, we can map each of them through a Linear mapping. While each patch was a 4x4=16 dimensional vector, the linear mapping can map to any arbitrary vector size. Thus, we will use for this a parameter `hidden_d` for \"hidden dimension\".\n", - "\n", - "In this example, we will use a hidden dimension of 8, but in principle, any number can be put here. We will thus be mapping each 16-dimensional patch to an 8-dimensional patch.\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Positional encoding\n", - "\n", - "Positional encoding allows the model to understand where each patch would be placed in the original image. While it is theoretically possible to learn such positional embeddings, previous work by [Vaswani et al. in 2017](https://arxiv.org/abs/1706.03762) suggests that we can just add sines and cosines waves.\n", - "\n", - "In particular, positional encoding adds high-frequency values to the first dimensions and lower-frequency values to the latter dimensions.\n", - "\n", - "In each sequence, for token i we add to its j-th coordinate the following value:\n", - "\n", - ".\n", - "\n", - "This positional embedding is a function of the number of elements in the sequence and the dimensionality of each element. Thus, it is always a 2-dimensional tensor or “rectangle”.\n", - "\n", - "Here is a simple function that, given the number of tokens and the dimensionality of each of them, outputs a matrix where each coordinate (i,j) is the value to be added to token i in dimension j.\n", - "\n", - "This positional encoding is added to our model after the linear mapping and the addition of the class token." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "bOaI_5SrO4vB" - }, - "outputs": [], - "source": [ - "def get_positional_embeddings(sequence_length, d):\n", - " result = torch.ones(sequence_length, d)\n", - " for i in range(sequence_length):\n", - " for j in range(d):\n", - " result[i][j] = (\n", - " np.sin(i / (10000 ** (j / d)))\n", - " if j % 2 == 0\n", - " else np.cos(i / (10000 ** ((j - 1) / d)))\n", - " )\n", - " return result" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Multi-Head Self-Attention\n", - "\n", - "The objective is now that, for a single image, each patch has to be updated based on some similarity measure with the other patches. 
We do so by linearly mapping each patch (that is now an 8-dimensional vector in our example) to 3 distinct vectors: q, k, and v (query, key, value).\n", - "\n", - "Then, for a single patch, we are going to compute the dot product between its q vector with all of the k vectors, divide by the square root of the dimensionality of these vectors (sqrt(8)), softmax these so-called attention cues, and finally multiply each attention cue with the v vectors associated with the different k vectors and sum all up.\n", - "\n", - "In this way, each patch assumes a new value that is based on its similarity (after the linear mapping to q, k, and v) with other patches. This whole procedure, however, is carried out H times on H sub-vectors of our current 8-dimensional patches, where H is the number of Heads. \n", - "\n", - "Once all results are obtained, they are concatenated together. Finally, the result is passed through a linear layer (for good measure).\n", - "\n", - "The intuitive idea behind attention is that it allows modeling the relationship between the inputs. What makes a ‘0’ a zero are not the individual pixel values, but how they relate to each other.\n", - "\n", - "This is implemented in the MSA class:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "CIoyR-QsOruC" - }, - "outputs": [], - "source": [ - "class MSA(nn.Module):\n", - " def __init__(self, d, n_heads=2):\n", - " super().__init__()\n", - " self.d = d\n", - " self.n_heads = n_heads\n", - "\n", - " assert d % n_heads == 0, f\"Can't divide dimension {d} into {n_heads} heads\"\n", - "\n", - " d_head = int(d / n_heads)\n", - " self.q_mappings = nn.ModuleList(\n", - " [nn.Linear(d_head, d_head) for _ in range(self.n_heads)]\n", - " )\n", - " self.k_mappings = nn.ModuleList(\n", - " [nn.Linear(d_head, d_head) for _ in range(self.n_heads)]\n", - " )\n", - " self.v_mappings = nn.ModuleList(\n", - " [nn.Linear(d_head, d_head) for _ in range(self.n_heads)]\n", - " )\n", - " self.d_head = d_head\n", - " self.softmax = nn.Softmax(dim=-1)\n", - "\n", - " def forward(self, sequences):\n", - " # Sequences has shape (N, seq_length, token_dim)\n", - " # We go into shape (N, seq_length, n_heads, token_dim / n_heads)\n", - " # And come back to (N, seq_length, item_dim) (through concatenation)\n", - " result = []\n", - " for sequence in sequences:\n", - " seq_result = []\n", - " for head in range(self.n_heads):\n", - " q_mapping = self.q_mappings[head]\n", - " k_mapping = self.k_mappings[head]\n", - " v_mapping = self.v_mappings[head]\n", - "\n", - " seq = sequence[:, head * self.d_head : (head + 1) * self.d_head]\n", - " q, k, v = q_mapping(seq), k_mapping(seq), v_mapping(seq)\n", - "\n", - " #\n", - " # TO DO: implement attention computation\n", - " #\n", - " attention = \n", - "\n", - " seq_result.append(attention)\n", - " \n", - " result.append(torch.hstack(seq_result))\n", - " return torch.cat([torch.unsqueeze(r, dim=0) for r in result])" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Notice that, for each head, we create distinct Q, K, and V mapping functions (square matrices of size 4x4 in our example).\n", - "\n", - "Since our inputs will be sequences of size (N, 50, 8), and we only use 2 heads, we will at some point have an (N, 50, 2, 4) tensor, use a nn.Linear(4, 4) module on it, and then come back, after concatenation, to an (N, 50, 8) tensor.\n", - "\n", - "Also notice that using loops is not the most efficient way to compute the multi-head self-attention, but it makes the 
code much clearer for learning." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Transformer Encoder Blocks\n", - "\n", - "The next step is to create the transformer encoder block class.\n", - "\n", - "Layer normalization (LN) is a popular block that, given an input, subtracts its mean and divides by the standard deviation. It is applied to the last dimension only. We can thus make each of our 50x8 matrices (representing a single sequence) have mean 0 and std 1. After we run our (N, 50, 8) tensor through LN, we still get the same dimensionality.\n", - "\n", - "Also, We will be using residual connection that consists in adding the original input to the result of some computation. This, intuitively, allows a network to become more powerful while also preserving the set of possible functions that the model can approximate.\n", - "\n", - "We will add a residual connection that will add our original (N, 50, 8) tensor to the (N, 50, 8) obtained after LN and MSA. \n", - "\n", - "Next is to add a simple residual connection between what we already have and what we get after passing the current tensor through another LN and an MLP. The MLP is composed of two layers, where the hidden layer typically is four times as big (this is a parameter).\n", - "\n", - "The transformer encoder block class (which will be a component of the future ViT class) is thus as follows:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "sv8wnTx4OwP7" - }, - "outputs": [], - "source": [ - "class ViTBlock(nn.Module):\n", - " def __init__(self, hidden_d, n_heads, mlp_ratio=4):\n", - " super().__init__()\n", - " self.hidden_d = hidden_d\n", - " self.n_heads = n_heads\n", - "\n", - " self.norm1 = nn.LayerNorm(hidden_d)\n", - " self.mhsa = MSA(hidden_d, n_heads)\n", - " self.norm2 = nn.LayerNorm(hidden_d)\n", - " self.mlp = nn.Sequential(\n", - " nn.Linear(hidden_d, mlp_ratio * hidden_d),\n", - " nn.GELU(),\n", - " nn.Linear(mlp_ratio * hidden_d, hidden_d),\n", - " )\n", - "\n", - " def forward(self, x):\n", - " #\n", - " # TO DO: implement the forward pass\n", - " #\n", - " out = \n", - " \n", - " return out" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### ViT model\n", - "\n", - "Now that the encoder block is ready, we just need to insert it in our bigger ViT model which is responsible for patchifying before the transformer blocks, and carrying out the classification after.\n", - "\n", - "To help classification, we will use an additional **classification token** to the input sequence. This is a special token that we add to our model that has the role of capturing information about the other tokens. This will happen with the MSA block. When information about all other tokens will be present here, we will be able to classify the image using only this special token. The initial value of the special token (the one fed to the transformer encoder) is a parameter of the model that needs to be learned.\n", - "\n", - "Thus, we will add a parameter to our model and convert our (N, 49, 8) tokens tensor to an (N, 50, 8) tensor (we add the special token to each sequence).\n", - "\n", - "We could have an arbitrary number of transformer blocks. In this example, to keep it simple, I will use only 2. 
We also add a parameter to know how many heads does each encoder block will use.\n", - "\n", - "Finally, we can extract just the classification token (first token) out of our N sequences, and use each token to get N classifications.\n", - "\n", - "Since we decided that each token is an 8-dimensional vector, and since we have 10 possible digits, we can implement the classification MLP as a simple 8x10 matrix, activated with the SoftMax function.\n", - "\n", - "The output of our model shoud be an (N, 10) tensor. " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "8Na9BTgnOy3o" - }, - "outputs": [], - "source": [ - "class ViT(nn.Module):\n", - " def __init__(self, chw, n_patches=7, n_blocks=2, hidden_d=8, n_heads=2, out_d=10):\n", - " # Super constructor\n", - " super().__init__()\n", - "\n", - " # Attributes\n", - " self.chw = chw # ( C , H , W )\n", - " self.n_patches = n_patches\n", - " self.n_blocks = n_blocks\n", - " self.n_heads = n_heads\n", - " self.hidden_d = hidden_d\n", - "\n", - " # Input and patches sizes\n", - " assert (\n", - " chw[1] % n_patches == 0\n", - " ), \"Input shape not entirely divisible by number of patches\"\n", - " assert (\n", - " chw[2] % n_patches == 0\n", - " ), \"Input shape not entirely divisible by number of patches\"\n", - " self.patch_size = (chw[1] / n_patches, chw[2] / n_patches)\n", - "\n", - " # 1) Linear mapper\n", - " self.input_d = int(chw[0] * self.patch_size[0] * self.patch_size[1])\n", - " self.linear_mapper = nn.Linear(self.input_d, self.hidden_d)\n", - "\n", - " # 2) Learnable classification token\n", - " self.class_token = nn.Parameter(torch.rand(1, self.hidden_d))\n", - "\n", - " # 3) Positional embedding\n", - " self.register_buffer(\n", - " \"positional_embeddings\",\n", - " get_positional_embeddings(n_patches**2 + 1, hidden_d),\n", - " persistent=False,\n", - " )\n", - "\n", - " # 4) Transformer encoder blocks\n", - " self.blocks = nn.ModuleList(\n", - " [ViTBlock(hidden_d, n_heads) for _ in range(n_blocks)]\n", - " )\n", - "\n", - " # 5) Classification MLPk\n", - " self.mlp = nn.Sequential(nn.Linear(self.hidden_d, out_d), nn.Softmax(dim=-1))\n", - "\n", - " def forward(self, images):\n", - "\n", - " #\n", - " # TO DO: implement the forward pass\n", - " #\n", - "\n", - " # Dividing images into patches\n", - " n, c, h, w = images.shape\n", - " patches = \n", - "\n", - " # Running linear layer tokenization\n", - " # Map the vector corresponding to each patch to the hidden size dimension\n", - " tokens = \n", - "\n", - " # Adding classification token to the tokens\n", - " tokens = torch.cat((self.class_token.expand(n, 1, -1), tokens), dim=1)\n", - "\n", - " # Adding positional embedding\n", - " out = tokens + self.positional_embeddings.repeat(n, 1, 1)\n", - "\n", - " # Transformer Blocks\n", - " for block in self.blocks:\n", - " out = \n", - "\n", - " # Getting the classification token only\n", - " out = \n", - "\n", - " # Map to output dimension, output category distribution\n", - " out = \n", - "\n", - " return out " - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### ViT training\n", - "\n", - "The ViT model being built, the next step is to train it on the MNIST dataset." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "First, we initialize the model and the hyperparameters." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n", - "print(\n", - " \"Using device: \",\n", - " device,\n", - " f\"({torch.cuda.get_device_name(device)})\" if torch.cuda.is_available() else \"\",\n", - ")\n", - "\n", - "model = ViT(\n", - " (1, 28, 28), n_patches=7, n_blocks=2, hidden_d=8, n_heads=2, out_d=10\n", - ").to(device)\n", - "\n", - "N_EPOCHS = 5\n", - "LR = 0.005" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Training of the ViT model:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "optimizer = Adam(model.parameters(), lr=LR)\n", - "criterion = CrossEntropyLoss()\n", - "for epoch in range(N_EPOCHS):\n", - " train_loss = 0.0\n", - " for batch in train_loader:\n", - " x, y = batch\n", - " x, y = x.to(device), y.to(device)\n", - " y_hat = model(x)\n", - " loss = criterion(y_hat, y)\n", - "\n", - " train_loss += loss.detach().cpu().item() / len(train_loader)\n", - "\n", - " #\n", - " # TO DO : implement the gradients computation and the parameters update\n", - " #\n", - " \n", - "\n", - " print(f\"Epoch {epoch + 1}/{N_EPOCHS} loss: {train_loss:.2f}\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### ViT test\n", - "\n", - "Finally, let's test the trained model." - ] + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# TD3: Vision Transformer (ViT)\n", + "\n", + "In this TD, you must modify this notebook to complete the code (**# TO DO comments**) and complete the **proposed experiments**. To do this,\n", + "\n", + "1. Fork this repository\n", + "2. Clone your forked repository on your local computer\n", + "3. Add your code and answer the questions\n", + "4. Commit and push regularly\n", + "\n", + "**The last commit is due on Wednesday, January 8, 2025**. Later commits will not be taken into account.\n", + "\n", + "As the computation is heavy, particularly during training, we encourage you to use a GPU. If your laptob is not equiped, you may use one of these remote jupyter servers, where you can select the execution on GPU :\n", + "\n", + "1) [jupyter.mi90.ec-lyon.fr](https://jupyter.mi90.ec-lyon.fr/)\n", + "\n", + "This server is accessible within the campus network. If outside, you need to use a VPN. Before executing the notebook, select the kernel \"Python PyTorch\" to run it on GPU and have access to PyTorch module.\n", + "\n", + "2) [Google Colaboratory](https://colab.research.google.com/)\n", + "\n", + "Before executing the notebook, select the execution on GPU : \"Exécution\" Menu -> \"Modifier le type d'exécution\" and select \"T4 GPU\". " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Goal of the TD\n", + "\n", + "Transformers have been introduced by [Vaswani et al. in 2017](https://arxiv.org/abs/1706.03762) in the context of NLP (Natural Language Processing), and particulary for Machine Translation.\n", + "\n", + "Its great success has led to its adaptation to various applications, including image classification. In this trend, [Dosovitskiy et al. 
in 2020](https://arxiv.org/abs/2010.11929) have proposed Vision Transformers (ViT) that we will study and implement from scratch in this TD.\n", + "\n", + "The principle is illustrated in the following picture from this paper.\n", + "\n", + "\n", + "\n", + "First, an input image is “cut” into equally sized sub-images.\n", + "\n", + "Each such sub-image goes through a linear embedding. From then on, each sub-image becomes a one-dimensional vector.\n", + "\n", + "A positional embedding is then added to these vectors (tokens). The positional embedding allows the network to know where each sub-image is positioned originally in the image. Without this information, the network would not be able to know where each such image would be placed, leading to potentially wrong predictions.\n", + "\n", + "These tokens are then passed, together with a special classification token, to the transformer encoder blocks, where each is composed of: a Layer Normalization (LN), followed by a Multi-head Self Attention (MSA) and a residual connection. Then a second LN, a Multi-Layer Perceptron (MLP), and again a residual connection. These blocks are connected back-to-back.\n", + "\n", + "Finally, a classification MLP head is used for the final classification only on the special classification token, which by the end of this process has global information about the picture.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "1. The input image is cut into equally sized sub-images\n", + "2. Each sub-image becomes a one-dimensional vector through a `linear embedding`\n", + "3. A positional embedding is added to these vectors (tokens) -> The positional embedding indicates the original position of that sub-image\n", + "4. The tokens are passed with a \"special classification token\" to the `transformer encoder blocks`\n", + "    * Each `transformer encoder block` is composed of: a `Layer Normalization (LN)`, followed by a `Multi-head Self Attention (MSA)` and a `residual connection`\n", + "    * Then, a second `LN`, a `Multi-Layer Perceptron (MLP)`, and again a `residual connection` -> These blocks are connected \"back-to-back\"\n", + "5. Finally, a `classification MLP head` is used for the final classification only on the \"special classification token\", which by the end of the process has global information about the picture." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Implementation of the ViT model" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "First, we import the required modules." + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" }, + "id": "wEmbaOA4Okuo", + "outputId": "2bf953f2-2a18-44f3-c537-db8c6d58d4ee" + }, + "outputs": [], + "source": [ + "# Import modules\n", + "import numpy as np\n", + "import torch\n", + "import torch.nn as nn\n", + "from torch.nn import CrossEntropyLoss\n", + "from torch.optim import Adam\n", + "from torch.utils.data import DataLoader\n", + "from torchvision.datasets.mnist import MNIST\n", + "from torchvision.transforms import ToTensor" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "For this first experiment, we will use the MNIST dataset that contains 28x28 binary pixels images of hand-written digits ([0–9])."
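As a quick optional check of the data pipeline (a sketch that assumes the `train_loader` created in the next cell already exists), one training batch should have shape (128, 1, 28, 28) with one integer label per image:

```python
# Inspect one training batch; run this after the DataLoader cell below.
x, y = next(iter(train_loader))
print(x.shape, x.min().item(), x.max().item())  # torch.Size([128, 1, 28, 28]), pixel values in [0, 1]
print(y.shape, y[:10])                          # torch.Size([128]) and the first ten digit labels
```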
+ ] + }, + { + "cell_type": "code", + "execution_count": 14, + "metadata": {}, + "outputs": [], + "source": [ + "# Load data\n", + "transform = ToTensor()\n", + "\n", + "train_set = MNIST(\n", + " root=\"datasets\", train=True, download=True, transform=transform\n", + ")\n", + "test_set = MNIST(\n", + " root=\"datasets\", train=False, download=True, transform=transform\n", + ")\n", + "\n", + "train_loader = DataLoader(train_set, shuffle=True, batch_size=128)\n", + "test_loader = DataLoader(test_set, shuffle=False, batch_size=128)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### \"Patchification\"\n", + "The transformer encoder was originally developed with sequence data in mind, such as English sentences. However, as an image is not a sequence, we need to “sequencify” an image. To do this, we break it into multiple sub-images and map each sub-image to a vector.\n", + "\n", + "We do so by simply reshaping our input, which has size (N, C, H, W), where N is the batch size, C the number of channels and (H,W) the image dimension. In the case of MNIST, dimensions are (N, 1, 28, 28). The target dimension is (N, #Patches, Patch dimensionality), where the dimensionality of a patch is adjusted accordingly.\n", + "\n", + "In this example, we break each (1, 28, 28) into 7x7 patches (hence, each of size 4x4). That is, we are going to obtain 7x7=49 sub-images out of a single image.\n", + "\n", + "Thus, we reshape input (N, 1, 28, 28) to (N, PxP, C x H/P x W/P) = (N, 49, 16)\n", + "\n", + "Notice that, while each patch is a picture of size 1x4x4, we flatten it to a 16-dimensional vector. Also, in this case, we only had a single color channel. If we had multiple color channels, those would also have been flattened into the vector." + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": { + "id": "fxhHKKDFOoHp" + }, + "outputs": [], + "source": [ + "def patchify(images, n_patches):\n", + " n, c, h, w = images.shape\n", + "\n", + " assert h == w, \"Patchify method is implemented for square images only\"\n", + "\n", + " patches = torch.zeros(n, n_patches**2, h * w * c // n_patches**2)\n", + " patch_size = h // n_patches\n", + "\n", + " for idx, image in enumerate(images):\n", + " for i in range(n_patches):\n", + " for j in range(n_patches):\n", + " patch = image[\n", + " :,\n", + " i * patch_size : (i + 1) * patch_size,\n", + " j * patch_size : (j + 1) * patch_size,\n", + " ]\n", + " patches[idx, i * n_patches + j] = patch.flatten()\n", + " return patches" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Original tensor images with shape: (2, 1, 28, 28)\n", + "* n = 2 images in the batch\n", + "* c = 1 color channel (gray scale)\n", + "* h = 28 pixels\n", + "* w = 28 pixels\n", + "\n", + "If `n_patches` = 7:\n", + "* 7 * 7 = 49 patches per image\n", + "* Each patch has (1 * 4 * 4) = 16 elements\n", + "\n", + "The tensor `patches` has shape (2, 49, 16)\n", + "* n = 2: Number of images\n", + "* P * P = 49: patches per image\n", + "* H/P * W/P = 28/7 * 28/7 = 4 * 4 = 16: Dimensionality of each patch" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Linear embedding\n", + "\n", + "Now that we have our flattened patches, we can map each of them through a Linear mapping. While each patch was a 4x4=16 dimensional vector, the linear mapping can map to any arbitrary vector size. 
Thus, we will use for this a parameter `hidden_d` for \"hidden dimension\".\n", + "\n", + "In this example, we will use a hidden dimension of 8, but in principle, any number can be put here. We will thus be mapping each 16-dimensional patch to an 8-dimensional patch.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Positional encoding\n", + "\n", + "Positional encoding allows the model to understand where each patch would be placed in the original image. While it is theoretically possible to learn such positional embeddings, previous work by [Vaswani et al. in 2017](https://arxiv.org/abs/1706.03762) suggests that we can just add sine and cosine waves.\n", + "\n", + "In particular, positional encoding adds high-frequency values to the first dimensions and lower-frequency values to the latter dimensions.\n", + "\n", + "In each sequence, for token i we add to its j-th coordinate the following value:\n", + "\n", + "$p_{i,j} = \\sin\\left(\\frac{i}{10000^{j/d}}\\right)$ if $j$ is even, and $p_{i,j} = \\cos\\left(\\frac{i}{10000^{(j-1)/d}}\\right)$ if $j$ is odd.\n", + "\n", + "This positional embedding is a function of the number of elements in the sequence and the dimensionality of each element. Thus, it is always a 2-dimensional tensor or “rectangle”.\n", + "\n", + "Here is a simple function that, given the number of tokens and the dimensionality of each of them, outputs a matrix where each coordinate (i,j) is the value to be added to token i in dimension j.\n", + "\n", + "This positional encoding is added to our model after the linear mapping and the addition of the class token." + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": { + "id": "bOaI_5SrO4vB" + }, + "outputs": [], + "source": [ + "def get_positional_embeddings(sequence_length, d):\n", + "    result = torch.ones(sequence_length, d)\n", + "    for i in range(sequence_length):\n", + "        for j in range(d):\n", + "            result[i][j] = (\n", + "                np.sin(i / (10000 ** (j / d)))\n", + "                if j % 2 == 0\n", + "                else np.cos(i / (10000 ** ((j - 1) / d)))\n", + "            )\n", + "    return result" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Recall that the tensor `patches` has shape (2, 49, 16):\n", + "* 49 patches (for an image divided into 7 * 7 sub-images)\n", + "* each patch is a 16-dimensional vector (4 * 4)\n", + "\n", + "In `get_positional_embeddings(sequence_length, d)`:\n", + "\n", + "1. `i` runs over the `sequence_length` tokens\n", + "2. `j` runs over the `d` coordinates of each token\n", + "\n", + "In our ViT, it will be called with `sequence_length = 49 + 1 = 50` (the patches plus the class token) and `d = hidden_d = 8`.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Multi-Head Self-Attention\n", + "\n", + "The objective is now that, for a single image, each patch has to be updated based on some similarity measure with the other patches. We do so by linearly mapping each patch (that is now an 8-dimensional vector in our example) to 3 distinct vectors: q, k, and v (query, key, value).\n", + "\n", + "Then, for a single patch, we are going to compute the dot product between its q vector and all of the k vectors, divide by the square root of the dimensionality of these vectors (sqrt(8)), softmax these so-called attention cues, and finally multiply each attention cue with the v vectors associated with the different k vectors and sum all up.\n", + "\n", + "In this way, each patch assumes a new value that is based on its similarity (after the linear mapping to q, k, and v) with other patches. This whole procedure, however, is carried out H times on H sub-vectors of our current 8-dimensional patches, where H is the number of Heads. \n", + "\n", + "Once all results are obtained, they are concatenated together. 
Finally, the result is passed through a linear layer (for good measure).\n", + "\n", + "The intuitive idea behind attention is that it allows modeling the relationship between the inputs. What makes a ‘0’ a zero are not the individual pixel values, but how they relate to each other.\n", + "\n", + "This is implemented in the MSA class:" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [], + "source": [ + "def print_info(data, name):\n", + "    print(f'Printing {name} data info:')\n", + "    print(f'Data: {data}')\n", + "    print(f'Type: {type(data)}')\n", + "    print(f'Shape: {data.shape}')" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": { + "id": "CIoyR-QsOruC" + }, + "outputs": [], + "source": [ + "class MSA(nn.Module):\n", + "    def __init__(self, d, n_heads=2):\n", + "        super().__init__()\n", + "        self.d = d\n", + "        self.n_heads = n_heads\n", + "\n", + "        assert d % n_heads == 0, f\"Can't divide dimension {d} into {n_heads} heads\"\n", + "\n", + "        d_head = int(d / n_heads)\n", + "        self.q_mappings = nn.ModuleList(\n", + "            [nn.Linear(d_head, d_head) for _ in range(self.n_heads)]\n", + "        )\n", + "        self.k_mappings = nn.ModuleList(\n", + "            [nn.Linear(d_head, d_head) for _ in range(self.n_heads)]\n", + "        )\n", + "        self.v_mappings = nn.ModuleList(\n", + "            [nn.Linear(d_head, d_head) for _ in range(self.n_heads)]\n", + "        )\n", + "        self.d_head = d_head\n", + "        self.softmax = nn.Softmax(dim=-1)\n", + "\n", + "    def forward(self, sequences):\n", + "        # Sequences has shape (N, seq_length, token_dim)\n", + "        # We go into shape (N, seq_length, n_heads, token_dim / n_heads)\n", + "        # And come back to (N, seq_length, item_dim) (through concatenation)\n", + "        result = []\n", + "        for sequence in sequences:\n", + "            seq_result = []\n", + "            for head in range(self.n_heads):\n", + "                q_mapping = self.q_mappings[head]\n", + "                k_mapping = self.k_mappings[head]\n", + "                v_mapping = self.v_mappings[head]\n", + "\n", + "                seq = sequence[:, head * self.d_head : (head + 1) * self.d_head]\n", + "                q, k, v = q_mapping(seq), k_mapping(seq), v_mapping(seq)\n", + "\n", + "                #\n", + "                # TO DO: implement attention computation\n", + "                #\n", + "\n", + "                # Calculate attention\n", + "                # Step 1: Compute the dot product between q and k\n", + "                attention_scores = torch.matmul(q, k.transpose(-2, -1))\n", + "\n", + "                # Step 2: Normalize by the square root of the head dimension\n", + "                attention_scores = attention_scores / (self.d_head ** 0.5)\n", + "\n", + "                attention_weights = self.softmax(attention_scores)\n", + "\n", + "                attention = torch.matmul(attention_weights, v)\n", + "\n", + "                seq_result.append(attention)\n", + "\n", + "            result.append(torch.hstack(seq_result))\n", + "        return torch.cat([torch.unsqueeze(r, dim=0) for r in result])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Notice that, for each head, we create distinct Q, K, and V mapping functions (square matrices of size 4x4 in our example).\n", + "\n", + "Since our inputs will be sequences of size (N, 50, 8), and we only use 2 heads, we will at some point have an (N, 50, 2, 4) tensor, use a nn.Linear(4, 4) module on it, and then come back, after concatenation, to an (N, 50, 8) tensor.\n", + "\n", + "Also notice that using loops is not the most efficient way to compute the multi-head self-attention, but it makes the code much clearer for learning."
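For reference, the per-head computation written above with loops can also be expressed as a single batched formula. The following is only a sketch (the function name is ours, not part of the TD), useful to sanity-check the MSA cell; it assumes q, k, v of shape (..., seq_length, d_head), exactly what each head receives above.

```python
import torch

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (..., seq_length, d_head)
    d_head = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / (d_head ** 0.5)  # (..., seq_length, seq_length)
    weights = torch.softmax(scores, dim=-1)             # attention cues, summing to 1 over the keys
    return weights @ v                                  # weighted sum of the value vectors

# For one head of the MSA above, q, k and v have shape (50, 4),
# so the result is again (50, 4); the two heads are then concatenated back to (50, 8).
```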
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Transformer Encoder Blocks\n", + "\n", + "The next step is to create the transformer encoder block class.\n", + "\n", + "Layer normalization (LN) is a popular block that, given an input, subtracts its mean and divides by the standard deviation. It is applied to the last dimension only. We can thus make each of our 50x8 matrices (representing a single sequence) have mean 0 and std 1. After we run our (N, 50, 8) tensor through LN, we still get the same dimensionality.\n", + "\n", + "Also, We will be using residual connection that consists in adding the original input to the result of some computation. This, intuitively, allows a network to become more powerful while also preserving the set of possible functions that the model can approximate.\n", + "\n", + "We will add a residual connection that will add our original (N, 50, 8) tensor to the (N, 50, 8) obtained after LN and MSA. \n", + "\n", + "Next is to add a simple residual connection between what we already have and what we get after passing the current tensor through another LN and an MLP. The MLP is composed of two layers, where the hidden layer typically is four times as big (this is a parameter).\n", + "\n", + "The transformer encoder block class (which will be a component of the future ViT class) is thus as follows:" + ] + }, + { + "cell_type": "code", + "execution_count": 32, + "metadata": { + "id": "sv8wnTx4OwP7" + }, + "outputs": [], + "source": [ + "class ViTBlock(nn.Module):\n", + " def __init__(self, hidden_d, n_heads, mlp_ratio=4):\n", + " super().__init__()\n", + " self.hidden_d = hidden_d\n", + " self.n_heads = n_heads\n", + "\n", + " self.norm1 = nn.LayerNorm(hidden_d)\n", + " self.mhsa = MSA(hidden_d, n_heads)\n", + " self.norm2 = nn.LayerNorm(hidden_d)\n", + " self.mlp = nn.Sequential(\n", + " nn.Linear(hidden_d, mlp_ratio * hidden_d),\n", + " nn.GELU(),\n", + " nn.Linear(mlp_ratio * hidden_d, hidden_d),\n", + " )\n", + "\n", + " def forward(self, x):\n", + " # Step 1: \n", + " out = x + self.mhsa(self.norm1(x))\n", + "\n", + " # Step 2:\n", + " # residual = x\n", + " # x = self.norm2(x)\n", + " # x = self.mlp(x)\n", + " # x += residual\n", + " out = out + self.mlp(self.norm2(out))\n", + "\n", + " return out" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### ViT model\n", + "\n", + "Now that the encoder block is ready, we just need to insert it in our bigger ViT model which is responsible for patchifying before the transformer blocks, and carrying out the classification after.\n", + "\n", + "To help classification, we will use an additional **classification token** to the input sequence. This is a special token that we add to our model that has the role of capturing information about the other tokens. This will happen with the MSA block. When information about all other tokens will be present here, we will be able to classify the image using only this special token. The initial value of the special token (the one fed to the transformer encoder) is a parameter of the model that needs to be learned.\n", + "\n", + "Thus, we will add a parameter to our model and convert our (N, 49, 8) tokens tensor to an (N, 50, 8) tensor (we add the special token to each sequence).\n", + "\n", + "We could have an arbitrary number of transformer blocks. In this example, to keep it simple, I will use only 2. 
We also add a parameter to know how many heads does each encoder block will use.\n", + "\n", + "Finally, we can extract just the classification token (first token) out of our N sequences, and use each token to get N classifications.\n", + "\n", + "Since we decided that each token is an 8-dimensional vector, and since we have 10 possible digits, we can implement the classification MLP as a simple 8x10 matrix, activated with the SoftMax function.\n", + "\n", + "The output of our model shoud be an (N, 10) tensor. " + ] + }, + { + "cell_type": "code", + "execution_count": 33, + "metadata": { + "id": "8Na9BTgnOy3o" + }, + "outputs": [], + "source": [ + "class ViT(nn.Module):\n", + " def __init__(self, chw, n_patches=7, n_blocks=2, hidden_d=8, n_heads=2, out_d=10):\n", + " # Super constructor\n", + " super().__init__()\n", + "\n", + " # Attributes\n", + " self.chw = chw # ( C , H , W )\n", + " self.n_patches = n_patches\n", + " self.n_blocks = n_blocks\n", + " self.n_heads = n_heads\n", + " self.hidden_d = hidden_d\n", + "\n", + " # Input and patches sizes\n", + " assert (\n", + " chw[1] % n_patches == 0\n", + " ), \"Input shape not entirely divisible by number of patches\"\n", + " assert (\n", + " chw[2] % n_patches == 0\n", + " ), \"Input shape not entirely divisible by number of patches\"\n", + " self.patch_size = (chw[1] / n_patches, chw[2] / n_patches)\n", + "\n", + " # 1) Linear mapper\n", + " self.input_d = int(chw[0] * self.patch_size[0] * self.patch_size[1])\n", + " self.linear_mapper = nn.Linear(self.input_d, self.hidden_d)\n", + "\n", + " # 2) Learnable classification token\n", + " self.class_token = nn.Parameter(torch.rand(1, self.hidden_d))\n", + "\n", + " # 3) Positional embedding\n", + " self.register_buffer(\n", + " \"positional_embeddings\",\n", + " get_positional_embeddings(n_patches**2 + 1, hidden_d),\n", + " persistent=False,\n", + " )\n", + "\n", + " # 4) Transformer encoder blocks\n", + " self.blocks = nn.ModuleList(\n", + " [ViTBlock(hidden_d, n_heads) for _ in range(n_blocks)]\n", + " )\n", + "\n", + " # 5) Classification MLPk\n", + " self.mlp = nn.Sequential(nn.Linear(self.hidden_d, out_d), nn.Softmax(dim=-1))\n", + "\n", + " def forward(self, images):\n", + "\n", + " #\n", + " # TO DO: implement the forward pass\n", + " #\n", + "\n", + " # Dividing images into patches\n", + " n, c, h, w = images.shape\n", + " patches = patchify(images, self.n_patches)\n", + "\n", + " # Running linear layer tokenization\n", + " # Map the vector corresponding to each patch to the hidden size dimension\n", + " tokens = self.linear_mapper(patches)\n", + "\n", + " # Adding classification token to the tokens\n", + " tokens = torch.cat((self.class_token.expand(n, 1, -1), tokens), dim=1)\n", + "\n", + " # Adding positional embedding\n", + " out = tokens + self.positional_embeddings.repeat(n, 1, 1)\n", + "\n", + " # Transformer Blocks\n", + " for block in self.blocks:\n", + " out = block(out)\n", + "\n", + " # Getting the classification token only\n", + " out = out[:, 0]\n", + "\n", + " # Map to output dimension, output category distribution\n", + " out = self.mlp(out)\n", + "\n", + " return out " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### ViT training\n", + "\n", + "The ViT model being built, the next step is to train it on the MNIST dataset." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "First, we initialize the model and the hyperparameters." 
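Before training, a small shape check can confirm that the forward pass behaves as described (a suggested sketch only, not required by the TD): random images of size (N, 1, 28, 28) should give an (N, 10) output whose rows sum to 1 because of the final Softmax.

```python
import torch

# Sanity check of the ViT forward pass on random data.
sanity_model = ViT((1, 28, 28), n_patches=7, n_blocks=2, hidden_d=8, n_heads=2, out_d=10)
out = sanity_model(torch.randn(4, 1, 28, 28))
print(out.shape)         # torch.Size([4, 10])
print(out.sum(dim=-1))   # each entry close to 1.0 (Softmax output)
```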
+ ] + }, + { + "cell_type": "code", + "execution_count": 34, + "metadata": {}, + "outputs": [ { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "h55dVGGhOaPI" - }, - "outputs": [], - "source": [ - "with torch.no_grad():\n", - " correct, total = 0, 0\n", - " test_loss = 0.0\n", - " for batch in test_loader:\n", - " x, y = batch\n", - " x, y = x.to(device), y.to(device)\n", - "\n", - " #\n", - " # TO DO: implement the computation of the loss and the accuracy (correct)\n", - " # \n", - " \n", - "\n", - " print(f\"Test loss: {test_loss:.2f}\")\n", - " print(f\"Test accuracy: {correct / total * 100:.2f}%\")\n" - ] - }, + "name": "stdout", + "output_type": "stream", + "text": [ + "Using device: cpu \n" + ] + } + ], + "source": [ + "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n", + "# device = torch.device(\"mps\")\n", + "print(\n", + " \"Using device: \",\n", + " device,\n", + " f\"({torch.cuda.get_device_name(device)})\" if torch.cuda.is_available() else \"\",\n", + ")\n", + "\n", + "model = ViT(\n", + " (1, 28, 28), n_patches=7, n_blocks=2, hidden_d=8, n_heads=2, out_d=10\n", + ").to(device)\n", + "\n", + "N_EPOCHS = 5\n", + "LR = 0.005" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Training of the ViT model:" + ] + }, + { + "cell_type": "code", + "execution_count": 35, + "metadata": {}, + "outputs": [ { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Further experiments\n", - "\n", - "1. Adapt the code to apply the ViT model on CIFAR dataset.\n", - "2. Make use of a validation set to evaluate overfitting.\n", - "3. Evaluate the model with a dimension of 16 for the tokens and 4 encoder blocks." - ] + "name": "stdout", + "output_type": "stream", + "text": [ + "Epoch 1/5 loss: 2.17\n", + "Epoch 2/5 loss: 2.03\n", + "Epoch 3/5 loss: 1.89\n", + "Epoch 4/5 loss: 1.80\n", + "Epoch 5/5 loss: 1.78\n" + ] } - ], - "metadata": { - "accelerator": "GPU", - "colab": { - "gpuType": "T4", - "provenance": [] - }, - "kernelspec": { - "display_name": "Python 3", - "name": "python3" - }, - "language_info": { - "name": "python" + ], + "source": [ + "optimizer = Adam(model.parameters(), lr=LR)\n", + "criterion = CrossEntropyLoss()\n", + "\n", + "for epoch in range(N_EPOCHS):\n", + " train_loss = 0.0\n", + " for batch in train_loader:\n", + " x, y = batch\n", + " x, y = x.to(device), y.to(device)\n", + " y_hat = model(x)\n", + " loss = criterion(y_hat, y)\n", + "\n", + " train_loss += loss.detach().cpu().item() / len(train_loader)\n", + "\n", + " # Clean previus gradients -> Zero gradients\n", + " optimizer.zero_grad()\n", + "\n", + " # Backward pass\n", + " loss.backward()\n", + "\n", + " # Update the model's parameters\n", + " optimizer.step() \n", + "\n", + " print(f\"Epoch {epoch + 1}/{N_EPOCHS} loss: {train_loss:.2f}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### ViT test\n", + "\n", + "Finally, let's test the trained model." 
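One habit worth adding before evaluation (a suggestion on top of the TD code): put the model in evaluation mode in addition to disabling gradient tracking. With this small ViT it changes nothing numerically, since there is no dropout or batch normalization, but it avoids surprises if the architecture grows.

```python
model.eval()           # evaluation mode: relevant once dropout/batch norm are added
with torch.no_grad():  # no gradients are needed at test time
    x, y = next(iter(test_loader))
    preds = model(x.to(device)).argmax(dim=1)
    print(preds[:10].cpu(), y[:10])  # compare a few predictions with the true labels
```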
+ ] + }, + { + "cell_type": "code", + "execution_count": 36, + "metadata": { + "id": "h55dVGGhOaPI" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Test loss: 1.73\n", + "Test accuracy: 73.40%\n" + ] } + ], + "source": [ + "with torch.no_grad():\n", + " correct, total = 0, 0\n", + " test_loss = 0.0\n", + " for batch in test_loader:\n", + " x, y = batch\n", + " x, y = x.to(device), y.to(device)\n", + "\n", + " #\n", + " # TO DO: implement the computation of the loss and the accuracy (correct)\n", + " # \n", + "\n", + " # Forward pass\n", + " y_hat = model(x)\n", + "\n", + " # Calculate the loss\n", + " loss = criterion(y_hat, y)\n", + " test_loss += loss.detach().cpu().item() / len(test_loader)\n", + "\n", + " # Calculate the accuracy\n", + " _, predicted = torch.max(y_hat, 1)\n", + " total += y.size(0)\n", + " correct += (predicted == y).sum().item() \n", + "\n", + " print(f\"Test loss: {test_loss:.2f}\")\n", + " print(f\"Test accuracy: {correct / total * 100:.2f}%\")\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Further experiments\n", + "\n", + "1. Adapt the code to apply the ViT model on CIFAR dataset.\n", + "2. Make use of a validation set to evaluate overfitting.\n", + "3. Evaluate the model with a dimension of 16 for the tokens and 4 encoder blocks." + ] + } + ], + "metadata": { + "accelerator": "GPU", + "colab": { + "gpuType": "T4", + "provenance": [] + }, + "kernelspec": { + "display_name": "Python 3", + "name": "python3" }, - "nbformat": 4, - "nbformat_minor": 0 + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.7" + } + }, + "nbformat": 4, + "nbformat_minor": 0 } diff --git a/datasets/MNIST/raw/t10k-images-idx3-ubyte b/datasets/MNIST/raw/t10k-images-idx3-ubyte new file mode 100644 index 0000000000000000000000000000000000000000..1170b2cae98de7a524b163fcc379ac8f00925b12 Binary files /dev/null and b/datasets/MNIST/raw/t10k-images-idx3-ubyte differ diff --git a/datasets/MNIST/raw/t10k-images-idx3-ubyte.gz b/datasets/MNIST/raw/t10k-images-idx3-ubyte.gz new file mode 100644 index 0000000000000000000000000000000000000000..5ace8ea93f8d2a3741f4d267954e2ad37e1b3a39 Binary files /dev/null and b/datasets/MNIST/raw/t10k-images-idx3-ubyte.gz differ diff --git a/datasets/MNIST/raw/t10k-labels-idx1-ubyte b/datasets/MNIST/raw/t10k-labels-idx1-ubyte new file mode 100644 index 0000000000000000000000000000000000000000..d1c3a970612bbd2df47a3c0697f82bd394abc450 Binary files /dev/null and b/datasets/MNIST/raw/t10k-labels-idx1-ubyte differ diff --git a/datasets/MNIST/raw/t10k-labels-idx1-ubyte.gz b/datasets/MNIST/raw/t10k-labels-idx1-ubyte.gz new file mode 100644 index 0000000000000000000000000000000000000000..a7e141541c1d08d3f2ed01eae03e644f9e2fd0c5 Binary files /dev/null and b/datasets/MNIST/raw/t10k-labels-idx1-ubyte.gz differ diff --git a/datasets/MNIST/raw/train-images-idx3-ubyte b/datasets/MNIST/raw/train-images-idx3-ubyte new file mode 100644 index 0000000000000000000000000000000000000000..bbce27659e0fc2b7ed2a64c127849380a477099b Binary files /dev/null and b/datasets/MNIST/raw/train-images-idx3-ubyte differ diff --git a/datasets/MNIST/raw/train-images-idx3-ubyte.gz b/datasets/MNIST/raw/train-images-idx3-ubyte.gz new file mode 100644 index 0000000000000000000000000000000000000000..b50e4b6bccdebde3d57f575c7fbeb24bec277f10 Binary 
files /dev/null and b/datasets/MNIST/raw/train-images-idx3-ubyte.gz differ diff --git a/datasets/MNIST/raw/train-labels-idx1-ubyte b/datasets/MNIST/raw/train-labels-idx1-ubyte new file mode 100644 index 0000000000000000000000000000000000000000..d6b4c5db3b52063d543fb397aede09aba0dc5234 Binary files /dev/null and b/datasets/MNIST/raw/train-labels-idx1-ubyte differ diff --git a/datasets/MNIST/raw/train-labels-idx1-ubyte.gz b/datasets/MNIST/raw/train-labels-idx1-ubyte.gz new file mode 100644 index 0000000000000000000000000000000000000000..707a576bb523304d5b674de436c0779d77b7d480 Binary files /dev/null and b/datasets/MNIST/raw/train-labels-idx1-ubyte.gz differ
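As a starting point for further experiment 1 above (adapting the ViT to CIFAR), the sketch below loads CIFAR-10 with torchvision and instantiates the same ViT class for 3x32x32 colour images; the choice `n_patches=8` and the otherwise unchanged hyperparameters are assumptions to be tuned, not part of the TD statement.

```python
# Sketch for experiment 1: CIFAR-10 instead of MNIST.
from torch.utils.data import DataLoader
from torchvision.datasets import CIFAR10
from torchvision.transforms import ToTensor

transform = ToTensor()

train_set = CIFAR10(root="datasets", train=True, download=True, transform=transform)
test_set = CIFAR10(root="datasets", train=False, download=True, transform=transform)

train_loader = DataLoader(train_set, shuffle=True, batch_size=128)
test_loader = DataLoader(test_set, shuffle=False, batch_size=128)

# 32 is divisible by 8, so n_patches=8 gives 8*8 = 64 patches of 3*4*4 = 48 values each.
model = ViT((3, 32, 32), n_patches=8, n_blocks=2, hidden_d=8, n_heads=2, out_d=10).to(device)
```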