diff --git a/TD2_Deep_Learning.ipynb b/TD2_Deep_Learning.ipynb
index 29ef64ca6e00520af8ad33061400e51ce95613ad..82c52d57dc571ce279aa8ea15f5de3f9e50bb717 100644
--- a/TD2_Deep_Learning.ipynb
+++ b/TD2_Deep_Learning.ipynb
@@ -5,7 +5,7 @@
 "id": "7edf7168",
 "metadata": {},
 "source": [
- "# TD2: Deep learning"
+ "# Implementation of CNN-based AI Algorithms for Image Classification"
 ]
 },
 {
@@ -13,14 +13,8 @@
 "id": "fbb8c8df",
 "metadata": {},
 "source": [
- "In this TD, you must modify this notebook to answer the questions. To do this,\n",
- "\n",
- "1. Fork this repository\n",
- "2. Clone your forked repository on your local computer\n",
- "3. Answer the questions\n",
- "4. Commit and push regularly\n",
- "\n",
- "The last commit is due on Sunday, December 1, 11:59 PM. Later commits will not be taken into account."
+ "Dataset used to train the model: CIFAR-10:\n",
+ "https://www.cs.toronto.edu/~kriz/cifar.html"
 ]
 },
 {
@@ -160,9 +154,9 @@
 "id": "23f266da",
 "metadata": {},
 "source": [
- "## Exercise 1: CNN on CIFAR10\n",
+ "## Part 1: CNN on CIFAR10\n",
 "\n",
- "The goal is to apply a Convolutional Neural Net (CNN) model on the CIFAR10 image dataset and test the accuracy of the model on the basis of image classification. Compare the Accuracy VS the neural network implemented during TD1.\n",
+ "The goal is to apply a Convolutional Neural Net (CNN) model to the CIFAR10 image dataset and test the accuracy of the model on the image classification task.\n",
 "\n",
 "Have a look at the following documentation to be familiar with PyTorch.\n",
 "\n",
@@ -959,7 +953,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- " ## Exercise 2: Quantization: try to compress the CNN to save space\n",
+ " ## Part 2: Quantization: try to compress the CNN to save space\n",
 " \n",
 " Quantization doc is available from https://pytorch.org/docs/stable/quantization.html#torch.quantization.quantize_dynamicThe Exercise is to quantize post training the above CNN model. Compare the size reduction and the impact on the classification accuracy The size of the model is simply the size of the file."
 ]
 },
 {
@@ -1246,7 +1240,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- " ## Exercise 3: working with pre-trained models."
+ " ## Part 3: working with pre-trained models."
 ]
 },
 {
@@ -2134,7 +2128,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "## Exercise 4: Transfer Learning"
+ "## Final part: Transfer Learning"
 ]
 },
 {
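
For reference, Part 1 and the new dataset note point at CIFAR-10; the cells behind that part would typically start by loading the dataset through torchvision. The snippet below is a minimal sketch of that step, not the notebook's actual code: the data path, batch size, and normalization constants are illustrative assumptions.

```python
import torch
import torchvision
import torchvision.transforms as transforms

# Convert images to tensors and normalize each RGB channel.
# These normalization values are common defaults, not taken from the notebook.
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])

# CIFAR-10: 50,000 training and 10,000 test images (32x32 RGB, 10 classes).
train_set = torchvision.datasets.CIFAR10(root="./data", train=True,
                                         download=True, transform=transform)
test_set = torchvision.datasets.CIFAR10(root="./data", train=False,
                                        download=True, transform=transform)

train_loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=64, shuffle=False)

print(len(train_set), "training images,", len(test_set), "test images")
```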
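
Part 2 asks for post-training dynamic quantization with torch.quantization.quantize_dynamic and a comparison of model file sizes. The sketch below shows one way to do that under stated assumptions: TinyCNN is only a stand-in for the CNN trained in Part 1, and file_size_bytes and the temporary path are illustrative helpers, not names from the notebook.

```python
import os
import torch
import torch.nn as nn

# Stand-in for the CNN trained in Part 1 (an assumption, not the notebook's model).
class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.fc = nn.Linear(16 * 32 * 32, 10)

    def forward(self, x):
        x = torch.relu(self.conv(x))
        return self.fc(x.flatten(1))

model = TinyCNN()

# Post-training dynamic quantization: weights of the listed layer types
# (here nn.Linear) are stored as int8, while convolutions stay in float32.
quantized_model = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def file_size_bytes(m, path="tmp_model.pt"):
    # Save the state dict to disk and report the file size,
    # since the exercise defines model size as the size of the file.
    torch.save(m.state_dict(), path)
    size = os.path.getsize(path)
    os.remove(path)
    return size

print("float32 model:", file_size_bytes(model), "bytes")
print("quantized model:", file_size_bytes(quantized_model), "bytes")
```

Accuracy before and after quantization would then be measured with the same test loop used in Part 1, so the size reduction can be weighed against any drop in classification accuracy.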