"**This code is intentionally not commented. It is your responsibility to add all the necessary comments to ensure your proper understanding of the code.**\n",
"**This code is intentionally not commented. It is your responsibility to add all the necessary comments to ensure your proper understanding of the code.**\n",
"\n",
"\n",
"You might frequently rely on [Hugging Face’s documentation](https://huggingface.co/docs).\n",
"\n",
"\n",
"\n",
"---\n",
"---\n",
"\n",
"\n",
...
...
%% Cell type:markdown id: tags:
### **_Deep Learning - BSc Data Science for Responsible Business - Centrale Lyon_**
2024-2025

Emmanuel Dellandréa
%% Cell type:markdown id: tags:
# Practical Session 7 – Large Language Models
The objective of this tutorial is to learn to work with LLMs for sentence generation and classification. The pretrained models and tokenizers will be obtained from the [Hugging Face platform](https://huggingface.co/).
This notebook contains 8 parts:
1. Using a Hugging Face text generation model
2. Using the Hugging Face Pipeline for text classification
3. Using a Pipeline with a specific Hugging Face model and tokenizer
4. Experimenting with models from Hugging Face
5. Training an LLM for sentence classification using the **Trainer** class
6. Fine-tuning an LLM with a custom head
7. Sharing a model on the Hugging Face platform
8. Further experiments
Before going further into the experiments, your work is to understand the provided code, which gives an overview of using LLMs with Hugging Face.
**This code is intentionally not commented. It is your responsibility to add all the necessary comments to ensure your proper understanding of the code.**
You might frequently rely on [Hugging Face’s documentation](https://huggingface.co/docs).
---
As the computation can be heavy, particularly during training, we encourage you to use a GPU. If your laptop is not equipped with one, you may use one of these remote Jupyter servers, where you can select execution on a GPU:
This server is accessible from within the campus network; from outside, you need to use a VPN. Before executing the notebook, select the "Python PyTorch" kernel to run on the GPU and have access to the PyTorch module.
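As a quick sanity check once the kernel is selected, the following sketch (not part of the session's code) reports whether PyTorch can see a GPU, falling back to the CPU if PyTorch is missing or no GPU is visible:

```python
# Detect whether a CUDA GPU is visible to PyTorch in the current kernel.
# If PyTorch is not installed, fall back to "cpu" rather than crashing.
try:
    import torch
    device = "cuda" if torch.cuda.is_available() else "cpu"
except ImportError:
    device = "cpu"

print(f"Computation will run on: {device}")
```

If this prints `cpu` on the remote server, double-check that the "Python PyTorch" kernel is selected before running the training cells.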