# MSO 3.4 Apprentissage Automatique

Due to local installation issues, the code was run on the Google Colab platform (Google Cloud).
In this hands-on project, we will first implement a simple RL algorithm and apply it to solve the CartPole-v1 environment. Once we become familiar with the basic workflow, we will learn to use various tools for machine learning model training, monitoring, and sharing, by applying these tools to train a robotic arm.
The notebook shows how the code was run; you can open it to review the steps taken. The earlier technical error has been fixed.
# REINFORCE algorithm
## To be handed in
To implement the REINFORCE algorithm on the CartPole environment, two Python modules are used to create instances of two key elements (a short sketch of how they fit together follows their descriptions below):
This work must be done individually. The expected output is a repository named `hands-on-rl` on https://gitlab.ec-lyon.fr.
We assume that `git` is installed, and that you are familiar with the basic `git` commands. (Optionally, you can use GitHub Desktop.)
We also assume that you have access to the [ECL GitLab](https://gitlab.ec-lyon.fr/). If necessary, please consult [this tutorial](https://gitlab.ec-lyon.fr/edelland/inf_tc2/-/blob/main/Tutoriel_gitlab/tutoriel_gitlab.md).
gym: provides (and loads) the environment, in this case CartPole-v1 (described in more detail in the following exercise). Gym is a standard API for reinforcement learning with a diverse collection of reference environments, developed by OpenAI.
Your repository must contain a `README.md` file that explains **briefly** the successive steps of the project. It must be private, so you need to add your teacher as "developer" member.
torch: used to build the neural network that serves as the policy for selecting actions in the CartPole environment; the REINFORCE algorithm updates this policy's parameters. PyTorch integrates seamlessly with CUDA, which enabled GPU-accelerated computation. It is a very extensive machine learning framework, originally developed by Meta AI and now under the Linux Foundation umbrella.
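A minimal sketch of how these two elements fit together (layer sizes and variable names are illustrative; the full training script appears later in this repository):

```python
import gymnasium as gym
import torch
import torch.nn as nn

# Element 1: the CartPole environment instance provided by gym
env = gym.make("CartPole-v1")

# Element 2: a small torch policy network mapping observations to action probabilities
policy = nn.Sequential(
    nn.Linear(env.observation_space.shape[0], 128),
    nn.ReLU(),
    nn.Linear(128, env.action_space.n),
    nn.Softmax(dim=-1),
)

observation, info = env.reset()
action_probabilities = policy(torch.tensor(observation, dtype=torch.float32))
print(action_probabilities)  # one probability per action
```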
Throughout the subject, you will find a 🛠 symbol indicating that a specific production is expected.
The file LOSS.png shows how the policy loss evolves over the iterations. The loss oscillates considerably during optimization, with noisy and rapid variations; towards the end of the process it appears to decrease significantly, since its peaks there are the lowest of the whole run.
The last commit is due before 11:59 pm on March 5, 2024. Subsequent commits will not be considered.
# Advantage Actor-Critic (A2C) algorithm
> ⚠️ **Warning**
> Ensure that you only commit the files that are requested. For example, your directory should not contain the generated `.zip` files, nor the `runs` folder... At the end, your repository must contain one `README.md`, three python scripts, and optionally image files for the plots.
To explore the A2C algorithm, the Stable-Baselines3 module is used. It provides implementations of state-of-the-art reinforcement learning (RL) algorithms, including DQN, A2C, and PPO. It also has built-in support for parallel (vectorized) environments and GPU acceleration via CUDA, which helps speed up training.
## Before you start
Link to the trained model: https://huggingface.co/Karim-20/a2c_cartpole/blob/main/ECL-TD-RL1-a2c_cartpole.zip
Make sure you know the basics of Reinforcement Learning. In case of need, you can refer to the [introduction of the Hugging Face RL course](https://huggingface.co/blog/deep-rl-intro).
# PandaReachJointsDense-v2
## Introduction to Gym
This task requires the panda-gym module, a set of Reinforcement Learning (RL) environments for the Franka Emika Panda robot, integrated with OpenAI Gym. It is a continuous control task whose goal is to reach a target position with the end-effector of the robot arm. The state of the environment includes the joint angles and velocities of the robot arm, as well as the position of the end-effector.
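The environment described above can be instantiated and inspected as follows (a minimal sketch, assuming panda-gym 3.x, which registers the `PandaReachJointsDense-v3` id used later in this project):

```python
import gymnasium as gym
import panda_gym  # importing panda_gym registers the Panda environments

env = gym.make("PandaReachJointsDense-v3")

# The observation is a dictionary combining the robot/task state with the goals
print(env.observation_space)  # Dict with 'observation', 'achieved_goal', 'desired_goal'
print(env.action_space)       # Box: one continuous command per controlled joint

observation, info = env.reset()
print(observation["observation"].shape, observation["desired_goal"])
env.close()
```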
[Gym](https://gymnasium.farama.org/) is a framework for developing and evaluating reinforcement learning environments. It offers various environments, including classic control and toy text scenarios, to test RL algorithms.
Link to the wandb training run: https://wandb.ai/aiblackbelt/sb3-panda-reach/runs/ihcoeovn?workspace=user-aiblackbelt
### Installation
We recommend using a Python virtual environment to install the required modules: https://docs.python.org/3/library/venv.html
First, install PyTorch: https://pytorch.org/get-started/locally.
Then install the following modules:
```sh
pip install gym==0.26.2
```
Also install pyglet for rendering.
```sh
pip install pyglet==2.0.10
```
If needed:
```sh
pip install pygame==2.5.2
```
```sh
pip install PyQt5
```
### Usage
Here is an example of how to use Gym to solve the `CartPole-v1` environment [Documentation](https://gymnasium.farama.org/environments/classic_control/cart_pole/):
```python
import gym

# Create the environment
env = gym.make("CartPole-v1", render_mode="human")

# Reset the environment and get the initial observation
observation, info = env.reset()

for _ in range(100):
    # Select a random action from the action space
    action = env.action_space.sample()
    # Apply the action to the environment
    # Returns the next observation, the reward, terminated and truncated flags
    # (indicating whether the episode has ended), and an additional info dictionary
    observation, reward, terminated, truncated, info = env.step(action)
    # Render the environment to visualize the agent's behavior
    env.render()
    if terminated:
        # Terminated before reaching the maximum number of steps
        break

env.close()
```
## REINFORCE
The REINFORCE algorithm (also known as Vanilla Policy Gradient) is a policy gradient method that optimizes the policy directly using gradient descent. The following is the pseudocode of the REINFORCE algorithm:
```txt
Setup the CartPole environment
Setup the agent as a simple neural network with:
    - One fully connected layer with 128 units and ReLU activation followed by a dropout layer
    - One fully connected layer followed by softmax activation
Repeat 500 times:
    Reset the environment
    Reset the buffer
    Repeat until the end of the episode:
        Compute action probabilities
        Sample the action based on the probabilities and store its probability in the buffer
        Step the environment with the action
        Compute and store in the buffer the return using gamma=0.99
    Normalize the return
    Compute the policy loss as -sum(log(prob) * return)
    Update the policy using an Adam optimizer and a learning rate of 5e-3
```
To learn more about REINFORCE, you can refer to [this unit](https://huggingface.co/learn/deep-rl-course/unit4/introduction).
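The return computation and normalization steps of the pseudocode can be sketched as follows (an illustration only, with placeholder values, not the required implementation):

```python
import torch

gamma = 0.99
rewards = [1.0, 1.0, 1.0, 1.0]   # example per-step rewards collected during one episode
log_probs = [torch.tensor(-0.7, requires_grad=True) for _ in rewards]  # placeholder log-probabilities

# Discounted returns, computed backwards: G_t = r_t + gamma * G_{t+1}
returns = []
G = 0.0
for r in reversed(rewards):
    G = r + gamma * G
    returns.insert(0, G)

# Normalize the returns to stabilize the gradient estimate
returns = torch.tensor(returns)
returns = (returns - returns.mean()) / (returns.std() + 1e-8)

# Policy loss as in the pseudocode: -sum(log(prob) * return)
policy_loss = -(torch.stack(log_probs) * returns).sum()
```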
> 🛠 **To be handed in**
> Use PyTorch to implement REINFORCE and solve the CartPole environment. Share the code in `reinforce_cartpole.py`, and share a plot showing the total reward across episodes in the `README.md`.
## Familiarization with a complete RL pipeline: Application to training a robotic arm
In this section, you will use the Stable-Baselines3 package to train a robotic arm using RL. You'll get familiar with several widely-used tools for training, monitoring and sharing machine learning models.
### Get familiar with Stable-Baselines3
Stable-Baselines3 (SB3) is a high-level RL library that provides various algorithms and integrated tools to easily train and test reinforcement learning models.
#### Installation
```sh
pip install stable-baselines3
pip install moviepy
```
#### Usage
Use the [Stable-Baselines3 documentation](https://stable-baselines3.readthedocs.io/en/master/) to implement the code to solve the CartPole environment with the Advantage Actor-Critic (A2C) algorithm.
> 🛠 **To be handed in**
> Store the code in `a2c_sb3_cartpole.py`. Unless otherwise stated, you'll work upon this file for the next sections.
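For reference, a minimal sketch of what `a2c_sb3_cartpole.py` could start from (the number of timesteps is illustrative; refer to the SB3 documentation for the details):

```python
import gymnasium as gym
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# Create the CartPole environment and the A2C agent
env = gym.make("CartPole-v1")
model = A2C("MlpPolicy", env, verbose=1)

# Train for an illustrative number of timesteps
model.learn(total_timesteps=10_000)

# Evaluate the trained agent on a separate environment
eval_env = gym.make("CartPole-v1")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=100)
print(f"mean_reward: {mean_reward:.2f} +/- {std_reward:.2f}")
```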
### Get familiar with Hugging Face Hub
Hugging Face Hub is a platform for easy sharing and versioning of trained machine learning models. With Hugging Face Hub, you can quickly and easily share your models with others and make them usable through the API. For example, see the trained A2C agent for CartPole: https://huggingface.co/sb3/a2c-CartPole-v1. Hugging Face Hub provides an API to download and upload SB3 models.
#### Installation of `huggingface_sb3`
```sh
pip install huggingface-sb3==2.3.1
```
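As a quick check that the package works, downloading an existing model from the Hub can be sketched like this (the `repo_id` and `filename` point to the public `sb3/a2c-CartPole-v1` repository mentioned above and are illustrative):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download a checkpoint from the Hub (illustrative repo_id / filename)
checkpoint = load_from_hub(
    repo_id="sb3/a2c-CartPole-v1",
    filename="a2c-CartPole-v1.zip",
)
model = A2C.load(checkpoint)
```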
#### Upload the model on the Hub
Follow the [Hugging Face Hub documentation](https://huggingface.co/docs/hub/stable-baselines3) to upload the previously learned model to the Hub.
> 🛠 **To be handed in**
> Link the trained model in the `README.md` file.
> 📝 **Note**
> [RL-Zoo3](https://stable-baselines3.readthedocs.io/en/master/guide/rl_zoo.html) provides more advanced features to save hyperparameters, generate renderings and metrics. Feel free to try them.
### Get familiar with Weights & Biases
Weights & Biases (W&B) is a tool for machine learning experiment management. With W&B, you can track and compare your experiments, visualize your model training and performance.
#### Installation
You'll need to install both `wandb` and `tensorboard`.
```shell
pip install wandb tensorboard
```
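Outside of the SB3 integration described next, basic W&B usage looks like this (the project name and logged metric are placeholders):

```python
import wandb

# Start a run; the project name is a placeholder
run = wandb.init(project="demo-project", config={"learning_rate": 5e-3})

# Log metrics during training; they appear live in the W&B dashboard
for step in range(10):
    wandb.log({"episode_reward": float(step)}, step=step)

run.finish()
```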
Use the documentation of [Stable-Baselines3](https://stable-baselines3.readthedocs.io/en/master/) and [Weights & Biases](https://docs.wandb.ai/guides/integrations/stable-baselines-3) to track the CartPole training. Make the run public.
🛠 Share the link of the wandb run in the `README.md` file.
> ⚠️ **Warning**
> Make sure to make the run public!
### Full workflow with panda-gym
[Panda-gym](https://github.com/qgallouedec/panda-gym) is a collection of environments for robotic simulation and control. It provides a range of challenges for training robotic agents in a simulated environment. In this section, you will get familiar with one of the environments provided by panda-gym, the `PandaReachJointsDense-v3`. The objective is to learn how to reach any point in 3D space by directly controlling the robot's articulations.
#### Installation
```shell
pip install panda-gym==3.0.7
```
#### Train, track, and share
Use the Stable-Baselines3 package to train an A2C model on the `PandaReachJointsDense-v3` environment. 500k timesteps should be enough. Track the training with Weights & Biases. Once the training is over, upload the trained model on the Hub.
> 🛠 **To be handed in**
> Share all the code in `a2c_sb3_panda_reach.py`. Share the link of the wandb run and the trained model in the `README.md` file.
## Contribute
This tutorial may contain errors, inaccuracies, typos or areas for improvement. Feel free to contribute to its improvement by opening an issue.
## Author
Quentin Gallouédec
Updates by Léo Schneider, Emmanuel Dellandréa
## License
MIT
Link to the trained model: https://huggingface.co/Karim-20/a2c_cartpole/blob/main/ECL-TD-RL1-a2c_panda_reach.zip
# a2c_sb3_cartpole.py: train A2C on CartPole-v1 with Stable-Baselines3 and push the model to the Hugging Face Hub
import gymnasium as gym
import numpy as np
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3 import A2C
from huggingface_sb3 import push_to_hub
from huggingface_hub import login

print(f"{gym.__version__=}")

env = gym.make("CartPole-v1", render_mode="rgb_array")
model = A2C("MlpPolicy", env, verbose=1)


def evaluate(model, num_episodes=100, deterministic=True):
    """Evaluate the model by averaging the total reward over num_episodes episodes."""
    vec_env = model.get_env()
    all_episode_rewards = []
    for i in range(num_episodes):
        episode_rewards = []
        done = False
        obs = vec_env.reset()
        while not done:
            # _states are only useful when using LSTM policies
            action, _states = model.predict(obs, deterministic=deterministic)
            # here, action, reward and done are arrays,
            # and the VecEnv step returns a 4-tuple (obs, reward, done, info)
            obs, reward, done, info = vec_env.step(action)
            episode_rewards.append(reward)
        all_episode_rewards.append(sum(episode_rewards))
    mean_episode_reward = np.mean(all_episode_rewards)
    print("Mean reward:", mean_episode_reward, "Num episodes:", num_episodes)
    return mean_episode_reward


# Use a separate environment for evaluation
eval_env = gym.make("CartPole-v1", render_mode="rgb_array")

# Train the agent for 10,000 steps
model.learn(total_timesteps=10_000)

# Evaluate the trained agent
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=100)
print(f"mean_reward:{mean_reward:.2f} +/- {std_reward:.2f}")

login(token="****************")

# Save the trained model
model.save("ECL-TD-RL1-a2c_cartpole.zip")
# Load the trained model
model = A2C.load("ECL-TD-RL1-a2c_cartpole.zip")

# Push the saved model to the Hugging Face Hub
push_to_hub(
    repo_id="Karim-20/a2c_cartpole",
    filename="ECL-TD-RL1-a2c_cartpole.zip",
    commit_message="Add CartPole-v1 environment, agent trained with A2C",
)
# a2c_sb3_panda_reach.py: train A2C on the PandaReach environment, track the run with W&B, push the model to the Hub
### LIBRARIES
import gymnasium as gym
from stable_baselines3 import A2C
from stable_baselines3.common.monitor import Monitor
from stable_baselines3.common.vec_env import DummyVecEnv, VecVideoRecorder
import wandb
from wandb.integration.sb3 import WandbCallback
from huggingface_sb3 import push_to_hub
import panda_gym
import os
from huggingface_hub import login

# dir_path = os.path.dirname(os.path.realpath(__file__))
# os.chdir(dir_path)

config = {
    "policy_type": "MultiInputPolicy",
    "total_timesteps": 250000,
    "env_name": "PandaReachJointsDense-v3",
}

run = wandb.init(
    project="sb3-panda-reach",
    config=config,
    sync_tensorboard=True,  # auto-upload sb3's tensorboard metrics
    monitor_gym=True,  # auto-upload the videos of agents playing the game
    save_code=True,  # optional
)


def make_env():
    env = gym.make(config["env_name"])
    env = Monitor(env)  # record stats such as returns
    return env


env = DummyVecEnv([make_env])
# env = VecVideoRecorder(env, f"videos/{run.id}", record_video_trigger=lambda x: x % 2000 == 0, video_length=200)

model = A2C(config["policy_type"], env, verbose=1, tensorboard_log=f"runs/{run.id}")
model.learn(
    total_timesteps=config["total_timesteps"],
    callback=WandbCallback(
        gradient_save_freq=100,
        model_save_path=f"models/{run.id}",
        verbose=2,
    ),
)
run.finish()

login(token="*********")

# Save the trained model
model.save("ECL-TD-RL1-a2c_panda_reach.zip")
# Load the trained model
model = A2C.load("ECL-TD-RL1-a2c_panda_reach.zip")

# Push the saved model to the Hugging Face Hub
push_to_hub(
    repo_id="Karim-20/a2c_cartpole",
    filename="ECL-TD-RL1-a2c_panda_reach.zip",
    commit_message="Add PandaReachJointsDense-v2 environment, agent trained with A2C",
)
# reinforce_cartpole.py: REINFORCE (vanilla policy gradient) on CartPole-v1
import gymnasium as gym
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.distributions import Categorical
import matplotlib.pyplot as plt

# Create the environment
env = gym.make("CartPole-v1", render_mode="human")

# Reset the environment and get the initial observation
observation, info = env.reset()

state_size = env.observation_space.shape[0]
action_size = env.action_space.n


# Define the agent neural network model
class Policy(nn.Module):
    def __init__(self, state_size, action_size, hidden_size=128):
        super(Policy, self).__init__()
        self.fc1 = nn.Linear(state_size, hidden_size)
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(p=0.6)  # Adjust dropout probability as needed
        self.fc2 = nn.Linear(hidden_size, action_size)

    def forward(self, x):
        x = self.fc1(x)
        x = self.relu(x)
        x = self.dropout(x)
        x = self.fc2(x)
        return F.softmax(x, dim=-1)


policy_model = Policy(state_size, action_size)
optimizer = optim.Adam(policy_model.parameters(), lr=5e-3)
gamma = 0.99

policy_losses = []
for i in range(500):
    # Reset the environment and the episode buffers (fixed seed for reproducibility)
    observation, info = env.reset(seed=42)
    episode_rewards = []
    log_probabilities = []
    terminated, truncated = False, False
    # Render the environment to visualize the agent's behavior
    env.render()
    while not (terminated or truncated):
        # Get action probabilities from the policy model
        action_probabilities = policy_model(torch.tensor(observation, dtype=torch.float32))
        action_distribution = Categorical(action_probabilities)
        # Sample an action from the action distribution and store its log-probability
        action = action_distribution.sample()
        log_probability = action_distribution.log_prob(action)
        log_probabilities.append(log_probability)
        # Take a step in the environment
        observation, reward, terminated, truncated, info = env.step(action.item())
        episode_rewards.append(reward)

    # Compute the discounted returns for the episode
    returns = []
    R = 0
    for r in reversed(episode_rewards):
        R = r + gamma * R
        returns.insert(0, R)

    # Compute the policy loss as -sum(log(prob) * return)
    # (torch.stack keeps the computation graph intact so backward() works)
    policy_loss = torch.stack(
        [-log_prob * R for log_prob, R in zip(log_probabilities, returns)]
    ).sum()
    policy_losses.append(policy_loss.item())

    # Update the policy model
    optimizer.zero_grad()
    policy_loss.backward()
    optimizer.step()

env.close()

# Plot the policy loss against iterations
plt.plot(range(500), policy_losses)
plt.xlabel('Iterations')
plt.ylabel('Policy Loss')
plt.title('Policy Loss vs. Iterations')
plt.show()