Predictive control & model-based reinforcement learning

Predictive control is ubiquitous in industry, with applications ranging from autonomous driving to large-scale interconnected power systems. The first half of the class will explore the connection between model-based reinforcement learning (RL) and predictive control for continuous-time problems.

The class will first recall basic ideas from approximate dynamic programming and model-based RL for systems with discrete states. Then, we will discuss optimal control problems for systems with continuous state and action spaces. Finally, for these continuous control problems, we will show how to learn from data the three components used in predictive control design (see the sketches after this list):

  1. a model which describes the evolution of the system;
  2. a safe set of states (and an associated control policy) from which the control task can be safely executed; and
  3. a value function which represents the cumulative closed-loop cost incurred from a given state in the safe set.
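
As a minimal illustration of the dynamic-programming ideas recalled in the first lectures, here is a sketch of value iteration for a small discrete MDP. It is written in Python/NumPy; the transition tensor P, reward matrix R, and discount gamma below are hypothetical placeholders, not course material.

    import numpy as np

    def value_iteration(P, R, gamma=0.95, tol=1e-8):
        # P[a, s, s2]: probability of moving from s to s2 under action a
        # R[s, a]:     expected reward for taking action a in state s
        n_states = P.shape[1]
        V = np.zeros(n_states)
        while True:
            # Bellman backup: Q[s, a] = R[s, a] + gamma * E[V(next state)]
            Q = R + gamma * np.einsum("asn,n->sa", P, V)
            V_new = Q.max(axis=1)
            if np.max(np.abs(V_new - V)) < tol:
                return V_new, Q.argmax(axis=1)  # optimal values, greedy policy
            V = V_new

    # Hypothetical 2-state, 2-action MDP
    P = np.array([[[0.9, 0.1], [0.2, 0.8]],
                  [[0.5, 0.5], [0.0, 1.0]]])
    R = np.array([[1.0, 0.0],
                  [0.0, 2.0]])
    V, policy = value_iteration(P, R)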
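
For the continuous setting, the sketch below shows one way the three components listed above could be estimated from recorded closed-loop data. The arrays states, inputs, and stage_costs are hypothetical, and the linear least-squares model and sampled safe set are simplifying assumptions rather than the specific algorithms covered in lecture.

    import numpy as np

    def learn_components(states, inputs, stage_costs):
        # states:      (T+1, n) visited states x_0, ..., x_T
        # inputs:      (T, m)   applied inputs u_0, ..., u_{T-1}
        # stage_costs: (T,)     incurred stage costs
        n = states.shape[1]
        X, U, X_next = states[:-1], inputs, states[1:]

        # 1. Model: fit x_{k+1} ~= A x_k + B u_k by least squares.
        Z = np.hstack([X, U])
        Theta, *_ = np.linalg.lstsq(Z, X_next, rcond=None)
        A, B = Theta[:n].T, Theta[n:].T

        # 2. Safe set: states from which the task was completed in
        #    closed loop -- here, simply the set of visited states.
        safe_set = states

        # 3. Value function: cost-to-go along the trajectory; value[k]
        #    is the cumulative cost incurred from safe_set[k] onward.
        value = np.concatenate([np.cumsum(stage_costs[::-1])[::-1], [0.0]])

        return (A, B), safe_set, value

In learning MPC schemes of this flavor, the sampled safe set can serve as a terminal constraint and the tabulated cost-to-go as a terminal cost for the finite-horizon controller.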

Instructor

This part of the course will be taught by Ugo Rosolia (urosolia@caltech.edu).

Homeworks

#   Date set   Date due   Resources
1   4/01       4/08       hw1.zip
2   4/08       4/15       hw2.zip

Lectures

#    Date   Subject                                        Resources
     Main Lectures
1    3/30   Discrete MDPs                                  pdf / vid
2    4/01   Optimal Control                                pdf / vid
3    4/06   Model Predictive Control                       pdf / vid
4    4/08   Learning MPC                                   pdf / vid / supp
5    4/13   Model Learning in MPC                          pdf / vid
6    4/15   Planning Under Uncertainty and Project Ideas   pdf / vid
     Guest Lectures
13   5/11   Joe Marino                                     pdf / vid
16   5/20   Guanya Shi                                     pdf / vid
17   5/25   Roberto Calandra                               pdf / vid

Reading material

Discrete MDPs:

Model Predictive Control:

Iterative Learning MPC:

Learning for control:

Deep model-based RL:

Robust planning in MPC: