Underactuated Robotics

Algorithms for Walking, Running, Swimming, Flying, and Manipulation

Russ Tedrake

© Russ Tedrake, 2022

Note: These are working notes used for a course being taught at MIT. They will be updated throughout the Spring 2022 semester. Lecture videos are available on YouTube.


Output Feedback (aka Pixels-to-Torques)

In this chapter we will finally start considering systems of the form: \begin{gather*} \bx[n+1] = {\bf f}(\bx[n], \bu[n], \bw[n]) \\ \by[n] = {\bf g}(\bx[n], \bu[n], \bv[n]),\end{gather*} where most of these symbols have been described before, but we have now added $\by[n]$ as the output of the system, and $\bv[n]$, which represents "measurement noise" and is typically the output of a random process. In other words, we'll finally start addressing the fact that we have to make decisions based on sensor measurements -- most of our discussions until now have tacitly assumed that we have access to the true state of the system for use in our feedback controllers (and that's already been a hard problem).
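To make the setup concrete, here is a minimal numpy sketch of a linear-Gaussian instance of these equations; the plant matrices, noise scales, and the placeholder static policy below are made-up illustrative values, not anything prescribed by the text. The key point is simply the information structure: the controller only ever sees $\by[n]$, never $\bx[n]$.

import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.1], [0.0, 0.95]])   # made-up, lightly damped plant
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])                # we only measure the first state

x = np.array([1.0, 0.0])
for n in range(100):
    v = 0.01 * rng.standard_normal(1)     # measurement noise
    y = C @ x + v                         # y[n] = g(x[n], u[n], v[n]) (no feedthrough here)
    u = -1.0 * y                          # placeholder static policy: sees only y[n], never x[n]
    w = 0.01 * rng.standard_normal(2)     # process noise
    x = A @ x + B @ u + w                 # x[n+1] = f(x[n], u[n], w[n])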

In some cases, we will see that the assumption of "full-state feedback" is not so bad -- we do have good tools for state estimation from raw sensor data. But even our best state estimation algorithms do add some dynamics to the system in order to filter out noisy measurements; if the time constants of these filters are near the time constants of our dynamics, then it becomes important that we include the dynamics of the estimator in our analysis of the closed-loop system.

In other cases, it's entirely too optimistic to design a controller assuming that we will have an estimate of the full state of the system. Some state variables might be completely unobservable; others might require specific "information-gathering" actions on the part of the controller.

For me, the problem of robot manipulation is one important application domain where more flexible approaches to output feedback become critically important. Imagine you are trying to design a controller for a robot that needs to button the buttons on your shirt. Our current tools would require us to first estimate the state of the shirt (how many degrees of freedom does my shirt have?); but certainly the full state of my shirt should not be required to button a single button. Or if you want to program a robot to make a salad -- what's the state of the salad? Do I really need to know the positions and velocities of every piece of lettuce in order to be successful? These questions are (finally) getting a lot of attention in the research community these days, under the umbrella of "learning state representations". But what does it mean to be a good state representation? There are a number of simple lessons from output feedback in control that can shed light on this fundamental question.

Background

The classical perspective

To some extent, this idea of calling out "output feedback" as an advanced topic is relatively new. Before state-space and optimization-based approaches to control ushered in "modern control", we had "classical control". Classical control focused predominantly (though not exclusively) on linear time-invariant (LTI) systems, and made very heavy use of frequency-domain analysis (e.g. via the Fourier Transform/Laplace Transform). There are many excellent books on the subject; Hespanha09 and Astrom10 are nice examples of modern treatments that start with state-space representations but also treat the frequency-domain perspective. "Pole placement" and "loop shaping" are some of the tools of this trade.

What's important for us to acknowledge here is that in classical control, basically everything was built around the idea of output feedback. The fundamental concept is the transfer function of a system, which is an input-to-output map (in frequency domain) that can completely characterize an LTI system. Core concepts like pole placement and loop shaping were fundamentally addressing the challenge of output feedback that we are discussing here. Sometimes I worry that, despite all of the things we've gained with modern, optimization-based control, we've lost something in terms of considering rich characterizations of closed-loop performance (rise time, dwell time, overshoot, ...) and perhaps even in practical robustness of our systems to unmodeled errors.
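As a small illustration of the transfer function as the input-to-output map, $G(s) = \bC(s{\bf I} - \bA)^{-1}\bB + {\bf D}$, here is a quick scipy sketch; the mass-spring-damper numbers are made up for the example.

import numpy as np
from scipy import signal

# Made-up mass-spring-damper: m qddot + b qdot + k q = u, with y = q.
m, b, k = 1.0, 0.5, 2.0
A = np.array([[0.0, 1.0], [-k / m, -b / m]])
B = np.array([[0.0], [1.0 / m]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# G(s) = C (sI - A)^{-1} B + D is the input-to-output map in the frequency domain.
num, den = signal.ss2tf(A, B, C, D)
print(num, den)   # here G(s) = 1 / (s^2 + 0.5 s + 2)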

Add a few examples here that capture it.

From pixels to torques

Just like some of our oldest approaches to control were fundamentally solving an output feedback problem, some of our newest approaches to control are doing it, too. Deep learning has revolutionized computer vision, and "deep imitation learning" and "deep reinforcement learning" have been a recent source of many impressive demonstrations of control systems that can operate directly from pixels (e.g. consuming the output of a deep perception system), without explicitly representing or estimating the full state of the system. Unfortunately, the success or failure of these methods is not yet well understood, and they often require a great deal of artisanal tuning and an embarrassing (sometimes prohibitive) amount of computation.

The synthesis of ideas between machine learning (both theoretical and applied) and control theory is one of the most exciting and productive frontiers for research today. I am highly optimistic that we will be able to uncover the underlying principles and help transition this budding field into a technology. I hope that summarizing some of the key lessons from control here can help.

Static Output Feedback

One of the extremely important, but almost unstated, lessons from dynamic programming with additive costs and the Bellman equation is that the optimal policy can always be represented as a function $\bu^* = \pi^*(\bx).$ So far in these notes, we've assumed that the controller has direct access to the true state, $\bx$. In this chapter, we are finally removing that assumption. Now the controller only has direct access to the potentially noisy observations $\by$.

So the natural first question to ask might be, what happens if we write our policies now as a function, $\bu = \pi(\by)?$ This is known as "static" output feedback, in contrast to "dynamic" output feedback where the controller is not a static function, but is itself another input-output dynamical system. Unfortunately, in general, optimal policies cannot be perfectly represented with static output feedback. But one can still try to solve an optimal control problem where we restrict our search to static policies; our goal will be to find the best controller in this class to minimize the cost.

A hardness result

We've already seen an example of a very simple linear control problem where the set of stabilizing feedback gains formed a disconnected set -- which is suggestive that it could be a difficult problem for optimization. For some other problems in control, we've been able to find a convex reparametrization.

Unfortunately, Blondel97 showed that the question of whether a stabilizing static output feedback $\bu = -\bK \by$ even exists for a given system of the form $$\dot\bx = \bA\bx + \bB\bu,\quad \by = \bC \bx,$$ is NP-hard. Many of the strongest results from $H_2$ and $H_\infty$ design, for instance, are limited to dynamic controllers that can effectively reconstruct the entire state.

Just because this problem is NP-hard doesn't mean we can't find good controllers in practice. Some of the recent results from reinforcement learning have reminded us of this. We should not expect an efficient globally optimal algorithm that works for every problem instance; but we should absolutely keep working on the problem. Perhaps the class of problems that our robots will actually encounter in the real world is easier than this general case (the standard examples of bad cases in linear systems, e.g. with interleaved poles and zeros, do feel a bit contrived and unlikely to occur in practice).
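For instance, here is a tiny numpy sketch on a made-up plant (an undamped oscillator with only a velocity measurement), where even naive random search immediately finds a stabilizing static gain by checking the spectral abscissa of $\bA - \bB\bK\bC$. NP-hardness is a worst-case statement about the decision problem, not about every instance.

import numpy as np

# Made-up example: undamped oscillator, velocity measurement, u = -K y.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[0.0, 1.0]])

def spectral_abscissa(K):
    # max real part of the closed-loop eigenvalues of A - B K C
    return np.max(np.real(np.linalg.eigvals(A - B @ K @ C)))

rng = np.random.default_rng(0)
K = min((rng.uniform(-5, 5, size=(1, 1)) for _ in range(100)), key=spectral_abscissa)
print(K, spectral_abscissa(K))   # negative abscissa => stabilizing (any K > 0 works here)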

Via policy search

Searching for the best controller within a parametric class of policies is generally referred to as policy search. If we do policy search on a class of static output feedback policies, how well does it perform? Of course, the answer depends on the particular governing equations (for instance, $\by = \bx$ is a perfectly reasonable output, and in this case the policy can be optimal).
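As a minimal sketch of policy search over static output-feedback gains (the discrete-time plant, cost weights, and step sizes below are arbitrary illustrative choices, and the finite-difference descent is just a crude stand-in for gradient-based policy search), we can evaluate the LQR-style cost of $\bu = -\bK\by$ exactly with a discrete Lyapunov equation and descend it numerically.

import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Made-up discrete-time plant with a velocity-only measurement:
A = np.array([[1.0, 0.1], [-0.1, 0.9]])
B = np.array([[0.0], [0.1]])
C = np.array([[0.0, 1.0]])
Q, R = np.eye(2), np.eye(1)

def cost(K):
    # Expected LQR-style cost of the static policy u = -K y, for x[0] ~ N(0, I).
    Acl = A - B @ K @ C
    if np.max(np.abs(np.linalg.eigvals(Acl))) >= 1:
        return np.inf                        # unstable closed loop => infinite cost
    Qcl = Q + C.T @ K.T @ R @ K @ C
    P = solve_discrete_lyapunov(Acl.T, Qcl)  # P = Acl' P Acl + Qcl
    return np.trace(P)

# Crude policy search: finite-difference gradient descent with backtracking.
K, eps = np.zeros((1, 1)), 1e-4
for _ in range(100):
    grad = (cost(K + eps) - cost(K - eps)) / (2 * eps)
    step = 0.1
    while cost(K - step * grad) > cost(K):   # shrink the step until the cost improves
        step /= 2
    K = K - step * grad
print(K, cost(K))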

Jack had another nice example in his policy search lecture: https://youtu.be/JhjROrZxBhM?t=1099 and "what goes wrong in output feedback?" https://youtu.be/JhjROrZxBhM?t=5066

Bilinear alternations with SOS, Policy search with SGD, ...

Observer-based Feedback

Luenberger Observer

Linear Quadratic Regulator w/ Gaussian Noise (LQG)

Partially-observable Markov Decision Processes

Defer most of the discussion to the state estimation chapter

Trajectory optimization with Iterative LQG

Russ Tedrake: So we have three broad approaches to searching for linear dynamical controllers (for linear dynamical plants): 1) gradient descent in the original parameters; you have asymptotic convergence, but not a guaranteed rate. 2) Convex reparameterizations (e.g. from Scherer); here there is a rank condition that is trivially satisfied, and therefore disappears, when dim(xc) >= dim(x). 3) Youla/SLS, which in the time domain is a finite-time synthesis but can be used to recover the LDC to arbitrary precision (e.g. via Ho-Kalman). Is that a reasonable summary?

Jack: I think this is reasonable. I would probably add the Riccati solutions of the famous DGKF paper https://authors.library.caltech.edu/3087/1/DOYieeetac89.pdf (if you wanted "completeness"): 1) gradient descent in the original parameters, which has no suboptimal stationary points for minimal controllers; 2) finite-dimensional convex reparametrizations from Scherer, as you wrote; 3) the DGKF solution: solve two Riccati equations; 4) disturbance feedback (Youla/Q/SLS, etc.), where the synthesis problem is convex but the decision variables are infinite-dimensional (though stable) transfer functions, and various approximations can be used (the most common these days, for academic examples and learning theorists, is FIR). I would also be a bit cautious about recovering the LDC with Ho-Kalman. Nik Matni and James Anderson had a paper https://ieeexplore.ieee.org/abstract/document/8262844 about recovering state-space controllers from FIRs, in the SLS setting. According to James, I don't think they meant it as a serious paper (i.e. something that you would ever actually implement), just an investigation. I suppose I should re-read that paper, but there was no dimensionality reduction going on when recovering a state-space representation of the FIR.
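As a minimal sketch of the "solve two Riccati equations" (DGKF-style / LQG) construction mentioned above, here is a numpy/scipy fragment; the plant, cost weights, and noise covariances are made-up illustrative values.

import numpy as np
from scipy.linalg import solve_continuous_are

# Toy plant (made-up double integrator with a position measurement):
#   xdot = A x + B u + w,   y = C x + v
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q, R = np.eye(2), np.eye(1)        # state/input cost weights
W, V = np.eye(2), np.eye(1)        # process/measurement noise covariances

# "Solve two Riccati equations": one for the regulator, one for the filter.
S = solve_continuous_are(A, B, Q, R)          # control Riccati
K = np.linalg.solve(R, B.T @ S)               # u = -K xhat
P = solve_continuous_are(A.T, C.T, W, V)      # filter Riccati
L = P @ C.T @ np.linalg.inv(V)                # observer gain

# The resulting dynamic output-feedback controller is an observer plus state feedback:
#   xhat_dot = (A - B K - L C) xhat + L y,   u = -K xhat
Acl = np.block([[A, -B @ K], [L @ C, A - B @ K - L @ C]])
print(np.linalg.eigvals(Acl))      # separation: the union of eig(A - B K) and eig(A - L C)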

Disturbance-based feedback

State-space models. ARX Models.

Here is a simple extension of the LQR with least-squares derivation... (it's a work in progress!)

Given the state-space equations \begin{gather*} \bx[n+1] = \bA\bx[n] + \bB\bu[n] + \bw[n],\end{gather*} consider parametrizing an output feedback policy of the form $$\bu[n] = \bK_0[n] \bx_0 + \sum_{i=1}^{n-1}\bK_i[n]\bw[n-i].$$ Then the closed-loop state is affine in the control parameters, $\bK$: \begin{align*}\bx[n] =& \left( {\bf A}^n + \sum_{i=0}^{n-1}{\bf A}^{n-i-1}{\bf B}{\bf K}_0[i] \right) \bx_0 + \sum_{i=0}^{n-1}{\bf A}^{n-i-1}\bw[i] + \sum_{j=1}^{n-1} \sum_{i=j+1}^{n-1}{\bf A}^{n-i-1}{\bf B}{\bf K}_{j}[i] \bw[i-j],\end{align*} and therefore objectives that are convex in $\bx$ and $\bu$ (like LQR) are also convex in $\bK$. Moreover, we can calculate $\bw[n]$ by the time that it is needed, since $\bw[n] = \bx[n+1] - \bA\bx[n] - \bB\bu[n]$ can be computed from our observations of $\bx[n+1], \bx[n],$ and knowledge of $\bu[n].$
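Here is a small numpy sketch of the key mechanics; the plant and the gains below are arbitrary illustrative values (not an optimized controller). It checks that the disturbances the policy needs can always be reconstructed from the states already observed.

import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # made-up double-integrator-like plant
B = np.array([[0.0], [0.1]])
N = 10                                    # horizon

# Arbitrary disturbance-feedback gains K_0[n] (on x_0) and K_i[n] (on w[n-i]).
K0 = [0.1 * rng.standard_normal((1, 2)) for _ in range(N)]
Kw = [[0.1 * rng.standard_normal((1, 2)) for _ in range(N)] for _ in range(N)]

def rollout(x0, w):
    # Simulate u[n] = K0[n] x0 + sum_i Kw[i][n] w[n-i], reconstructing w online.
    x, w_hat = [x0], []
    for n in range(N):
        u = K0[n] @ x0 + sum(Kw[i][n] @ w_hat[n - i] for i in range(1, n))
        x_next = A @ x[n] + B @ u + w[n]
        # w[n] is available "by the time it is needed": recover it from x[n+1], x[n], u[n].
        w_hat.append(x_next - A @ x[n] - B @ u)
        x.append(x_next)
    return np.array(x), np.array(w_hat)

x0 = np.array([1.0, 0.0])
w = 0.01 * rng.standard_normal((N, 2))
xs, w_hat = rollout(x0, w)
assert np.allclose(w_hat, w)   # reconstruction is exact (no measurement noise in this model)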

We can extend this to output disturbance-based feedback: \begin{gather*} \bx[n+1] = \bA\bx[n] + \bB\bu[n] + \bw[n],\\ \by[n] = \bC\bx[n] + \bv[n], \end{gather*} by parametrizing an output feedback policy of the form $$\bu[n] = \bK_0[n] \by[0] + \sum_{i=1}^{n-1}\bK_i[n]{\bf e}[n-i],$$ where ${\bf e}[n] = ...$ Sadraddini20.

System-Level Synthesis

Task-relevant variables / learning state representations

Feedback from Pixels

Should I even mention teacher-student (as used by Marco Hutter/Pulkit)? Put it in context?

References

  1. Joao P. Hespanha, "Linear Systems Theory", Princeton University Press, 2009.

  2. Karl Johan Åström and Richard M. Murray, "Feedback Systems: An Introduction for Scientists and Engineers", Princeton University Press, 2010.

  3. Vincent Blondel and John N. Tsitsiklis, "NP-hardness of some linear control design problems", SIAM Journal on Control and Optimization, vol. 35, no. 6, pp. 2118-2127, 1997.

  4. Sadra Sadraddini and Russ Tedrake, "Robust Output Feedback Control with Guaranteed Constraint Satisfaction", In the Proceedings of the 23rd ACM International Conference on Hybrid Systems: Computation and Control, pp. 12, April, 2020.
