
Iterative learning control with a selective learning strategy for swinging up a pendulum

From Fachgebiet Regelungssysteme TU Berlin



Abstract

Swinging up a pendulum on a cart is a well-known control task that is typically solved by real-time feedback control. This approach fails if the pendulum angle is not available in real time, e.g. due to large or variable measurement delays. However, the task can be solved in multiple trials by applying feedforward inputs that are improved from trial to trial by Iterative Learning Control (ILC). This project demonstrates that an ILC can be used for trajectory tracking close to the singularities and the unstable equilibrium of a non-linear system. For this specific application, it turns out that the convergence of our ILC is at least two orders of magnitude faster than that achieved by other methods which avoid feedback and do not rely on a suitable initial input trajectory.

People involved

Description

Fig. 1: Sketch of the hardware setup consisting of a pendulum attached to a cart that can move horizontally on a rail.

This project is about the design of an ILC for pendulum swing-up by angle trajectory tracking. The system (Fig. 1) consists of a horizontal rail of finite length with a one-dimensionally movable cart thereon. A pendulum rod is rotatably mounted on the cart. The aim is to transfer the rod from its downward position to a position close to its upright position by moving the cart on the rail. We propose an ILC for trajectory tracking that requires neither a good initial input nor computationally expensive calculations. Nevertheless, the pendulum rod should pass the singular points of the system and reach its unstable upper equilibrium within a small number of iterations. We refrain from using real-time feedback control of the pendulum angle and aim at good robustness against parameter uncertainties.
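For orientation, a minimal sketch of such a pendulum-on-cart model is given below. The parameter values, the damping term, and the use of the cart acceleration as the model input (the testbed itself is commanded via the cart velocity) are illustrative assumptions, not the identified model of the setup.

```python
import numpy as np

# Assumed, purely illustrative parameters (not the identified testbed model)
g, l, d = 9.81, 0.5, 0.05   # gravity [m/s^2], effective rod length [m], damping [1/s]
dt = 0.01                   # sample time [s]

def pendulum_step(theta, omega, cart_acc):
    """One explicit-Euler step of a simple pendulum-on-cart model.

    theta:    pendulum angle [rad], 0 = hanging down, pi = upright
    omega:    angular velocity [rad/s]
    cart_acc: horizontal cart acceleration [m/s^2]
    """
    # The cart enters through the cos(theta) term, so its influence on the
    # pendulum angle vanishes when the rod is horizontal (theta = pi/2).
    alpha = -(g / l) * np.sin(theta) - (cart_acc / l) * np.cos(theta) - d * omega
    return theta + dt * omega, omega + dt * alpha
```

The cos(theta) factor already hints at the singular points mentioned above: when the rod is horizontal, the cart cannot influence the pendulum angle at all.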

During each iteration, feedforward control is applied. Subsequently, an optimized input trajectory for the following iteration is determined from the recorded measurements. The controller design is based on a modified plant-inversion approach, which requires a linearization along the input and output trajectories. For a linear system and under ideal circumstances, the inversion-based approach leads to one-step convergence: the desired trajectory is reached in the second iteration, regardless of the initial input sequence. In the present case of a non-linear system, however, there are two major limitations:

Firstly, when the pendulum rod is close to horizontal, the input (cart velocity) hardly influences the output (pendulum angle) of the system. Since large changes in the cart velocity lead to very small changes in the pendulum angle, the inversion-based approach reacts to small tracking errors with very large input signal changes. To avoid this problem, we prevent learning from these trajectory segments.

Secondly, if the tracking error is large, the linearization is no longer guaranteed to be a good approximation of the true system dynamics. Learning from large tracking errors might therefore further increase these errors and lead to unnecessarily large cart velocities and positions. Therefore, we restrict the learning process to samples with a small tracking error; both restrictions are illustrated in the sketch below.
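To make the update rule and the selective learning strategy concrete, the following sketch shows an inversion-based ILC update in lifted (trial-domain) form with a per-sample mask. The lifted plant matrix G (obtained from the linearization along the last trial), the sensitivity measure based on cos(theta), the thresholds, and all names are illustrative assumptions, not the published design.

```python
import numpy as np

def selective_ilc_update(u, y_des, y_meas, G, err_max=0.2, sens_min=0.15):
    """One ILC update in lifted (trial-domain) form with selective learning.

    u:      input trajectory applied in the last trial, shape (N,)
    y_des:  desired pendulum-angle trajectory, shape (N,)
    y_meas: pendulum-angle trajectory measured in the last trial, shape (N,)
    G:      lifted plant matrix, i.e. the linearization of the output with
            respect to the input along the last trial, shape (N, N)
    err_max, sens_min: assumed thresholds for the selective learning strategy
    """
    e = y_des - y_meas                       # tracking error of the last trial

    # Selective learning strategy (both criteria checked per sample):
    #  - ignore samples with large error, where the linearization may be poor,
    #  - ignore samples where the rod is close to horizontal (|cos(theta)| small),
    #    since there the cart hardly influences the pendulum angle.
    mask = (np.abs(e) < err_max) & (np.abs(np.cos(y_meas)) > sens_min)
    e_sel = np.where(mask, e, 0.0)

    # Plant-inversion update. With mask == True everywhere and an exactly known
    # linear plant y = G u, this reproduces y_des in the very next trial
    # (one-step convergence).
    return u + np.linalg.pinv(G) @ e_sel
```

Applied from trial to trial, the set of selected samples typically grows as the tracking error shrinks, so the swing-up is learned progressively along the time axis.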

Fig. 2: Input trajectories (top) and desired and measured output trajectories (bottom) during iterations one to six.

The proposed ILC is evaluated in an experimental testbed. A desired output trajectory is chosen that leads to swing-up in about seven seconds. The initial input signal is constantly zero, i.e. the ILC must learn the entire swing-up from scratch. Fig. 2 shows the input and output signals for the first six trials. Note that the first input update used only the first 2.5 s of the measured error, the second update was based on the first 3 s, and so on. After five learning steps, the measured angle y is close to the desired trajectory at all times, and the pendulum swing-up is successful. The learning process is also visualized in an animation (Fig. 3).

Fig. 3: Animation of measurement data from experiments on a real pendulum system: measured angle output (red) in comparison to the reference (gray), as well as the root-mean-square deviation between both, during iterations one to eight.

To investigate the robustness of the algorithm, the same experimental testbed is used again, but the algorithm is parameterized with several parameter sets that deviate from the true values. Although the proposed approach is based on plant inversion, it turns out not to be sensitive to parameter uncertainties. All model parameters may deviate from their true values by at least a factor of 2 (i.e. lie between 50 % and 200 % of them), except for the length of the pendulum, which should lie between 50 % and 110 % of its true value.

Related Publications

J. Beuchert, J. Raisch, T. Seel. Design of an Iterative Learning Control with a Selective Learning Strategy for Swinging up a Pendulum. In European Control Conference (ECC), 2018 (accepted).

T. Seel, T. Schauer, J. Raisch. Monotonic Convergence of Iterative Learning Control with Variable Pass Length. International Journal of Control, 90 (3):409–422, 2017.
M. Guth, T. Seel, J. Raisch. Iterative Learning Control with Variable Pass Length Applied to Trajectory Tracking on a Crane with Output Constraints. In Proceedings of the 52nd IEEE Conference on Decision and Control, pages 6676–6681, Firenze, Italy, December 2013.
T. Seel, T. Schauer, J. Raisch. Iterative Learning Control for Variable Pass Length Systems. In Proceedings of the 18th IFAC World Congress, pages 4880–4885, Milan, Italy, 2011.
