Date of Award

Spring 2022

Document Type

Thesis - Open Access

Degree Name

Doctor of Philosophy in Aerospace Engineering

Department

Aerospace Engineering

Committee Chair

Dr. Richard Prazenica

First Committee Member

Dr. Troy Henderson

Second Committee Member

Dr. K. Merve Dogan

Third Committee Member

Dr. Morad Nazari

Fourth Committee Member

Dr. Sergey Drakunov

Abstract

In this work, the model predictive control problem is extended to include not only open-loop control sequences but also state-feedback control laws, obtained by directly optimizing the parameters of a control policy. Additionally, continuous cost functions are developed that allow the control policy to be trained to make discrete decisions, a task typically handled by model-free learning algorithms. This general control policy encompasses a wide class of functions, permits both online and offline optimization, and adds robustness to unmodeled dynamics and external disturbances. General formulations for nonlinear discrete-time dynamics and abstract cost functions are developed for both deterministic and stochastic problems. Analytical solutions are derived for linear cases and compared to existing theory, such as the classical linear quadratic regulator. It is shown that, provided certain assumptions hold, there exists a finite horizon over which a constant linear state-feedback control law stabilizes a nonlinear system about the origin. Several control policy architectures are used to regulate the cart-pole system in deterministic and stochastic settings, and neural network-based policies are trained to analyze and intercept bodies following stochastic projectile motion.
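The direct policy-parameter optimization described in the abstract can be illustrated with a minimal sketch: a constant linear state-feedback gain K is tuned by gradient descent on a finite-horizon quadratic rollout cost for a discrete-time linear system. All numerical values here (the double-integrator A and B, the weights Q and R, the horizon, the initial state) are illustrative assumptions, not taken from the dissertation; in this linear quadratic setting the resulting gain could be checked against the classical LQR solution mentioned above.

```python
import numpy as np

# Illustrative double-integrator model; A, B, Q, R, the horizon N, and x0
# are assumed values for this sketch, not taken from the dissertation.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)          # state cost weight
R = np.array([[0.1]])  # control cost weight
N = 50                 # finite rollout horizon
x0 = np.array([1.0, 0.0])

def rollout_cost(K):
    """Finite-horizon quadratic cost of the closed loop x+ = (A - B K) x."""
    x, cost = x0.copy(), 0.0
    for _ in range(N):
        u = -K @ x
        cost += x @ Q @ x + u @ R @ u
        x = A @ x + B @ u
    return cost

def numerical_grad(K, eps=1e-5):
    """Central-difference gradient of the rollout cost w.r.t. the gain K."""
    g = np.zeros_like(K)
    for i in range(K.size):
        dK = np.zeros_like(K)
        dK.flat[i] = eps
        g.flat[i] = (rollout_cost(K + dK) - rollout_cost(K - dK)) / (2 * eps)
    return g

# Directly optimize the feedback gain, using a backtracking line search so
# each accepted step is guaranteed not to increase the rollout cost.
K = np.zeros((1, 2))
for _ in range(300):
    g = numerical_grad(K)
    step = 1e-2
    while step > 1e-10 and rollout_cost(K - step * g) > rollout_cost(K):
        step *= 0.5
    K = K - step * g
```

The same rollout-and-optimize loop carries over unchanged to nonlinear dynamics or richer policy parameterizations (e.g., a small neural network in place of K), which is what makes direct policy-parameter optimization more general than computing an open-loop control sequence.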
