Date of Award

Fall 12-8-2022

Access Type

Thesis - Open Access

Degree Name

Master of Science in Aerospace Engineering

Department

Aerospace Engineering

Committee Chair

Vladimir Golubev

First Committee Member

Snorri Gudmundsson

Second Committee Member

Richard Prazenica

Third Committee Member

William MacKunis

College Dean

James W. Gregory

Abstract

Dynamic soaring (DS) is a bio-inspired flight maneuver in which energy can be gained by flying through regions of vertical wind gradient, such as the wind shear layer. With reinforcement learning (RL), a fixed-wing unmanned aerial vehicle (UAV) can be trained to perform DS maneuvers optimally for a variety of wind shear conditions. To accomplish this task, a 6-degree-of-freedom (6DoF) flight simulation environment based on an off-the-shelf unmanned aerobatic glider was developed in MATLAB and Simulink. A combination of high-fidelity Reynolds-Averaged Navier-Stokes (RANS) computational fluid dynamics (CFD) in ANSYS Fluent and the low-fidelity vortex lattice method (VLM) in Surfaces was employed to build a complete aerodynamic model of the UAV. Deep deterministic policy gradient (DDPG), an actor-critic RL algorithm, was used to train a closed-loop Path Following (PF) agent and an Unguided Energy-Seeking (UES) agent. Several generations of the PF agent are presented, with the final generation capable of controlling the climb and turn rate of the UAV to follow a closed-loop waypoint path with variable altitude; to perform loitering DS, it must be paired with a waypoint-optimizing agent. The UES agent was designed to perform traveling DS in a fixed wind shear condition. During training it was shown to extract energy from the wind shear and extend flight time, but it did not achieve sustained dynamic soaring. Further RL training is required for both agents. Recommendations on how to deploy an RL agent on a physical UAV are discussed.
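The wind shear layer driving DS is commonly modeled as a smooth horizontal-wind profile that increases with altitude. As a minimal sketch (not the thesis's MATLAB/Simulink implementation), a logistic shear profile of the kind often used in dynamic-soaring studies can be written as follows; the parameter names `w_max`, `h_ref`, and `delta` are illustrative assumptions, not values from the thesis:

```python
import math

def wind_shear(altitude_m, w_max=10.0, h_ref=5.0, delta=2.0):
    """Logistic horizontal-wind profile for a wind shear layer.

    altitude_m : altitude above the surface [m]
    w_max      : free-stream wind speed well above the layer [m/s] (assumed)
    h_ref      : altitude of the layer midpoint [m] (assumed)
    delta      : shear-layer thickness scale [m] (assumed)
    Returns the horizontal wind speed at the given altitude [m/s].
    """
    return w_max / (1.0 + math.exp(-(altitude_m - h_ref) / delta))

def wind_gradient(altitude_m, w_max=10.0, h_ref=5.0, delta=2.0, dh=1e-3):
    """Vertical gradient dW/dh by central difference; this gradient is
    the quantity a DS maneuver exploits to extract energy."""
    return (wind_shear(altitude_m + dh, w_max, h_ref, delta)
            - wind_shear(altitude_m - dh, w_max, h_ref, delta)) / (2.0 * dh)
```

In a simulation environment like the one described, such a profile would be sampled each time step to perturb the UAV's airspeed vector; the gradient is strongest at the layer midpoint, which is where climbing into the headwind yields the largest energy gain.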
