Journal of Aviation/Aerospace Education & Research

Volume 33, Issue 4

Keywords: unmanned aerial vehicle (UAV), safety distance, reinforcement learning, deep deterministic policy gradient, flight safety, airspace efficiency, UAV fleet operation

Abstract

In aviation, safety is a critical cornerstone, and the operation of Unmanned Aerial Vehicle (UAV) systems is deeply connected with this principle. Thorough analysis, rigorous simulation, and testing of aircraft systems are essential to avoid severe safety hazards. This paper addresses a specific safety issue in UAV operations: maintaining minimum safety distances under fluctuating wind conditions. The study introduces a solution based on a Deep Deterministic Policy Gradient (DDPG) model, a reinforcement learning method. The DDPG model was trained in a simulated environment built with the Gazebo simulator, using wind and gust values derived from historical records at the KLAF airport at Purdue University. The model's performance was evaluated on its ability to maintain safe distances under these conditions. The results indicate that the DDPG model can predict safety distances with relatively low error across different weather conditions. The findings contribute to safe UAV operations and suggest that reinforcement learning methods could be further applied to enhance airspace efficiency and obstruction avoidance for UAVs.
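The core idea behind DDPG is the deterministic policy gradient: the actor's parameters are adjusted along the gradient of the critic's value estimate with respect to the action. A full DDPG implementation uses neural actor and critic networks, a replay buffer, and target networks; the minimal sketch below shows only the underlying update rule on a toy problem. All specifics here are illustrative assumptions, not values from the paper: a linear actor maps wind speed to a safety distance, and the critic is assumed known in closed form as the negative squared error to a hypothetical "true" safe distance d*(s) = 5 + 0.8·s.

```python
# Hypothetical toy relation (NOT from the paper): minimum safe
# distance grows linearly with wind speed s in m/s.
def true_safe_distance(s):
    return 5.0 + 0.8 * s

# In this sketch the critic is known in closed form:
# Q(s, a) = -(a - d*(s))^2, so its action gradient is
# dQ/da = -2 * (a - d*(s)). In real DDPG, Q is a learned network.
def dq_da(s, a):
    return -2.0 * (a - true_safe_distance(s))

# Linear deterministic actor: a = w0 + w1 * s.
w0, w1 = 0.0, 0.0
lr = 0.005

# Deterministic policy gradient ascent on J:
# dJ/dw = dQ/da * da/dw, evaluated at the actor's own action.
for _ in range(5000):
    for s in range(11):          # wind speeds 0..10 m/s
        a = w0 + w1 * s
        g = dq_da(s, a)
        w0 += lr * g * 1.0       # da/dw0 = 1
        w1 += lr * g * s         # da/dw1 = s

print(w0, w1)                    # converges toward 5.0 and 0.8
```

Because the critic's action gradient points toward larger Q, the actor drifts to the action the critic prefers in each state; swapping the closed-form critic for a learned network and adding replay and target networks recovers the full DDPG algorithm.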
