ORCID Number

0000-0002-3645-8481

Date of Award

Summer 8-13-2025

Access Type

Dissertation - Open Access

Degree Name

Doctor of Philosophy in Aerospace Engineering

Department

Aerospace Engineering

Committee Chair

Troy Henderson

Committee Chair Email

hendert5@erau.edu

First Committee Member

Richard Prazenica

First Committee Member Email

prazenir@erau.edu

Second Committee Member

Hao Peng

Second Committee Member Email

pengh2@erau.edu

Third Committee Member

M. Ilhan Akbas

Third Committee Member Email

akbasm@erau.edu

Fourth Committee Member

Reza Karimi

Fourth Committee Member Email

reza.r.karimi@jpl.nasa.gov

College Dean

James W. Gregory

Abstract

This dissertation investigates the application of reinforcement learning (RL) to the design and optimization of low-thrust spacecraft trajectories, with an emphasis on autonomy, adaptability, and robustness in the presence of system uncertainties and unmodeled perturbations. Classical approaches to low-thrust trajectory design are predominantly grounded in optimal control theory, which relies on the availability of precise dynamical models and often requires problem-specific reformulation and solver tuning. While optimal control methods offer high accuracy under deterministic conditions, their sensitivity to stochastic disturbances, together with their computational burden in highly nonlinear or uncertain environments, poses significant challenges for future autonomous space missions.

To address these challenges, this research proposes a reinforcement learning framework in which spacecraft learn to generate continuous low-thrust control actions through direct interaction with the environment. The spacecraft dynamics are formulated using the two-body problem equations of motion and Gauss’ variational equations expressed in modified equinoctial elements, allowing for efficient handling of low-thrust propulsion and long-duration transfers. Several RL algorithms, including Proximal Policy Optimization and Soft Actor-Critic, are implemented and evaluated across a diverse set of trajectory design problems: orbit-raising maneuvers, inclination change maneuvers, combined orbital maneuvers, and an asteroid rendezvous mission targeting near-Earth asteroid Apophis.
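
For reference, the modified equinoctial elements are (p, f, g, h, k, L), and Gauss' variational equations in these elements take the standard form below, with a_r, a_t, a_n the radial, transverse, and normal components of the perturbing (thrust) acceleration and μ the gravitational parameter. This is the commonly cited form of the equations, not necessarily the exact statement used in the dissertation:

    \begin{aligned}
    \dot{p} &= \frac{2p}{w}\sqrt{\frac{p}{\mu}}\,a_t \\
    \dot{f} &= \sqrt{\frac{p}{\mu}}\left[\,a_r\sin L + \frac{(w+1)\cos L + f}{w}\,a_t - \frac{g\,(h\sin L - k\cos L)}{w}\,a_n\,\right] \\
    \dot{g} &= \sqrt{\frac{p}{\mu}}\left[\,-a_r\cos L + \frac{(w+1)\sin L + g}{w}\,a_t + \frac{f\,(h\sin L - k\cos L)}{w}\,a_n\,\right] \\
    \dot{h} &= \sqrt{\frac{p}{\mu}}\,\frac{s^2}{2w}\,a_n\cos L \\
    \dot{k} &= \sqrt{\frac{p}{\mu}}\,\frac{s^2}{2w}\,a_n\sin L \\
    \dot{L} &= \sqrt{\mu p}\left(\frac{w}{p}\right)^{2} + \frac{1}{w}\sqrt{\frac{p}{\mu}}\,(h\sin L - k\cos L)\,a_n
    \end{aligned}

where w = 1 + f cos L + g sin L and s^2 = 1 + h^2 + k^2.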

The RL agents are trained under both deterministic and stochastic conditions, with stochastic perturbations modeled as zero-mean Gaussian white noise accelerations to simulate realistic environmental disturbances such as solar radiation pressure, navigation errors, and control noise. The performance of the RL-generated solutions is rigorously assessed through direct comparison with classical optimal control results obtained using pseudospectral methods, as well as through extensive Monte Carlo simulations that quantify robustness and terminal accuracy under uncertainty. Across all case studies, the RL policies successfully generate feasible low-thrust trajectories that achieve the target orbital conditions, demonstrating resilience to perturbations and generalization to new initial conditions.
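
A minimal sketch of how such a robustness assessment can be structured is given below; the `policy` and `propagate` callables, the noise level, and the error metric are illustrative placeholders rather than interfaces or values taken from the dissertation:

    import numpy as np

    def perturbed_acceleration(a_cmd, sigma, rng):
        """Add zero-mean Gaussian white-noise acceleration (standing in for
        disturbances such as solar radiation pressure, navigation errors,
        and control noise) to the commanded radial/transverse/normal
        thrust acceleration."""
        return a_cmd + rng.normal(0.0, sigma, size=3)

    def monte_carlo_terminal_error(policy, propagate, x0, p_target,
                                   sigma=1e-8, n_runs=1000, n_steps=5000,
                                   seed=0):
        """Roll out a trained policy under stochastic perturbations and
        collect terminal-error statistics over a Monte Carlo ensemble."""
        rng = np.random.default_rng(seed)
        errors = []
        for _ in range(n_runs):
            x = x0.copy()                        # modified equinoctial state
            for _ in range(n_steps):
                a_cmd = policy(x)                # continuous low-thrust action
                a = perturbed_acceleration(a_cmd, sigma, rng)
                x = propagate(x, a)              # one integration step of the GVEs
            errors.append(abs(x[0] - p_target))  # e.g., semi-latus rectum error
        return np.mean(errors), np.std(errors)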

The RL-based solutions offer distinct advantages in flexibility and adaptability, as they require no explicit knowledge of the perturbation models. These results suggest that reinforcement learning can serve as an effective complementary tool for trajectory design, particularly in scenarios where ground communication is limited or system uncertainties preclude the use of fully deterministic guidance strategies. Potential applications include autonomous fault recovery, initial transfer design, and support for onboard decision-making during deep space operations.

This dissertation contributes to the growing body of research on machine learning for space applications by systematically evaluating the strengths and limitations of RL-based guidance in low-thrust trajectory optimization, highlighting the feasibility of deploying such methods in future autonomous missions. The findings provide insight into reward function design, algorithm selection, and robustness assessment, laying the groundwork for continued exploration of learning-based methods in astrodynamics.
