Fusion of Deep Learning and Stereo Depth Sensing for Real-Time Aerial Robotic Perception

Author Information

Garrett Seyler

Individual

What campus are you from?

Daytona Beach

Authors' Class Standing

Garrett Seyler, Senior

Lead Presenter's Name

Garrett Seyler

Faculty Mentor Name

Cagri Kilic

Abstract

This project presents a vision-based perception pipeline developed in the Space Robotics and Generative Estimation (SRGE) Lab to enable real-time sensing and classification for terrestrial robotic platforms. The system integrates a YOLOv11 object detection model with an Intel RealSense D455 depth camera to identify and estimate the range of aerial objects from live image streams. Trained on a dataset of over 2,500 images, the model outputs spatially correlated detection and depth information, providing both semantic and geometric awareness. The primary contribution of this work is a modular, vision-only framework that fuses deep learning inference with stereo depth sensing to achieve spatial understanding on compact, low-power hardware. This approach demonstrates a practical alternative to LiDAR-based perception systems, offering a scalable method for future research in autonomous tracking, swarm coordination, counter-UAS operations, and even space debris collection.
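
The abstract does not include implementation details, but a detection-plus-depth fusion loop of the kind described might look like the following minimal Python sketch, assuming the ultralytics YOLO API and the pyrealsense2 SDK. The weights filename, stream resolution, and center-pixel depth lookup are illustrative assumptions, not the authors' actual implementation:

# Hypothetical sketch: fuse YOLO detections with RealSense depth to
# estimate the range of each detected aerial object.
import numpy as np
import pyrealsense2 as rs
from ultralytics import YOLO

# Custom-trained YOLOv11 weights; "srge_aerial.pt" is a placeholder name.
model = YOLO("srge_aerial.pt")

# Configure the D455 for color + depth streams (640x480 @ 30 FPS assumed).
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)
align = rs.align(rs.stream.color)  # align depth pixels to the color image

try:
    while True:
        frames = align.process(pipeline.wait_for_frames())
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()
        if not depth_frame or not color_frame:
            continue

        color_image = np.asanyarray(color_frame.get_data())

        # Run inference on the live color frame.
        detections = model(color_image, verbose=False)[0]

        for box in detections.boxes:
            x1, y1, x2, y2 = box.xyxy[0].tolist()
            u = int((x1 + x2) / 2)  # bounding-box center (pixels)
            v = int((y1 + y2) / 2)

            # Query the aligned depth map at the box center for range in meters.
            range_m = depth_frame.get_distance(u, v)
            label = detections.names[int(box.cls[0])]
            print(f"{label}: {range_m:.2f} m at pixel ({u}, {v})")
finally:
    pipeline.stop()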

Did this research project receive funding support from the Office of Undergraduate Research?

No
