Is this project an undergraduate, graduate, or faculty project?
Undergraduate
group
What campus are you from?
Daytona Beach
Authors' Class Standing
Junior
Lead Presenter's Name
Sirio Jansen-Sanchez
Faculty Mentor Name
Leo Ghelarducci
Abstract
This research aims to develop a semi-supervised model leveraging contrastive learning for remote sensing, with a primary focus on sonar data processing and potential adaptation to radar systems. Remote sensing technologies like sonar and radar detect objects and environments using reflected signals—sonar with sound waves for underwater mapping and radar with electromagnetic waves for atmospheric or terrestrial detection. The focus of this work is contrastive learning, which enables the model to differentiate between objects detected in sonar scans, such as buoys or gates, by learning distinct representations for each object. The dataset comprises sonar scans covering a 400-gradian (full-circle) environment, capturing intensity readings at various distances along with timestamp and angle data. These sonar scans are processed sequentially using Long Short-Term Memory (LSTM) layers, which capture temporal patterns while compressing and denoising the data, reducing computational load and improving object detection accuracy. Additional features such as normalized scan data, angle differences, and time shifts are incorporated to enhance the model's performance. Although the research is currently centered on sonar, the contrastive learning framework and the deep learning techniques employed are highly applicable to radar systems, as both sonar and radar face similar challenges in signal processing and object detection. This research highlights how advancements in sonar data processing through contrastive learning and RNN autoencoders offer a unified framework for enhanced object detection and environmental mapping across remote sensing technologies.
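The feature engineering and contrastive objective described above can be sketched as follows. This is a minimal illustrative example, not the project's implementation: the function names, the min-max normalization, and the use of the NT-Xent loss (a standard contrastive loss) are assumptions for illustration; the abstract does not specify which contrastive loss or normalization scheme was used.

```python
import numpy as np

def preprocess_scan(intensities, angles_grad, timestamps):
    """Hypothetical feature prep for one sonar sweep: min-max normalize
    intensities, and derive angle differences and time shifts between
    consecutive beams (features named in the abstract)."""
    x = np.asarray(intensities, dtype=float)
    x_norm = (x - x.min()) / (x.max() - x.min() + 1e-8)
    # Wrap angle differences into (-200, 200] gradians (full circle = 400).
    d_ang = np.diff(np.asarray(angles_grad, dtype=float))
    d_ang = (d_ang + 200.0) % 400.0 - 200.0
    d_t = np.diff(np.asarray(timestamps, dtype=float))
    return x_norm, d_ang, d_t

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over two views z1, z2 (N x D embeddings,
    e.g. LSTM-encoder outputs): matching rows are positive pairs, all
    other rows in the batch serve as negatives."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    n = z1.shape[0]
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()
```

In a contrastive setup like this, two augmented views of the same sonar scan form a positive pair; minimizing the loss pulls their embeddings together while pushing apart embeddings of different objects (e.g. buoys vs. gates).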
Did this research project receive funding support from the Office of Undergraduate Research?
Yes, Student Internal Grant
Enhancing Active Detection Using Semi-Supervised Contrastive Learning in Remote Sensing