ORCID Number

0009-0001-6152-8613

Date of Award

Spring 2026

Access Type

Thesis - Open Access

Degree Name

Master of Science in Data Science

Department

Mathematics

Committee Chair

Sirani M. Perera

Committee Chair Email

pereras2@erau.edu

Committee Advisor

Sirani M. Perera

Committee Advisor Email

pereras2@erau.edu

First Committee Member

Berker Pekoz

First Committee Member Email

pekozb@erau.edu

Second Committee Member

Xianqi Li

Second Committee Member Email

xli@fit.edu

College Dean

Jayath Raghavan

Abstract

Conventional neural networks face significant challenges due to high computational costs, large parameter counts, and reliance on backpropagation, which restrict their application in resource-constrained and real-time settings. To address these challenges, this thesis proposes three structured neural network (NN) architectures grounded in the theory of sparse and self-contained factorizations of transforms, with applications to image compression, reconstruction, classification, encryption, and adaptive wideband multi-beam beamforming.

The first architecture, DCTrix-Net, replaces conventional spatial convolution with a highly sparse factorization of the discrete cosine transform (DCT) complemented by Toeplitz-structured weight initialization. On the DigiFace1M dataset, it achieves at least a 97% FLOP reduction over CNNs, R-CNN, Faster R-CNN, DCT-Net, and YOLOv11s, a 50% parameter reduction compared to CNNs, R-CNN, and Faster R-CNN, and the lowest inference time among all benchmarked NNs, on the order of 10⁻⁴ seconds.

The second architecture, DSTrans-Net, performs image classification and encryption entirely in the frequency domain via learned pointwise discrete sine transform (DST) filters based on the highly sparse factorization of the DST. On the STL-10 dataset at 128 × 128 resolution, it attains the lowest FLOP count, at 0.02399 GFLOPs, and the fewest parameters, at 6.75 M, among the benchmarked networks, including CNN, R-CNN, Faster R-CNN, EfficientNet-B0, and ViT-B/16, marking a reduction of at least 90% in FLOPs relative to all benchmark NNs and of at least 62% relative to CNN, R-CNN, Faster R-CNN, and EfficientNet-B0.

The third architecture, the Forward-Forward Beamformer (FFB) algorithm, learns adaptive true-time-delay (TTD) wideband multi-beam beamformers. It employs Delay Vandermonde Matrix (DVM)-based weight matrices, eliminating the need for backpropagation and achieving training times 99% faster than conventional neural networks across the 7–15 GHz range for antenna arrays of 8, 16, and 32 elements, with beamformed-signal accuracy on the order of 10⁻².

In all three architectures, structured transform-based weight and sub-weight matrices offer a principled pathway to low-complexity, lightweight structured NNs tailored to applications in image and signal processing.
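To make the DCTrix-Net idea concrete, below is a minimal NumPy sketch of transform-domain filtering: a patch is mapped into the DCT domain, weighted by a Toeplitz-structured matrix, and mapped back. The dense DCT construction, the function names, the 8 × 8 patch size, and the elementwise weighting are illustrative assumptions; the thesis's actual sparse DCT factorization and layer design are not reproduced here.

import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II matrix C (C @ C.T = I); C @ x is the DCT of x."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    C[0, :] = 1.0 / np.sqrt(n)
    return C

def toeplitz_init(n: int, rng: np.random.Generator) -> np.ndarray:
    """Toeplitz-structured weights: an n x n matrix from only 2n - 1 parameters."""
    c = rng.standard_normal(2 * n - 1) / np.sqrt(n)
    idx = np.arange(n)
    return c[(n - 1) + idx[:, None] - idx[None, :]]

# Transform-domain "convolution": DCT -> structured weighting -> inverse DCT.
n = 8
rng = np.random.default_rng(0)
C = dct_matrix(n)
W = toeplitz_init(n, rng)
patch = rng.standard_normal((n, n))
coeffs = C @ patch @ C.T          # 2-D DCT of the patch
out = C.T @ (W * coeffs) @ C      # weight in the DCT domain, transform back

A Toeplitz matrix of size n carries only 2n − 1 free parameters instead of n², which illustrates the structural source of the parameter savings cited above.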
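The frequency-domain pointwise filtering behind DSTrans-Net can be sketched similarly: an orthonormal DST followed by a learned per-coefficient gain, with no inverse transform back to pixels. Everything below (the function name, the DST-I variant, the sizes, the random input, and the toy linear head) is an assumption for illustration; the thesis builds instead on a highly sparse factorization of the DST.

import numpy as np

def dst_matrix(n: int) -> np.ndarray:
    """Orthonormal, symmetric DST-I matrix S (S @ S = I)."""
    k = np.arange(1, n + 1)
    return np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * np.outer(k, k) / (n + 1))

# Classification stays in the frequency domain: transform once, then apply
# a learned per-coefficient (pointwise) gain; no inverse transform needed.
n = 16
rng = np.random.default_rng(0)
S = dst_matrix(n)
gains = 1.0 + 0.1 * rng.standard_normal((n, n))   # learnable pointwise filter
img = rng.standard_normal((n, n))                 # stand-in for an image channel
feat = gains * (S @ img @ S.T)                    # pointwise-filtered DST coefficients
logits = feat.reshape(-1) @ rng.standard_normal((n * n, 10)) * 0.01  # toy linear head

Because the pointwise filter touches each coefficient exactly once, its cost grows linearly in the number of coefficients, which is consistent with the FLOP reductions reported above.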
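Finally, for the FFB, here is a hedged sketch of the two ingredients named above: a Delay Vandermonde Matrix, whose (m, k) entry is the k-th beam's delay node raised to the antenna index m, and the Forward-Forward "goodness" score that replaces backpropagation with local, per-layer objectives. The exact DVM convention (sign, spacing, node definition) and the goodness form follow the general TTD-beamforming and Forward-Forward literature and are assumptions, not the thesis's implementation.

import numpy as np

C0 = 3e8  # speed of light (m/s)

def delay_vandermonde(n_ant: int, f_hz: float, angles_rad: np.ndarray,
                      spacing_m: float) -> np.ndarray:
    """DVM with entries V[m, k] = z_k**m, z_k = exp(-2j*pi*f*tau_k),
    where tau_k is the per-element true-time delay steering beam k."""
    tau = spacing_m * np.sin(angles_rad) / C0        # element-to-element delay
    z = np.exp(-2j * np.pi * f_hz * tau)             # Vandermonde nodes, one per beam
    return z[None, :] ** np.arange(n_ant)[:, None]   # shape (n_ant, n_beams)

def goodness(h: np.ndarray) -> float:
    """Forward-Forward layer score: sum of squared activation magnitudes.
    Each layer is trained locally to push this above a threshold for
    positive samples and below it for negative ones -- no backprop."""
    return float(np.sum(np.abs(h) ** 2))

# Example: a 16-element array at 10 GHz (inside the 7-15 GHz band),
# half-wavelength spacing, four beams.
f = 10e9
V = delay_vandermonde(16, f, np.deg2rad([-40.0, -10.0, 10.0, 40.0]),
                      spacing_m=C0 / (2 * f))
x = np.exp(2j * np.pi * f * np.arange(16) * 1e-12)   # toy narrowband snapshot
print(goodness(V.conj().T @ x))                      # goodness of the beam outputs

Because each layer's objective depends only on its own activations, training requires a single forward pass per layer, which is the structural reason the FFB can avoid backpropagation entirely.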
