Is this project an undergraduate, graduate, or faculty project?
Graduate
Group
What campus are you from?
Daytona Beach
Authors' Class Standing
Muhammad Najjar, Graduate Student; Dumindu Samaraweera, Faculty
Lead Presenter's Name
Muhammad Najjar
Faculty Mentor Name
Dumindu Samaraweera
Abstract
Federated Learning (FL) is a machine learning approach that enables distributed or edge devices to collaboratively train a model without sharing their local data, thereby preserving privacy. Because of this privacy-preserving design, FL has been widely applied in various fields, particularly in autonomous and connected vehicles: each device (client) contributes local gradient updates to a central aggregator without ever transmitting its raw sensor data. However, despite these guarantees, recent research has identified a critical vulnerability in FL: model updates exchanged during communication can leak information about clients' local data. Homomorphic Encryption (HE) has been proposed as a countermeasure, securing model updates during both communication and aggregation. While HE strengthens the security of FL, it is computationally intensive and introduces significant overhead, making it impractical for many real-world applications, including connected and autonomous vehicles. This study explores the feasibility of integrating HE into FL systems in practical settings. Our experiments identify the challenges of combining HE with FL, evaluating the trade-offs in computation time, communication cost, and accuracy across different encryption parameter configurations. By testing HE across a range of machine learning models, including large and complex deep learning architectures, we quantify the encryption overhead and assess its impact on model performance. The goal is to determine the practical feasibility and usability of HE in real-world applications, even in scenarios involving computationally intensive models.
Did this research project receive funding support from the Office of Undergraduate Research?
No
Implementing Homomorphic Encryption in Federated Learning Architectures: Challenges and Way Forward
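As a hedged illustration of the HE-based secure aggregation the abstract describes, the sketch below uses the TenSEAL library's CKKS scheme to average encrypted client updates. The encryption parameters (poly_modulus_degree, coeff_mod_bit_sizes, global_scale), the three-client setup, and the random update vectors are illustrative assumptions, not the configurations or models evaluated in this study.

import tenseal as ts  # pip install tenseal
import numpy as np

# Illustrative CKKS encryption parameters; the study sweeps such
# configurations to measure the time/bandwidth/accuracy trade-offs.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40

# Hypothetical local model updates from three FL clients
# (flattened gradient/weight vectors; real updates come from training).
client_updates = [np.random.randn(4096) for _ in range(3)]

# Each client encrypts its update before sending it to the server.
encrypted_updates = [ts.ckks_vector(context, u.tolist()) for u in client_updates]

# The server sums the ciphertexts and scales by 1/n: encrypted federated
# averaging, performed without decrypting any individual update.
aggregate = encrypted_updates[0]
for enc in encrypted_updates[1:]:
    aggregate = aggregate + enc
encrypted_average = aggregate * (1.0 / len(encrypted_updates))

# Clients, who hold the secret key, decrypt the averaged global update.
# (In deployment the server would receive only a public copy of the
# context, e.g. via context.make_context_public(), so it can compute
# on ciphertexts but cannot decrypt them.)
global_update = np.array(encrypted_average.decrypt())

The homomorphic addition and scalar multiplication above are exactly the operations federated averaging needs, which is why partially homomorphic schemes suffice for aggregation; the overhead measured in this study comes from the ciphertext expansion and the cost of these encrypted operations relative to plaintext aggregation.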