Differentially Private Spiking Neural Networks: Enhancing Privacy and Robustness in Social Robotics

AITSAM, Muhammad, CHARDIWALL, Samiulhaq and DI NUOVO, Alessandro (2024). Differentially Private Spiking Neural Networks: Enhancing Privacy and Robustness in Social Robotics. In: Cybersecurity and Human Capabilities through Symbiotic Artificial Intelligence. Proceedings of the 16th International Conference on Global Security, Safety and Sustainability, November 2024. Advanced Sciences and Technologies for Security Applications. Springer. [Book Section]

Abstract
As social robots increasingly integrate into various sectors, concerns over privacy and security become paramount. Social robots, while capable of processing and interacting with sensitive user data, often face significant limitations in processing power and energy efficiency. Spiking Neural Networks (SNNs), inspired by biological neurons, provide a promising solution by offering efficient temporal processing with reduced energy consumption. However, like conventional Artificial Neural Networks (ANNs), SNNs are vulnerable to privacy attacks, such as model inversion and membership inference, which can expose sensitive training data. We propose the use of Differential Privacy (DP) to safeguard user data in SNN models for social robots. We train two models on the MNIST dataset: a baseline SNN and a differentially private SNN (DP-SNN). We evaluate both models under privacy attacks, demonstrating how the DP-SNN mitigates data leakage. We also assess the ability of the DP-SNN to withstand malicious inputs, showing that the noise introduced by differential privacy enhances robustness in addition to preserving privacy. Our results indicate that differentially private SNNs not only maintain strong privacy guarantees but also improve resilience against adversarial attacks, making them well suited to social robots, where both data security and processing efficiency are critical.
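
The record does not include code, but a minimal sketch can illustrate the training setup the abstract describes: an SNN trained on MNIST with DP-SGD, i.e. per-sample gradient clipping plus Gaussian noise, so that a privacy budget (epsilon) can be accounted for. The sketch below assumes snnTorch for the leaky integrate-and-fire neurons and Opacus for DP-SGD; the architecture, number of time steps, noise multiplier, clipping norm and the snnTorch/Opacus pairing are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of DP-SGD training for a small spiking network on MNIST.
# Library choices (snnTorch, Opacus) and all hyperparameters are assumptions;
# the paper's actual implementation is not published with this record.
import torch
import torch.nn as nn
import snntorch as snn
from snntorch import surrogate
from opacus import PrivacyEngine
from torchvision import datasets, transforms

T = 25      # simulation time steps (assumed)
BETA = 0.9  # LIF membrane decay (assumed)

class SNN(nn.Module):
    def __init__(self):
        super().__init__()
        grad = surrogate.fast_sigmoid()          # surrogate gradient for spikes
        self.fc1 = nn.Linear(28 * 28, 256)
        self.lif1 = snn.Leaky(beta=BETA, spike_grad=grad)
        self.fc2 = nn.Linear(256, 10)
        self.lif2 = snn.Leaky(beta=BETA, spike_grad=grad)

    def forward(self, x):
        x = x.flatten(1)
        mem1 = self.lif1.init_leaky()
        mem2 = self.lif2.init_leaky()
        spk_sum = 0
        for _ in range(T):                       # static input repeated over time
            spk1, mem1 = self.lif1(self.fc1(x), mem1)
            spk2, mem2 = self.lif2(self.fc2(spk1), mem2)
            spk_sum = spk_sum + spk2
        return spk_sum / T                       # output spike rate per class

train_set = datasets.MNIST("data", train=True, download=True,
                           transform=transforms.ToTensor())
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)

model = SNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# DP-SGD: clip each per-sample gradient and add calibrated Gaussian noise.
privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    noise_multiplier=1.1,   # assumed; larger values tighten epsilon, cost accuracy
    max_grad_norm=1.0,      # per-sample gradient clipping bound (assumed)
)

for epoch in range(3):
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    eps = privacy_engine.get_epsilon(delta=1e-5)
    print(f"epoch {epoch}: loss={loss.item():.3f}, epsilon={eps:.2f}")
```

Omitting the `make_private` call gives a non-private baseline trained with the same architecture and optimizer, which is the kind of paired setup the abstract describes for comparing the two models under membership-inference and robustness evaluations.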