Get Your Foes Fooled: Proximal Gradient Split Learning for Defense Against Model Inversion Attacks on IoMT Data

Khowaja, S. A., Lee, I. H., Dev, K., Jarwar, M. A. and Qureshi, N. M. F. (2022). Get Your Foes Fooled: Proximal Gradient Split Learning for Defense Against Model Inversion Attacks on IoMT Data. IEEE Transactions on Network Science and Engineering. [Article]

Documents
PDF (Accepted Version). Available under License: All rights reserved.
Abstract
The past decade has seen rapid adoption of Artificial Intelligence (AI), specifically deep learning networks, in the Internet of Medical Things (IoMT) ecosystem. However, it has recently been shown that deep learning networks can be exploited by adversarial attacks that make IoMT vulnerable not only to data theft but also to the manipulation of medical diagnoses. Existing studies consider adding noise to the raw IoMT data or to the model parameters, which not only reduces overall performance on medical inference but is also ineffective against methods such as deep leakage from gradients. In this work, we propose the proximal gradient split learning (PGSL) method for defense against model inversion attacks. The proposed method intentionally attacks the IoMT data while it undergoes the deep neural network training process at the client side. We propose the use of the proximal gradient method to recover gradient maps and a decision-level fusion strategy to improve recognition performance. Extensive analysis shows that PGSL not only provides an effective defense mechanism against model inversion attacks but also helps improve recognition performance on publicly available datasets. We report 14.0%, 17.9%, and 36.9% gains in accuracy over reconstructed and adversarially attacked images, respectively.
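A minimal sketch of the core idea, assuming a PyTorch implementation: in split learning the client runs the early layers and the server the rest, and here a proximal (soft-thresholding) step is applied to the activations before they leave the client, so the transmitted "smashed" data never equals the raw feature maps a model inversion attack would target. The layer sizes, the cut point, the threshold lam, and the toy data are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch (not the authors' released code): split learning with a
# proximal soft-thresholding step on the client-side activations before
# they cross the network to the server. Layer sizes, the cut point, lam,
# and the random data below are illustrative assumptions.
import torch
import torch.nn as nn

def soft_threshold(x: torch.Tensor, lam: float) -> torch.Tensor:
    """Proximal operator of lam * ||.||_1: shrinks small activations to
    zero, obfuscating fine detail an inversion attack needs to reconstruct."""
    return torch.sign(x) * torch.clamp(x.abs() - lam, min=0.0)

# Client holds the early layers; server holds the rest (split learning).
client = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())
server = nn.Sequential(nn.Flatten(), nn.Linear(8 * 28 * 28, 10))
optimizer = torch.optim.SGD(
    list(client.parameters()) + list(server.parameters()), lr=0.1
)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(4, 1, 28, 28)             # stand-in for a batch of IoMT images
y = torch.randint(0, 10, (4,))            # stand-in diagnostic labels

smashed = client(x)                       # activations at the split (cut) layer
sent = soft_threshold(smashed, lam=0.05)  # proximal step on what leaves the client
logits = server(sent)                     # server finishes the forward pass
loss = loss_fn(logits, y)

optimizer.zero_grad()
loss.backward()                           # gradients flow back through the proximal map
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```

In the paper the perturbation and the decision-level fusion strategy are more elaborate; the sketch only shows where a proximal map would sit between client and server in the training loop.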