Privacy-Preserving Communication Efficient Approach for Health Data in Distributed Machine Learning
DOI: https://doi.org/10.70135/seejph.vi.3617

Abstract
In distributed machine learning systems, communication overhead and privacy concerns arise when model parameters are transmitted between nodes. Most current solutions focus on resolving communication-related problems but often fall short of effectively safeguarding privacy. Conversely, many distributed machine learning methods that do concentrate on privacy overlook the vital aspect of feature privacy, so the precise attributes of the data points used in model training may not be sufficiently protected. To address this issue, an Ensemble of Feature Reduction Model (EFRM) is proposed: a pre-processing method for feature-private communication. The technique targets both feature privacy and communication efficiency in distributed machine learning, guaranteeing privacy through data pre-processing at each node while minimizing the volume of data exchanged between nodes. The experimental findings are evaluated on the Heart Statlog and WDBC datasets using classification metrics such as accuracy and F1 score, and the impact on model training and prediction time is also assessed.
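The abstract does not specify the internals of EFRM. As a purely illustrative sketch, an "ensemble of feature reduction" pre-processing step might combine several cheap feature-ranking criteria at each node and keep only the features agreed upon by all of them, so fewer (and less identifying) attributes ever leave the node. The function name, the choice of variance and label-correlation as the two rankers, and the parameter `k` below are all assumptions, not the paper's method:

```python
import numpy as np

def ensemble_feature_reduction(X, y, k=4):
    """Hypothetical sketch: keep only features ranked in the top-k by
    BOTH variance and absolute correlation with the label. The reduced
    matrix is what a node would transmit, cutting communication cost
    and withholding the dropped attributes."""
    variances = X.var(axis=0)
    # absolute Pearson correlation of each feature column with the label
    corrs = np.abs(np.array(
        [np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])]
    ))
    top_var = set(np.argsort(variances)[-k:])
    top_corr = set(np.argsort(corrs)[-k:])
    keep = sorted(top_var & top_corr)  # consensus of the two rankers
    return X[:, keep], keep

# Toy data standing in for a node's local share of a dataset
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
y = (X[:, 0] + X[:, 3] > 0).astype(int)

X_reduced, kept = ensemble_feature_reduction(X, y, k=4)
print(X_reduced.shape, kept)
```

Because only the consensus features survive, the transmitted matrix has at most `k` columns; the intersection can also be smaller than `k`, which further reduces what is shared.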
License

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.