Researchers from Sony AI and the King Abdullah University of Science and Technology (KAUST) have developed FedP3, a solution aimed at addressing the challenges of model heterogeneity in federated learning (FL). Model heterogeneity arises when devices used in FL have different capabilities and data distributions. FedP3, which stands for Federated Personalized and Privacy-friendly network Pruning, is a framework that offers customized models for each device, taking into account each device’s memory storage, processing capabilities, and network bandwidth.
FL trains a global model on data that remains stored across many devices, so raw data never leaves the device. However, most existing FL methods deploy a single shared model to every device, ignoring differences in memory, compute, and bandwidth, which complicates practical FL deployments.
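To make the standard setup concrete, a minimal sketch of FedAvg-style aggregation is shown below: the server averages client parameter updates weighted by each client's sample count. This illustrates generic FL, not FedP3's specific protocol; the function and variable names are hypothetical.

```python
import numpy as np

def fedavg(client_updates, client_sizes):
    """Weighted average of client parameters (FedAvg-style aggregation).

    client_updates: list of dicts mapping layer name -> np.ndarray of weights.
    client_sizes: number of training samples held by each client.
    """
    total = sum(client_sizes)
    aggregated = {}
    for name in client_updates[0]:
        aggregated[name] = sum(
            (n / total) * upd[name]
            for upd, n in zip(client_updates, client_sizes)
        )
    return aggregated

# Example: two clients, one layer "w"; client 2 holds 3x more data.
updates = [{"w": np.array([1.0, 2.0])}, {"w": np.array([3.0, 4.0])}]
sizes = [1, 3]
print(fedavg(updates, sizes)["w"])  # -> [2.5 3.5], weighted toward client 2
```

Note that only model parameters cross the network; the training data itself stays on each client.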
FedP3 addresses this by personalizing models and applying pruning, which shrinks each model so it fits the device that will run it. The framework combines two kinds of pruning: global pruning, which reduces the model's size on the server side, and local pruning, which further tailors the model on each device according to its capabilities.
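One common way to realize such pruning is magnitude-based sparsification: zero out the smallest-magnitude fraction of weights. The sketch below is an illustrative assumption, not the paper's exact criterion; a server could apply it with a mild global sparsity, and each client again with a local sparsity matched to its capacity.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude `sparsity` fraction of the weights."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

w = np.array([[0.1, -0.9], [0.5, -0.05]])
# Global prune at 50%, then a (hypothetical) stricter local prune at 75%
server_side = magnitude_prune(w, 0.5)   # -> [[0., -0.9], [0.5, 0.]]
device_side = magnitude_prune(server_side, 0.75)
print(device_side)                       # -> [[0., -0.9], [0., 0.]]
```

Composing the two stages this way mirrors the framework's server-then-device flow: the device only ever receives and refines an already-reduced model.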
Additionally, FedP3 includes privacy protections to ensure sensitive data stays safe during the FL process. It accomplishes this by limiting the data shared with the server and introducing controlled noise along with model updates to provide additional privacy. The team showcased a variant of this, known as DP-FedP3, in their research.
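Adding controlled noise to model updates is typically done with a Gaussian-mechanism-style step: clip each update's norm, then add calibrated noise. The sketch below illustrates that generic pattern, not DP-FedP3's actual mechanism; the clip norm and noise scale are hypothetical placeholders.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip an update's L2 norm, then add Gaussian noise (DP-style sketch)."""
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(update)
    # Scale down so the update's L2 norm never exceeds clip_norm
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    # Noise magnitude (noise_std) governs the privacy/utility trade-off
    return clipped + rng.normal(0.0, noise_std, size=update.shape)

u = np.array([3.0, 4.0])          # norm 5, will be clipped to norm 1
print(privatize_update(u))        # clipped direction plus random noise
```

Clipping bounds any single client's influence on the aggregate, which is what makes the added noise meaningful as a privacy control.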
The researchers evaluated FedP3 in extensive experiments on benchmark datasets including CIFAR-10/100 and FashionMNIST. Their findings showed that FedP3 significantly reduces communication costs while performing comparably to standard FL methods. FedP3 also scaled to larger models, such as ResNet18, in heterogeneous FL environments.
In conclusion, by integrating personalized models, dual pruning strategies, and privacy protections, FedP3 offers an efficient and privacy-aware approach to FL, directly addressing the challenges of model heterogeneity. The framework significantly lowers communication costs while maintaining comparable performance across datasets and model architectures.