
How Can Differential Privacy and Federated Learning Help Keep Your Data Secure? Examining Potential Vulnerabilities of Machine Learning Systems

With the advancement of technology, data has become increasingly accessible to the information technology field. This has allowed for the creation of sophisticated Artificial Intelligence (AI) solutions that utilize vast amounts of data. However, the production and collection of user-level data have also raised significant privacy and security concerns because of the granularity of the information involved. To address this, federated learning has attracted growing interest from the research community in recent years. This learning paradigm distributes the computation and assigns each client to train a local model independently on a non-shareable private data set, making it possible to train deep learning models without gathering potentially sensitive data centrally in a single computing unit, as illustrated in the sketch below.
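For readers unfamiliar with the paradigm, the following is a minimal sketch of one FedAvg-style training round under the setup described above. It is illustrative only: the model, data loaders, loss, and hyperparameters are hypothetical placeholders, not the configuration studied in the paper.

```python
# Minimal FedAvg-style sketch: each client trains locally on private data,
# and only model weights are sent back to the server for averaging.
# Assumes a simple model whose parameters are all floating-point tensors.
import copy
import torch


def local_update(global_model, private_loader, epochs=1, lr=0.01):
    """Client-side step: train a copy of the global model on private data."""
    model = copy.deepcopy(global_model)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        for features, ratings in private_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(features), ratings)
            loss.backward()
            optimizer.step()
    return model.state_dict()  # only weights leave the client, never raw data


def federated_round(global_model, client_loaders):
    """Server-side step: average the clients' weight updates."""
    client_states = [local_update(global_model, loader) for loader in client_loaders]
    averaged = {
        key: torch.stack([state[key] for state in client_states]).mean(dim=0)
        for key in client_states[0]
    }
    global_model.load_state_dict(averaged)
    return global_model
```

Because the server only ever sees weight updates, sensitive ratings and social connections stay on each client's device, which is exactly the property that makes this paradigm attractive for privacy-sensitive recommendation systems.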

Though this has been a great step forward in safeguarding user data, researchers from the University of Pavia, the University of Padua, Radboud University, and Delft University of Technology have identified a major security flaw in such machine learning systems. They found that while socially collaborative solutions can enhance the functionality of the systems under consideration and support robust privacy-preserving strategies, the same paradigm can be maliciously abused to mount extremely potent cyberattacks. The decentralized nature of federated learning makes it an appealing target environment for attackers.

To demonstrate this vulnerability, the researchers propose an artificial intelligence-driven attack strategy for a scenario in which a social recommendation system is equipped with privacy safeguards. The attack operates in two modes: a false rating injection mode (Backdoor Mode) and a convergence-inhibition mode (Adversarial Mode). Using Mean Absolute Error, Root Mean Squared Error, and a newly introduced metric called Favorable Case Rate, the team evaluated the effectiveness of the attack. The results were striking: on average, the attack degrades the performance of the target GNN model by 60%, while in Backdoor Mode it produces fully functional backdoors in roughly 93% of cases.
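For reference, the two standard error metrics mentioned above can be computed as in the short sketch below. The Favorable Case Rate is the paper's own newly defined metric, so its formula is not reproduced here, and the example ratings are invented purely for illustration.

```python
# Standard rating-prediction error metrics used in the evaluation.
import numpy as np


def mean_absolute_error(true_ratings, predicted_ratings):
    """Average absolute deviation between predicted and true ratings."""
    true_ratings = np.asarray(true_ratings, dtype=float)
    predicted_ratings = np.asarray(predicted_ratings, dtype=float)
    return np.mean(np.abs(true_ratings - predicted_ratings))


def root_mean_squared_error(true_ratings, predicted_ratings):
    """Square root of the mean squared deviation; penalizes large errors more."""
    true_ratings = np.asarray(true_ratings, dtype=float)
    predicted_ratings = np.asarray(predicted_ratings, dtype=float)
    return np.sqrt(np.mean((true_ratings - predicted_ratings) ** 2))


# A convergence-inhibition attack is successful if it drives these errors up
# on the target recommender's held-out ratings.
print(mean_absolute_error([4, 3, 5], [3.5, 3.0, 2.0]))      # ~1.17
print(root_mean_squared_error([4, 3, 5], [3.5, 3.0, 2.0]))  # ~1.76
```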

The research team intends to extend this work by adapting the proposed attack strategy to other potential scenarios to demonstrate its general applicability. They also plan to develop upgrades to current defenses that address the discovered weakness and to include vertical federated learning in their research.

The discovery of this security flaw is an important result for the federated learning and privacy-preserving community. Building on this research, the community can work together to address the discovered weaknesses and to develop even more robust privacy-preserving strategies. Further details are available in the Paper and on the project's GitHub.
