
Introducing SecFormer: A Machine Learning Optimization Framework Aimed at Balancing Privacy and Efficiency in Large Language Models

Excitement abounds as innovations in artificial intelligence (AI) continue to unlock powerful capabilities in large language models. Recent research into the Model-as-a-Service (MaaS) paradigm, however, has raised privacy concerns, particularly when models handle sensitive data. To address this challenge, researchers have turned to Secure Multi-Party Computation (SMPC), a cryptographic technique that preserves the privacy of both inference data and model parameters. Yet applying SMPC to Privacy-Preserving Inference (PPI) for large language models, particularly those based on the Transformer architecture, often incurs severe performance overhead, largely because nonlinear operations such as Softmax and GeLU are expensive to compute on secret-shared data.

Fortunately, a team of researchers has introduced SecFormer, an optimization framework designed to strike a better balance between performance and efficiency in PPI for Transformer models. The framework replaces high-overhead operations with SMPC-friendly alternatives, for example substituting Softmax with a combination of multiplication and division operations, and uses knowledge distillation to further refine the Transformer model for compatibility with SMPC. The team has also developed a privacy-preserving GeLU algorithm based on segmented polynomials, along with efficient privacy-preserving algorithms for LayerNorm and Softmax.
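
To make the idea concrete, here is a minimal NumPy sketch of both substitutions. It is an illustration under stated assumptions, not SecFormer's published implementation: the squared-shift softmax substitute follows the spirit of earlier SMPC-friendly Transformer work, and the shift constant `c`, the [-4, 4] segment boundaries, the degree-6 polynomial, and the helper names `smpc_friendly_softmax` and `segmented_poly_gelu` are all hypothetical. A real deployment would also evaluate these operations on secret shares inside an SMPC protocol rather than on plaintext arrays.

```python
import numpy as np
from scipy.special import erf

# Illustrative sketch only: SecFormer's exact substitutes are not given in
# this article, so the shift constant, segment boundaries, and polynomial
# degree below are hypothetical stand-ins for the general idea.

def smpc_friendly_softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    """Softmax substitute built from multiplication and division only.

    Squaring a shifted input stands in for exp(x), which is expensive to
    evaluate under secure multi-party computation.
    """
    c = 5.0                                     # hypothetical shift constant
    q = (x + c) ** 2                            # multiplication replaces exp
    return q / q.sum(axis=axis, keepdims=True)  # division normalizes


# Fit a low-degree polynomial to exact GeLU on a central segment (done once,
# offline); these coefficients are illustrative, not SecFormer's values.
_xs = np.linspace(-4.0, 4.0, 2001)
_gelu_exact = 0.5 * _xs * (1.0 + erf(_xs / np.sqrt(2.0)))
_COEFFS = np.polyfit(_xs, _gelu_exact, 6)

def segmented_poly_gelu(x: np.ndarray) -> np.ndarray:
    """Piecewise (segmented) polynomial approximation of GeLU.

    GeLU is nearly 0 for large negative inputs and nearly x for large
    positive inputs, so a polynomial is only needed on the middle segment.
    """
    return np.select(
        [x < -4.0, x > 4.0],
        [np.zeros_like(x), x],           # saturated outer segments
        default=np.polyval(_COEFFS, x),  # polynomial middle segment
    )


if __name__ == "__main__":
    scores = np.random.randn(2, 8)
    print(smpc_friendly_softmax(scores).sum(axis=-1))  # rows sum to 1
    print(segmented_poly_gelu(np.array([-6.0, 0.0, 6.0])))
```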

Evaluation on the GLUE benchmark with Transformer models such as BERT-Base and BERT-Large (reported in Figure 1 and Table 2 of the paper) demonstrates that SecFormer outperforms state-of-the-art approaches in both performance and efficiency. It delivers average performance improvements of 5.6% and 24.2% for BERT-Base and BERT-Large, respectively, while comparisons with existing frameworks based on model-design and SMPC-protocol optimizations show PPI speedups of 3.4× and 3.2× at comparable performance levels.

In summary, SecFormer presents a scalable and effective solution for enhancing large language models while meeting stringent privacy standards in complex linguistic landscapes. All credit for this research goes to the researchers of this project.
