Health-monitoring apps that use machine learning can help people manage chronic diseases and fitness goals, but they can also be slow and energy-hungry, largely because machine-learning models must be shuttled between a smartphone and a central memory server. Machine-learning accelerators are often used to streamline these computations, but they are vulnerable to attacks that can expose sensitive information.
Researchers from MIT and the MIT-IBM Watson AI Lab have developed a solution to this problem: a machine-learning accelerator that is resistant to the most common types of attacks. The new chip can keep sensitive data such as health records and financial information private while still allowing AI models to run efficiently on devices.
The research team made several optimizations to maintain strong security without significantly slowing down the device. Moreover, the added security measures do not affect the accuracy of computations. This innovation could be useful for demanding AI applications such as augmented and virtual reality or autonomous driving.
The chip may make a device somewhat more expensive and less energy-efficient, but Maitreyi Ashok, an electrical engineering and computer science graduate student at MIT, believes that is a small price to pay for security. She emphasized the importance of building security in at the design stage, noting that retrofitting it after the fact can be prohibitively expensive.
The researchers designed their machine-learning accelerator to resist two common types of hacks: side-channel attacks and bus-probing attacks. To blunt side-channel attacks, they split data into random pieces that must be recombined before they reveal anything meaningful. To prevent bus-probing attacks, they used a lightweight cipher that encrypts the model stored in off-chip memory. The decryption key is generated directly on the chip, so it never has to move back and forth.
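The split-into-random-pieces idea is similar in spirit to Boolean masking, a standard side-channel countermeasure. The sketch below is a minimal software illustration of that concept only, not the chip's actual circuitry: the function names, the three-share split, and the 32-bit example value are assumptions made for illustration, and the real accelerator applies this kind of masking in hardware to intermediate values during computation.

```python
import secrets

def split_into_shares(value: int, num_shares: int = 3, bits: int = 32) -> list[int]:
    """Split a secret value into random shares whose XOR equals the value.

    Any subset of fewer than num_shares shares is indistinguishable from
    uniform random noise, so observing a single share (for example, through
    a power-consumption trace) reveals nothing about the secret.
    """
    mask = (1 << bits) - 1
    shares = [secrets.randbits(bits) for _ in range(num_shares - 1)]
    last = value & mask
    for s in shares:
        last ^= s          # fold each random share into the final share
    shares.append(last)
    return shares

def recombine(shares: list[int]) -> int:
    """XOR all shares together to recover the original value."""
    out = 0
    for s in shares:
        out ^= s
    return out

if __name__ == "__main__":
    # Hypothetical example: the raw bits of a single model weight.
    secret_weight = 0x3F8CCCCD
    shares = split_into_shares(secret_weight)
    assert recombine(shares) == secret_weight
    print("shares:", [hex(s) for s in shares])
```

The key property is that every individual share is freshly random, so the information an attacker can measure from any one piece is statistically independent of the protected value; only the full combination recreates it.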
The team tested the chip's defenses by taking on the role of attackers. Despite millions of attempts, they could not reconstruct any real information or extract any part of the model or dataset.
Moving forward, the team will look at ways to reduce the chip's energy consumption and size to make it easier to deploy at scale. The research is funded, in part, by the MIT-IBM Watson AI Lab, the National Science Foundation, and a MathWorks Engineering Fellowship.