Health-monitoring apps that help people manage chronic diseases or track fitness goals rely on large machine-learning models, which are often shuttled between a user's smartphone and a central memory server. This back-and-forth slows the app and drains the device's battery. Machine-learning accelerators can speed up the process, but they are vulnerable to attacks that can steal sensitive information.
To enhance security, researchers from MIT and the MIT-IBM Watson AI Lab have created a machine-learning accelerator that resists the two most common types of attacks. The chip can keep a user's health records, financial information, and other sensitive data private while still allowing machine-learning models to run efficiently on devices. It causes only a slight slowdown, providing improved privacy without compromising the accuracy of computations. This technology could especially benefit demanding AI applications such as augmented reality (AR), virtual reality (VR), and autonomous driving.
The research team accomplished this through multiple hardware optimizations, at the cost of a somewhat more expensive and less energy-efficient chip. Maitreyi Ashok, the study's lead author, argues that the added security justifies these drawbacks, emphasizing the importance of building systems with security as a fundamental element rather than an afterthought.
The researchers targeted a type of accelerator known as digital in-memory compute (IMC). Digital IMC chips speed up processing by performing millions of operations simultaneously inside memory, but that same complexity makes security attacks hard to prevent. To address this, the team took a three-pronged approach. First, they split the data into random pieces, so that measurements of the chip's power or electromagnetic emissions reveal nothing about the underlying values, deterring side-channel attacks. Second, they prevented bus-probing attacks by using a lightweight cipher to encrypt the model stored in off-chip memory. Third, they strengthened the system by generating the decryption key directly on the chip, derived from random physical variations introduced during manufacturing, so the key never needs to be stored or transmitted.
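The first technique, splitting data into random pieces, is a form of additive secret sharing (masking): each value is replaced by random shares that sum to it, so any single share, and hence any single power trace, is uniformly random. The sketch below illustrates the idea in Python under assumed parameters (a 16-bit word width, share count, and function names chosen for illustration; the chip's actual datapath and masking scheme are not specified in the article).

```python
import secrets

MOD = 2**16  # assumed word width for illustration


def split(value, n_shares=2):
    """Split a secret value into random additive shares (mod MOD).

    Any subset of fewer than n_shares shares is uniformly random,
    so observing one share leaks nothing about the hidden value.
    """
    shares = [secrets.randbelow(MOD) for _ in range(n_shares - 1)]
    last = (value - sum(shares)) % MOD
    return shares + [last]


def recombine(shares):
    """Recover the secret by summing all shares mod MOD."""
    return sum(shares) % MOD


def scale_shares(shares, c):
    """Linear operations (like the multiply steps in an IMC array)
    can be applied share-wise: scaling each share scales the
    hidden value, without ever recombining it."""
    return [(c * s) % MOD for s in shares]


x = 1234
shares = split(x, 3)
assert recombine(shares) == x
assert recombine(scale_shares(shares, 7)) == (7 * x) % MOD
```

Because the sharing is linear, the accelerator can operate on shares independently and only recombine at the end, which is what makes masking compatible with the parallel multiply-accumulate structure of an IMC array.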
The research team tested the chip's effectiveness by attempting to extract secret information through side-channel and bus-probing attacks. Millions of trials yielded no real information, whereas only about 5,000 samples were needed to recover information from an unprotected chip. The added security came with two costs: reduced energy efficiency and a larger chip area, which raises fabrication cost. The team is now exploring ways to shrink the chip's energy consumption and size to make it more practical and affordable to deploy. This research was funded in part by the MIT-IBM Watson AI Lab, the National Science Foundation, and a MathWorks Engineering Fellowship.