Researchers from MIT and the MIT-IBM Watson AI Lab have designed a machine-learning accelerator that can improve the security of health-monitoring apps. Such apps can be slow and inefficient because the large machine-learning models they rely on must be shuttled between a smartphone and a central memory server. Instead, the team developed a chip that speeds up computation while protecting sensitive data from common types of attacks.

Machine-learning accelerators like this one compute faster by eliminating the need for large data transfers, but they have a weakness: they are vulnerable to hackers looking to steal sensitive information. MIT's chip provides strong security without significantly slowing the device, making it a valuable addition to demanding AI applications such as virtual reality, augmented reality, and autonomous driving. Adding the chip might make a device slightly more expensive and less energy-efficient, but according to lead author Maitreyi Ashok, an EECS graduate student at MIT, that is a fair price to pay for enhanced security.

The team focused on a type of machine-learning accelerator known as digital in-memory compute (IMC), which performs computations inside a device’s memory, where pieces of a machine-learning model are stored after being transferred from a central server. However, IMCs are vulnerable to side-channel attacks, where a hacker tracks the chip’s power usage to reverse-engineer data, and bus-probing attacks, where bits of the model and dataset can be stolen by probing the communication between the accelerator and the off-chip memory.

The team employed a three-pronged approach to defend against these attacks. First, they split the data in the IMC into random pieces to thwart side-channel attacks. Second, they used a lightweight cipher to encrypt the model stored in off-chip memory, preventing bus-probing attacks. Lastly, to further enhance security, they generated the key that decrypts the cipher directly on the chip, rather than transferring it back and forth along with the model.
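The article does not detail the exact splitting scheme, but the first prong resembles Boolean masking (secret sharing): a value is broken into random shares whose XOR recombines to the original, so any single share, and the power drawn while processing it, looks like random noise. Here is a minimal sketch of that idea; the function names and share count are illustrative, not taken from the paper.

```python
import secrets

def mask(value: int, n_shares: int = 2, width: int = 32) -> list[int]:
    """Split a value into random shares whose XOR equals the value.

    Each share on its own is uniformly random, so power measurements
    tied to a single share reveal nothing about the original data.
    """
    shares = [secrets.randbits(width) for _ in range(n_shares - 1)]
    last = value
    for s in shares:
        last ^= s  # fold each random share into the final share
    shares.append(last)
    return shares

def unmask(shares: list[int]) -> int:
    """Recombine shares by XOR to recover the original value."""
    out = 0
    for s in shares:
        out ^= s
    return out
```

In hardware, the accelerator would operate on the shares separately and only recombine results at the end, which is what decorrelates the chip's power trace from the secret model weights.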

During security testing, the researchers mounted side-channel and bus-probing attacks against their own chip but failed to extract any meaningful information. They acknowledge that these protections reduce the accelerator's energy efficiency and would likely raise its cost due to the larger chip size. Future work will explore ways of reducing these overheads, potentially making the chip easier to deploy and less expensive. The research was funded in part by the MIT-IBM Watson AI Lab, the National Science Foundation, and the MathWorks Engineering Fellowship.
