
Researchers from MIT and the MIT-IBM Watson AI Lab have developed a novel machine-learning accelerator that can protect sensitive data, such as health records, from two common types of cybersecurity threats while efficiently running large AI models. This advancement could make a noticeable impact on demanding AI applications such as augmented and virtual reality, autonomous driving, and health-monitoring mobile apps.

At present, the machine-learning models powering health-monitoring apps usually have to shuttle data between a smartphone and a central memory server. To speed up this process and reduce the volume of data shared, engineers have started using machine-learning accelerators. The catch, however, is that these accelerators can be vulnerable to cyberattacks that compromise secret and critical information.

Recognizing the need to balance quicker computation against data safety, the team developed a machine-learning accelerator chip that substantially mitigates these risks. The added protection makes the end device somewhat more expensive and slightly less energy-efficient, but according to the researchers, that is a reasonable price to pay for the added level of security.

The research focuses on a type of machine-learning accelerator called a digital in-memory compute (IMC) chip. An IMC chip performs computations within a device's memory, where chunks of a machine-learning model are stored after being moved from a central server, reducing back-and-forth data movement. However, these IMC chips are vulnerable to hackers.
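To make the data-movement argument concrete, here is a minimal Python sketch of the general in-memory-compute idea. It is an illustration only, not the chip's actual circuitry; the ToyIMCTile class and its dimensions are invented for the example.

```python
import numpy as np

# Toy illustration of in-memory compute: a chunk of the model's weights
# stays resident next to the compute logic, so after the one-time load
# only the small activation vectors cross the external bus.
class ToyIMCTile:
    def __init__(self, weights: np.ndarray):
        # Loaded once from the central server, then kept in place.
        self.weights = weights

    def multiply_accumulate(self, activations: np.ndarray) -> np.ndarray:
        # The matrix-vector product happens where the weights live;
        # only `activations` and the result move over the bus.
        return self.weights @ activations

tile = ToyIMCTile(np.random.randn(64, 128))   # hypothetical 64x128 weight chunk
output = tile.multiply_accumulate(np.random.randn(128))
```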

The team at MIT and the MIT-IBM Watson AI Lab approached this security issue in three ways. First, they developed a mechanism that splits the data in the IMC into multiple random pieces, so a potential side-channel attack cannot deduce the original information from any one of them. Second, they used a lightweight cipher to stop bus-probing attacks by encrypting the model stored in off-chip memory. Lastly, taking the safety measures a step further, they generated the unique key that decrypts the cipher directly on the chip, making use of random variations introduced during its fabrication.
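The first countermeasure follows the same pattern as additive secret sharing, a standard masking technique. Below is a minimal Python sketch of that general idea, assuming a 16-bit word width; the function names and share count are hypothetical, and the chip's actual splitting scheme may differ.

```python
import secrets

MOD = 2**16  # assumed 16-bit word width, chosen only for illustration

def split_into_shares(value: int, n_shares: int = 3) -> list[int]:
    """Split a secret value into random additive shares modulo MOD."""
    shares = [secrets.randbelow(MOD) for _ in range(n_shares - 1)]
    # The final share is chosen so all shares sum back to the value;
    # any single share on its own is uniformly random and reveals nothing.
    shares.append((value - sum(shares)) % MOD)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Only the complete set of shares recovers the original value."""
    return sum(shares) % MOD

secret_word = 0x1F3A
shares = split_into_shares(secret_word)
assert reconstruct(shares) == secret_word
```

The security property is that an attacker observing a side channel during computation on any one share sees only uniform noise; the secret is recoverable only by combining every share.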

The developed chip was tested against side-channel and bus-probing attacks, with successful outcomes: even after millions of attempts, neither the original data nor pieces of the model or dataset could be reconstructed or extracted, and the cipher itself was never breached. It is worth noting, though, that these security measures did reduce the accelerator's energy efficiency and increase the chip's size, raising its cost of production.

With this research, the team has highlighted an essential aspect of edge-device design: ensuring secure operation. It is a significant step in the development of safe, large-scale AI models that could be incorporated into a variety of future use cases. The researchers will continue to explore methods to make the device smaller and less energy-consuming, which might involve some trade-offs. But as security becomes progressively more important in an ever-connected world, that seems a path worth treading.
