Smartphone health-monitoring apps can be invaluable for managing chronic diseases or setting fitness goals. However, these applications often suffer from slowdowns and energy inefficiencies due to the large machine-learning models they use. These models are frequently swapped between a smartphone and a central memory server, hampering performance.

One solution engineers have pursued is the use of hardware to reduce the need for such intensive data transfer. Machine-learning accelerators can streamline these computations, but they can also be vulnerable to attacks that aim to steal sensitive information.

To address these security risks, researchers from MIT and the MIT-IBM Watson AI Lab have developed a machine-learning accelerator that can resist two common types of attacks. The chip is designed to protect sensitive user data, such as health records and financial information, while still allowing AI models to run efficiently on devices. Although it slows devices somewhat and adds to their cost and power consumption, the team believes the added security outweighs these drawbacks.

The chip employs several optimizations to ensure robust security. These measures do not impact the accuracy of computations and are particularly beneficial for demanding AI applications like augmented and virtual reality or autonomous driving.

The team focused on a type of machine-learning accelerator called digital in-memory compute (IMC). This type of chip performs computations within a device’s memory, where it also stores the parts of a machine-learning model that have been moved over from a central server. However, IMC chips can be exploited by hackers using side-channel and bus-probing attacks.
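To make the idea concrete, the sketch below models an IMC tile in Python: weights are loaded once and stay resident, and multiply-accumulate work happens next to them, so only small activation vectors and results cross the memory boundary. The `IMCTile` class and its interface are illustrative assumptions, not the architecture of the researchers' chip.

```python
import numpy as np

class IMCTile:
    """Conceptual model of a digital in-memory compute (IMC) tile.

    Hypothetical simplification: the weights for one neural-network layer
    are stored inside the tile, and multiply-accumulate operations happen
    where the weights live, so the weights never travel back over a bus.
    """

    def __init__(self, weights: np.ndarray):
        # Weights stay resident in the tile after being loaded once
        # (e.g., after part of a model is fetched from a central server).
        self.weights = weights  # shape: (out_features, in_features)

    def matvec(self, activations: np.ndarray) -> np.ndarray:
        # Compute happens "in place" next to the stored weights; only the
        # small activation vector and the result leave the tile.
        return self.weights @ activations


# Example: a tiny layer evaluated without re-reading weights from off-chip memory.
tile = IMCTile(np.random.randn(4, 8))
output = tile.matvec(np.random.randn(8))
print(output.shape)  # (4,)
```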

The researchers used a three-pronged approach to repel such attacks. First, they split data into random pieces, so that a side-channel attack cannot reconstruct the true information. Second, they used a lightweight cipher to encrypt the model stored in off-chip memory, preventing bus-probing attacks. Third, the key that decrypts the cipher is generated directly on the chip, eliminating the need to transfer it back and forth.
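A rough Python sketch of these three ideas follows: additive secret sharing stands in for the random data splitting, a toy XOR stream cipher stands in for the lightweight off-chip cipher, and a hash of a device-unique secret stands in for on-chip key generation. The function names and the specific primitives are illustrative assumptions; the researchers' actual masking scheme, cipher, and key-generation circuit are not reproduced here.

```python
import os
import hashlib


def split_into_shares(value: int, num_shares: int = 2, modulus: int = 2**16) -> list:
    """Additive secret sharing: each share looks random on its own, so an
    observer measuring a single share learns nothing about the value.
    (Illustrative stand-in for the data-splitting idea, not the chip's scheme.)"""
    shares = [int.from_bytes(os.urandom(2), "big") % modulus for _ in range(num_shares - 1)]
    shares.append((value - sum(shares)) % modulus)
    return shares


def reconstruct(shares: list, modulus: int = 2**16) -> int:
    # Only a party holding every share can recover the original value.
    return sum(shares) % modulus


def onchip_key(device_secret: bytes) -> bytes:
    """Placeholder for a key derived on the chip itself (e.g., from a
    device-unique secret), so the key never crosses the off-chip bus."""
    return hashlib.sha256(device_secret).digest()[:16]


def xor_stream_encrypt(model_bytes: bytes, key: bytes) -> bytes:
    """Toy stand-in for a lightweight cipher protecting model weights stored
    off-chip against bus probing. A real design would use a vetted
    lightweight cipher, not a repeating-key XOR."""
    keystream = (key * (len(model_bytes) // len(key) + 1))[: len(model_bytes)]
    return bytes(a ^ b for a, b in zip(model_bytes, keystream))


# Example: split a value into random shares, then encrypt model bytes
# with a key that never leaves the (simulated) chip.
shares = split_into_shares(1234)
assert reconstruct(shares) == 1234

key = onchip_key(b"device-unique-secret")            # hypothetical secret
ciphertext = xor_stream_encrypt(b"model weights...", key)
plaintext = xor_stream_encrypt(ciphertext, key)      # XOR is its own inverse
assert plaintext == b"model weights..."
```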

According to Ashok, one of the researchers, the chip withstood millions of attack attempts during testing without yielding any information. In contrast, it took only about 5,000 samples to extract information from an unprotected chip.

In the future, the team plans to reduce the chip's energy consumption and size, and to explore methods that make it easier to scale.

This research, funded in part by the MIT-IBM Watson AI Lab, the National Science Foundation, and a MathWorks Engineering Fellowship, underscores the importance of prioritizing security from the earliest stages of a system's design.
