Researchers from MIT and the MIT-IBM Watson AI Lab have developed a machine-learning accelerator that enhances the security of health-tracking apps. These apps can be slow and energy-hungry because the large models that power them must be shuttled between the phone and a central server. Machine-learning accelerators are used to speed such apps up, but they are vulnerable to attacks that can steal secret information.

The researchers’ new chip, which resists the two most common kinds of attacks, keeps sensitive data private while running complex AI models efficiently on devices. The security measures only slightly slow the device and do not affect the accuracy of computations. The chip would be particularly beneficial for demanding AI applications such as augmented and virtual reality or autonomous driving.

Though it makes the chip slightly more expensive and less energy-efficient, the added layer of security is a worthwhile trade-off, according to lead author Maitreyi Ashok, an EECS graduate student at MIT. The work underscores the importance of designing systems with security in mind from the outset.

The researchers’ design builds on a type of machine-learning accelerator called digital in-memory compute, or IMC, which performs computations inside a device’s memory. The whole model is too large to store on the device, but by breaking it into pieces and reusing those pieces wherever possible, IMC chips limit the amount of data that must be moved back and forth. In the process, however, they become exposed to hackers who can reverse-engineer that data.
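
To make the reuse idea concrete, here is a minimal, hardware-agnostic sketch of processing a model in tiles, with each tile reused for an entire partial product before the next one is loaded. The tile size, names, and matrix shapes are illustrative assumptions, not details of the MIT chip.

```python
import numpy as np

# Toy illustration of the data-movement saving behind in-memory compute:
# the weight matrix is fetched in tiles, and each tile is reused for
# every input it touches before the next tile is loaded.

TILE = 4  # illustrative tile size, not from the MIT design

def tiled_matvec(weights: np.ndarray, x: np.ndarray) -> np.ndarray:
    rows, cols = weights.shape
    y = np.zeros(rows)
    for r in range(0, rows, TILE):
        for c in range(0, cols, TILE):
            # "Load" one piece of the model (the expensive transfer)...
            tile = weights[r:r + TILE, c:c + TILE]
            # ...then reuse it for the whole partial product before moving on.
            y[r:r + TILE] += tile @ x[c:c + TILE]
    return y

rng = np.random.default_rng(0)
W, x = rng.normal(size=(8, 8)), rng.normal(size=8)
assert np.allclose(tiled_matvec(W, x), W @ x)
```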

The research team took a three-pronged approach to counter both kinds of attack: side-channel attacks, in which a hacker monitors the chip’s power consumption and uses statistical techniques to reverse-engineer data, and bus-probing attacks, in which a hacker steals bits of the model and dataset by probing the traffic between the accelerator and off-chip memory.

First, they split the data in the IMC into random pieces; for instance, a zero bit might be split into three bits that still equal zero after a logical operation. Because the IMC never computes with all the pieces in the same operation, a side-channel attack cannot reconstruct the genuine information. Such masking ordinarily consumes large numbers of random bits, but the researchers simplified the computations so that splitting the data requires no random bits at all.
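
For intuition, here is a sketch of the textbook version of this masking, where a secret byte is split into shares whose XOR recovers the value. Note that this conventional scheme consumes random bits; the MIT team’s key trick of eliminating that randomness is not captured here.

```python
import secrets

# Conventional Boolean masking: a secret byte is split into three shares
# whose XOR recovers the value. No single share (or power trace of an
# operation on one share) reveals the secret on its own.

def split(secret: int, n_shares: int = 3) -> list[int]:
    # Unlike the MIT chip's scheme, this textbook split uses random bits.
    shares = [secrets.randbits(8) for _ in range(n_shares - 1)]
    last = secret
    for s in shares:
        last ^= s
    return shares + [last]

def combine(shares: list[int]) -> int:
    value = 0
    for s in shares:
        value ^= s
    return value

shares = split(0xA7)  # each share individually looks random
assert combine(shares) == 0xA7
```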

Second, they thwarted bus-probing attacks with a lightweight cipher that encrypts the model stored in off-chip memory. Encrypted pieces of the model are decrypted on the chip only when necessary.
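
The decrypt-on-demand idea might look roughly like the following sketch, in which the model lives encrypted in off-chip memory and each chunk is decrypted only when the accelerator needs it. The keystream here (SHA-256 in counter mode) is a stand-in for illustration only; the article does not name the actual lightweight cipher, and the chunk size and key are hypothetical.

```python
import hashlib

CHUNK = 32  # bytes per model chunk (illustrative)

def keystream(key: bytes, chunk_index: int) -> bytes:
    # Stand-in keystream: hash of key plus chunk counter,
    # NOT the paper's actual lightweight cipher.
    return hashlib.sha256(key + chunk_index.to_bytes(8, "big")).digest()

def crypt_chunk(key: bytes, chunk_index: int, data: bytes) -> bytes:
    ks = keystream(key, chunk_index)
    return bytes(b ^ k for b, k in zip(data, ks))  # XOR both encrypts and decrypts

key = b"on-chip key, never leaves the die"  # see the PUF sketch below
model = bytes(range(64))  # pretend model weights, two chunks
encrypted = [crypt_chunk(key, i, model[i * CHUNK:(i + 1) * CHUNK]) for i in range(2)]

# Later, decrypt only the chunk the current computation touches:
assert crypt_chunk(key, 1, encrypted[1]) == model[CHUNK:2 * CHUNK]
```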

Finally, they generated the key that decrypts the cipher directly on the chip, rather than moving it back and forth with the model. The key is derived from random manufacturing variations in the chip’s memory cells, an approach known as a physically unclonable function, which let the team generate it with less computation.
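
The article does not detail the key-generation circuit; as a loose illustration, the following simulates an SRAM-style physically unclonable function in which each cell’s manufacturing bias fixes its power-up value, and hashing those values yields a device-unique key that never leaves the chip. All names, sizes, and the cell model are hypothetical.

```python
import hashlib
import random

N_CELLS = 256  # illustrative number of memory cells

def make_chip(seed: int) -> list[float]:
    # Each cell's bias toward 0 or 1 is fixed at "manufacture" time.
    rng = random.Random(seed)
    return [rng.random() for _ in range(N_CELLS)]

def power_up(cell_biases: list[float]) -> bytes:
    # Read each cell's preferred power-up state (noise-free for simplicity;
    # real PUFs need error correction to handle flaky cells).
    bits = [1 if bias > 0.5 else 0 for bias in cell_biases]
    return bytes(sum(bit << i for i, bit in enumerate(bits[j:j + 8]))
                 for j in range(0, N_CELLS, 8))

def derive_key(cell_biases: list[float]) -> bytes:
    return hashlib.sha256(power_up(cell_biases)).digest()

chip = make_chip(seed=42)
assert derive_key(chip) == derive_key(chip)           # stable per device
assert derive_key(chip) != derive_key(make_chip(7))   # unique across devices
```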

In testing, the researchers tried to hack their own chip to steal secret information. Even after millions of attempts, they could not reconstruct any real information or extract pieces of the model or dataset.

The team is now exploring ways to reduce the chip’s energy consumption and size, which would make it easier and more cost-effective to implement at scale.
