
Researchers from MIT and the MIT-IBM Watson AI Lab have developed a machine-learning accelerator capable of maintaining user privacy while running large AI models efficiently on devices. Although it might increase device cost and reduce energy efficiency, lead author Maitreyi Ashok, an electrical engineering and computer science (EECS) graduate student at MIT, believes these are acceptable trade-offs for enhanced security. The accelerator can protect a user’s sensitive information, such as health records and financial data.

The research targeted a type of machine-learning accelerator known as digital in-memory compute (IMC), which executes computations within a device's memory, where portions of a machine-learning model are stored after being transferred from a central server. These IMC chips reduce data movement, but that same design leaves them vulnerable to the side-channel and bus-probing attacks described below.
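As a rough software analogy of the IMC idea (the actual design is a hardware circuit; the class and method names below are illustrative, not from the paper), the model's weights stay resident in the memory array while computation happens where they live, so only the small input and output vectors cross the chip's bus:

```python
import numpy as np

class ToyIMCArray:
    """A software stand-in for a digital in-memory compute (IMC) array.

    The weights are written in once (as if downloaded from a central
    server) and stay put; each query moves only the small input and
    output vectors, not the full weight matrix, mimicking how IMC
    reduces data movement.
    """

    def __init__(self, weights: np.ndarray):
        self._weights = weights  # stays resident "in memory"

    def multiply_accumulate(self, inputs: np.ndarray) -> np.ndarray:
        # In real IMC hardware this multiply-accumulate happens inside
        # the memory array itself; here we just simulate the result.
        return self._weights @ inputs

# One layer of a model, loaded once and kept on the device
layer = ToyIMCArray(np.random.randn(4, 8))
activations = layer.multiply_accumulate(np.random.randn(8))
print(activations.shape)  # (4,)
```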

To combat this vulnerability, the team used a three-pronged approach. First, they applied a security measure that splits the data in the IMC into random pieces, so a side-channel attack cannot reconstruct the real information (a software sketch of this idea follows below). Second, they thwarted bus-probing attacks by using a lightweight cipher that encrypts the model stored in off-chip memory, decrypting only the portions of the model needed at a given moment. Third, to enhance security further, they generated the key that decrypts the cipher directly on the chip, rather than transferring it back and forth alongside the model.
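To make the first of those ideas concrete: one common way to split a value into random pieces is XOR-based secret sharing, where every share on its own is uniformly random. The sketch below illustrates that general principle in software; it is not the hardware masking circuit the researchers built, and the function names are hypothetical:

```python
import secrets

def split_into_shares(value: int, num_shares: int, bits: int = 8) -> list[int]:
    """Split `value` into `num_shares` random pieces whose XOR equals `value`.

    Each share on its own is uniformly random, so observing any single
    share (e.g., through a power side channel) reveals nothing about
    the original value.
    """
    mask = (1 << bits) - 1
    shares = [secrets.randbits(bits) for _ in range(num_shares - 1)]
    last = value & mask
    for s in shares:
        last ^= s          # final share makes the XOR of all shares equal `value`
    shares.append(last)
    return shares

def recombine(shares: list[int]) -> int:
    """XOR the shares back together to recover the original value."""
    value = 0
    for s in shares:
        value ^= s
    return value

secret = 0xA7  # a stand-in for one byte of sensitive user data
shares = split_into_shares(secret, num_shares=3)
assert recombine(shares) == secret
```

An attacker who captures any single share learns nothing useful; only the combination of all the shares recovers the original value, which is why splitting data into random pieces frustrates side-channel reconstruction.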

Although its security measures slightly reduce the accelerator's energy efficiency and require a larger chip area, which could increase fabrication costs, the chip's creators view the security benefits as outweighing these drawbacks. Future research could also reduce the chip's energy consumption and size, which would simplify large-scale implementation.

To test their chip, the researchers played the role of hackers: after millions of attempts, they were unable to reconstruct any genuine information, extract portions of the model or dataset, or break the cipher. The work was funded, in part, by the MIT-IBM Watson AI Lab, the National Science Foundation, and a Mathworks Engineering Fellowship.
