A team of researchers from MIT and the MIT-IBM Watson AI Lab has developed a machine-learning accelerator that is resistant to the two most common types of cyberattacks. This ensures that sensitive information such as financial and health records remains private while still enabling large AI models to run efficiently on devices.
The researchers targeted a type of machine-learning accelerator called digital in-memory compute (IMC), which performs computations within a device’s memory. Although IMC chips speed up computation by performing millions of operations simultaneously, this complexity makes them susceptible to hackers. In a side-channel attack, hackers monitor the chip’s power consumption and, using statistical techniques, reverse-engineer the data being processed. In another technique, the bus-probing attack, hackers steal parts of a model and dataset by prying into the communication between the accelerator and the off-chip memory.
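To see why power consumption leaks information, consider a toy sketch of a correlation-based side-channel attack. It is not the attack from the study, just a minimal illustration of the statistical idea: power draw is modeled as the Hamming weight of the value being processed plus noise, and the attacker correlates predicted leakage against observed traces to recover a secret byte. All names and parameters here are illustrative.

```python
import random

SECRET = 0b10110101  # toy "model weight" the attacker wants to recover


def hamming_weight(x: int) -> int:
    # Number of 1-bits; a common first-order model of dynamic power draw.
    return bin(x).count("1")


def power_trace(value: int) -> float:
    # Simulated measurement: leakage plus Gaussian measurement noise.
    return hamming_weight(value) + random.gauss(0, 0.5)


# The attacker feeds known inputs and records one noisy trace per input.
random.seed(0)
inputs = [random.randrange(256) for _ in range(2000)]
traces = [power_trace(SECRET ^ x) for x in inputs]


def score(candidate: int) -> float:
    # Pearson correlation between predicted leakage for this candidate
    # secret and the observed traces; the true secret scores highest.
    preds = [hamming_weight(candidate ^ x) for x in inputs]
    mp = sum(preds) / len(preds)
    mt = sum(traces) / len(traces)
    cov = sum((p - mp) * (t - mt) for p, t in zip(preds, traces))
    sp = sum((p - mp) ** 2 for p in preds) ** 0.5
    st = sum((t - mt) ** 2 for t in traces) ** 0.5
    return cov / (sp * st)


recovered = max(range(256), key=score)  # best-correlated candidate
```

With a few thousand traces, `recovered` matches `SECRET`: the statistics wash out the noise, which is exactly why defenses aim to break the link between processed values and power draw.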
To prevent these attacks, the team devised a three-pronged approach. First, they split data in the IMC into random pieces, then added random bits to split the data further. Because the pieces are never processed together in a single operation, a side-channel attack cannot reconstruct the actual data from power measurements. Second, they thwarted bus-probing attacks with a lightweight cipher that encrypts the model stored in off-chip memory; encrypted data is decrypted on-chip only when necessary. Finally, they used physical variations in the chip to generate the unique key that decrypts the model, eliminating the need to move the key back and forth between the chip and off-chip memory.
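The three steps above can be sketched in miniature. This is a hedged illustration, not the chip's actual design: the XOR secret-sharing stands in for the data splitting, a hash-based stream cipher stands in for the unnamed lightweight cipher, and a fixed byte string simulates the per-chip physical variation that a real physically unclonable function (PUF) would read out. Every name and constant here is an assumption.

```python
import hashlib
import secrets

# 1. Masking: split a value into random XOR shares. Each share alone is
#    uniformly random, so observing one operation reveals nothing.
def split_into_shares(value: int, n_shares: int = 3, bits: int = 8):
    shares = [secrets.randbelow(1 << bits) for _ in range(n_shares - 1)]
    last = value
    for s in shares:
        last ^= s
    shares.append(last)
    return shares

def recombine(shares):
    out = 0
    for s in shares:
        out ^= s
    return out

# 2. Lightweight cipher (stand-in): a keystream derived from a hash,
#    XORed over the model bytes stored off-chip.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(4, "big")).digest())
        counter += 1
    return bytes(d ^ k for d, k in zip(data, stream))

# 3. PUF-style key: derive the decryption key from (simulated) per-chip
#    physical variation, so the key itself never leaves the chip.
chip_variation = bytes([173, 42, 9, 200, 17, 88, 254, 3])  # hypothetical readout
key = hashlib.sha256(chip_variation).digest()

model_weights = bytes(range(16))            # toy model stored off-chip
encrypted = xor_cipher(model_weights, key)  # what a bus probe would see
decrypted = xor_cipher(encrypted, key)      # recovered on-chip when needed
```

A bus probe sees only `encrypted`, and a power trace of any single share is independent of the real value; only the chip, holding its own physical key, can put the pieces back together.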
During testing, the researchers took on the role of hackers and attempted to steal information using the known techniques. Despite millions of attempts, they were unable to reconstruct any real information, and the cipher remained unbroken.
While the security measures reduce the chip’s energy efficiency and increase its size—making it more expensive to manufacture—the team considers this a worthwhile price for security. They’re now exploring methods to reduce energy consumption and size, making the chip easier to implement while maintaining its security features. The research is sponsored in part by the MIT-IBM Watson AI Lab, the National Science Foundation, and a MathWorks Engineering Fellowship.