Researchers from MIT and the MIT-IBM Watson AI Lab have developed a machine-learning accelerator that provides strong data protection while allowing massive AI models to run efficiently on individual devices. The chip's design protects sensitive information, such as health records or financial data, against common cyberattacks without compromising the accuracy of its computations.
Potential applications for this machine-learning accelerator chip include augmented and virtual reality, autonomous driving, and health monitoring. The key to the security enhancement lies in the chip's ability to resist two primary types of attacks: side-channel attacks and bus-probing attacks. In a side-channel attack, a hacker monitors the chip's power consumption and uses that information to reverse-engineer the data being processed; in a bus-probing attack, hackers steal bits of the model and dataset by probing the communication between the accelerator and the off-chip memory.
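The intuition behind a side-channel attack can be illustrated with the Hamming-weight power model commonly used in the power-analysis literature. This sketch is an illustrative assumption, not the researchers' measurement setup: it models the power a register draws as proportional to the number of 1-bits in the value it latches, which is exactly the data-dependent signal an attacker correlates against guesses.

```python
def hamming_weight(byte: int) -> int:
    """Number of 1-bits in a byte; a common first-order model of the
    dynamic power a register draws when it latches this value."""
    return bin(byte).count("1")

# Under this simplified model, different secret bytes produce
# measurably different power draws, which an attacker can observe
# on a physical probe and correlate against candidate values.
high_draw = hamming_weight(0xFF)  # all bits set -> higher power draw
low_draw = hamming_weight(0x01)   # one bit set  -> lower power draw
```

Because the leakage depends directly on the data values, an attacker who collects enough traces can statistically recover the secret, which is why the countermeasures below randomize what the chip actually computes on.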
The researchers tackled these vulnerabilities with a three-pronged approach. First, they split the data in the in-memory compute (IMC) chip into random pieces, so that a side-channel attack cannot reconstruct the real information. Second, the team used a lightweight cipher to block bus-probing attacks. This cipher encrypts the model stored in off-chip memory, and model segments are decrypted on the chip only when needed.
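The data-splitting idea can be sketched as XOR secret sharing, a standard masking countermeasure: a value is split into random shares whose XOR recovers the original, so any individual share, and hence the power signature of operating on it, looks like random noise. This is a minimal conceptual sketch, not the circuit-level scheme implemented on the chip; the function names are illustrative.

```python
import secrets

def split_into_shares(value: int, n_shares: int, bits: int = 8) -> list[int]:
    """Split a value into n random XOR shares.
    Any n-1 of the shares are statistically random on their own."""
    shares = [secrets.randbits(bits) for _ in range(n_shares - 1)]
    last = value
    for s in shares:
        last ^= s  # final share is chosen so all shares XOR back to value
    shares.append(last)
    return shares

def recombine(shares: list[int]) -> int:
    """XOR all shares together to recover the original value."""
    out = 0
    for s in shares:
        out ^= s
    return out

secret = 0b10110011
shares = split_into_shares(secret, 3)
assert recombine(shares) == secret
```

Since each share is uniformly random by itself, measuring the power consumed while processing one share reveals nothing about the underlying value; only the full set of shares carries the information.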
Lastly, they strengthened security by generating the decryption key directly on the chip. The key is derived from random variations introduced during manufacturing, using a method known as a physically unclonable function, which avoids shuttling the key back and forth between the chip and off-chip memory alongside the model. The researchers also reused the chip's memory cells to generate this key, reducing the computational overhead.
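The idea behind a physically unclonable function can be sketched in software. The model below is a loose illustrative assumption, not the researchers' circuit: each memory cell has a manufacturing-determined bias toward settling to 0 or 1 at power-up, repeated noisy reads are denoised by majority vote, and the resulting stable bit-string is hashed into a fixed-length key that never leaves the chip.

```python
import hashlib
import random

def read_sram_powerup(true_bias: list[float], rng: random.Random) -> list[int]:
    """Simulate one noisy power-up read of SRAM cells.
    true_bias[i] is the probability cell i settles to 1, fixed by
    manufacturing variation (a hypothetical model of the physical effect)."""
    return [1 if rng.random() < b else 0 for b in true_bias]

def derive_key(reads: list[list[int]]) -> bytes:
    """Majority-vote each cell across repeated reads to remove noise,
    then hash the stable bit-string into a fixed-length key."""
    n_cells = len(reads[0])
    bits = [
        1 if sum(r[i] for r in reads) * 2 > len(reads) else 0
        for i in range(n_cells)
    ]
    return hashlib.sha256("".join(map(str, bits)).encode()).digest()
```

A real PUF uses error-correction machinery rather than a bare majority vote, but the principle is the same: the key is reproducible on the one chip that physically embodies those manufacturing variations and unpredictable on any other.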
Testing revealed that even after millions of attempts, the researchers could not extract any real information or recover the model or dataset using side-channel or bus-probing attacks, and the cipher remained unbroken throughout. By contrast, only about 5,000 samples were enough to steal information from an unprotected chip. The trade-off was a modest reduction in the chip's energy efficiency and an increase in its area, which raises fabrication cost.
Going forward, the team is exploring ways to reduce the chip's energy consumption and size, which would help enable large-scale deployment. The study was funded by the MIT-IBM Watson AI Lab, the National Science Foundation, and a MathWorks Engineering Fellowship. The research will be presented at the IEEE Custom Integrated Circuits Conference.