Researchers at the Massachusetts Institute of Technology (MIT) and the MIT-IBM Watson AI Lab have developed a machine learning accelerator chip that is resistant to the most common types of cyberattacks, ensuring data privacy while supporting efficient AI model operations on devices. The chip can be used in demanding AI applications like augmented and virtual reality and autonomous driving.
Maitreyi Ashok, lead author of the study and an electrical engineering and computer science (EECS) graduate student at MIT, emphasized that building in security measures at the design stage is more cost-effective than adding them later. The team balanced several competing factors in the chip design, achieving robust security that slightly reduces the device's efficiency without affecting computational accuracy.
The research targets digital in-memory compute (IMC), a type of machine-learning accelerator. Digital IMC chips carry out computations directly in a device's memory, where parts of a machine-learning model are stored after being transferred from a central server. This approach reduces the amount of data shuttled back and forth between the device and the server. However, these chips remain exposed to cyber threats, including side-channel attacks, which reverse-engineer data by monitoring the chip's power consumption, and bus-probing attacks, which eavesdrop on the communication between the chip and external memory to extract data.
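To make the side-channel threat concrete, the sketch below is a simplified illustration, not the MIT team's measurement setup. It uses the common assumption that a chip's dynamic power draw scales with the Hamming weight of the data being processed, which is why simply monitoring power can narrow down secret values; the 8-bit "weight" is a hypothetical example.

```python
def hamming_weight(x: int) -> int:
    """Count set bits -- a standard proxy for data-dependent power draw."""
    return bin(x).count("1")

# Hypothetical 8-bit model weight held in on-chip memory.
secret_weight = 0b10110010

# What an attacker might infer from one power measurement of this operation.
observed_leakage = hamming_weight(secret_weight)

# Comparing the observation against predictions for every candidate value
# rules out most of them -- the core idea behind power side-channel attacks.
candidates = [w for w in range(256) if hamming_weight(w) == observed_leakage]
print(f"{len(candidates)} of 256 candidate weights remain consistent")
```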
The MIT team countered these threats with a three-pronged strategy. First, they split data into random pieces so that observing any single piece through a side-channel reveals nothing about the original information. Second, to prevent bus-probing attacks, they used a lightweight cipher to encrypt the model stored in off-chip memory. Third, to further strengthen security, they generated the decryption key directly on the chip rather than moving it back and forth along with the model. The key is derived from random physical variations introduced into the chip during manufacturing.
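Below is a minimal sketch of the first idea, splitting data into random pieces so that no single piece is meaningful on its own. The share count, bit width, and XOR-based recombination here are illustrative assumptions for a software analogy, not details of the chip's hardware implementation.

```python
import secrets

def split_into_shares(value: int, n_shares: int = 3, bits: int = 8) -> list[int]:
    """Split `value` into random shares whose XOR equals the original value."""
    shares = [secrets.randbits(bits) for _ in range(n_shares - 1)]
    last = value
    for s in shares:
        last ^= s
    return shares + [last]

def recombine(shares: list[int]) -> int:
    """XOR all shares back together to recover the original value."""
    out = 0
    for s in shares:
        out ^= s
    return out

weight = 0b10110010
shares = split_into_shares(weight)
assert recombine(shares) == weight

# Each individual share is uniformly random, so the power drawn while
# handling any one share carries no information about the original weight.
```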
The team tested the chip's robustness by attempting to extract data from it through side-channel and bus-probing attacks. They failed to recover any genuine information, and the cipher remained secure. By comparison, data could be retrieved from an unsecured chip with only around 5,000 attempts.
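The figure of roughly 5,000 attempts reflects how power-analysis attacks generally work: a single measurement is dominated by noise, but averaging many traces exposes the data-dependent signal. The toy simulation below illustrates that principle only; its signal level, noise level, and decision threshold are invented for illustration and are not taken from the study.

```python
import random

def noisy_trace(secret_bit: int) -> float:
    """One simulated power sample: a small data-dependent signal plus noise."""
    signal = 0.05 * secret_bit      # leakage proportional to the secret bit
    noise = random.gauss(0.0, 0.5)  # measurement noise swamps any single sample
    return signal + noise

secret_bit = 1
average = sum(noisy_trace(secret_bit) for _ in range(5000)) / 5000

# After thousands of traces the average converges toward 0.05 when the bit
# is 1 and toward 0.0 when it is 0, so an unprotected value can be read out.
guess = 1 if average > 0.025 else 0
print(f"recovered bit = {guess}")
```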
However, the added security measures increased the chip's size and made it less energy-efficient and more expensive. The researchers are exploring techniques to reduce these overheads, which would make the design easier to deploy at scale. Anantha Chandrakasan, a senior author of the study, highlighted the significance of the work, emphasizing that designing security into machine-learning workloads will be a crucial consideration for future mobile devices. Since such protections can be challenging and expensive to implement, the team hopes future work will strike a better balance between security, implementation effort, and cost.