
Researchers from MIT and the MIT-IBM Watson AI Lab have created a machine-learning accelerator that is resistant to the most common types of cyberattacks. The chip can keep sensitive user data, such as health records and financial information, private while enabling large AI models to run efficiently on-device. The accelerator maintains strong security and high computational accuracy, and is only slightly slowed by the added protections, making it suitable for demanding AI applications such as virtual reality and autonomous driving.

Lead author Maitreyi Ashok, an EECS graduate student, stressed the importance of designing systems with security in mind from the start, to avoid prohibitive costs later. Although the chip makes the device slightly more expensive and less energy-efficient, Ashok believes the benefit of added security justifies this.

The design tackles side-channel and bus-probing attacks. In side-channel attacks, hackers monitor the chip’s power consumption to reverse-engineer the data it is processing; in bus-probing attacks, they tap the connection between the accelerator and off-chip memory to steal parts of the model and dataset. The researchers used three major strategies to counter these threats: splitting data into random pieces so that no single piece can be used to reconstruct the original; encrypting the model stored in off-chip memory with a lightweight cipher; and generating the decryption key directly on the chip from random variations introduced during manufacturing, so the key never has to be stored or transmitted.
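The sketch below is a minimal, purely illustrative software model of two of these ideas: XOR-based data splitting, where each share is random on its own and only the combination recovers the secret, and deriving a key from a device-unique response standing in for the manufacturing-variation source. The function names, the three-share split, the placeholder "fingerprint" bytes, and the SHA-256-based keystream are assumptions made for illustration; they are not the researchers' on-chip cipher or circuit design.

```python
import os
import hashlib


def split_into_shares(value: int, n_shares: int = 3, bits: int = 32) -> list[int]:
    """Split `value` into XOR shares so no single share reveals it.

    Stand-in for the chip's data splitting; in hardware this keeps power
    draw statistically independent of the raw value being processed.
    """
    mask = (1 << bits) - 1
    shares = [int.from_bytes(os.urandom(bits // 8), "big") & mask
              for _ in range(n_shares - 1)]
    last = value & mask
    for s in shares:
        last ^= s
    return shares + [last]


def recombine(shares: list[int]) -> int:
    """XOR all shares together to recover the original value."""
    out = 0
    for s in shares:
        out ^= s
    return out


def derive_key_from_puf(puf_response: bytes) -> bytes:
    """Derive a key from a device-unique response (hypothetical stand-in).

    On silicon the response would come from uncontrollable manufacturing
    variation, so the key never needs to be stored off-chip; here we just
    hash a placeholder byte string.
    """
    return hashlib.sha256(puf_response).digest()


def keystream_encrypt(data: bytes, key: bytes) -> bytes:
    """Toy stream encryption of model weights held in off-chip memory.

    The real chip uses a lightweight cipher; this SHA-256 counter-mode
    keystream only illustrates that the model leaves the chip encrypted.
    Applying the function twice with the same key decrypts.
    """
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(d ^ k for d, k in zip(data, stream))


if __name__ == "__main__":
    secret = 0xDEADBEEF
    shares = split_into_shares(secret)
    assert recombine(shares) == secret

    key = derive_key_from_puf(b"device-unique-fingerprint")  # assumed placeholder
    weights = os.urandom(64)                                  # pretend model weights
    encrypted = keystream_encrypt(weights, key)
    assert keystream_encrypt(encrypted, key) == weights
```

The point of splitting into more than one share is that any single measured signal, such as the power drawn while handling one piece, is statistically independent of the secret; only the on-chip recombination ever sees the real value.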

Tests showed the new security measures were highly effective: across millions of measurement attempts, attackers failed to reconstruct any real information or extract pieces of the model. By contrast, only about 5,000 samples were needed to steal information from an unprotected chip. However, the added security did reduce energy efficiency and required a larger chip area, making the accelerator more costly to produce. The team plans to explore ways to reduce these overheads in future work.
