
Researchers from the MIT-IBM Watson AI Lab and MIT have developed a secure machine-learning accelerator that can efficiently run large AI models while protecting user data. The device keeps medical records, personal finance information, and other sensitive user data confidential, and it is resistant to the two most common classes of attacks against such hardware. Several optimizations preserve strong security with only a slight device slowdown, and the added protections have no effect on computation accuracy.

Many platforms use machine-learning accelerators to speed up computation, but these accelerators are vulnerable to attackers who can steal sensitive data. The new innovations from MIT and IBM address these security challenges. The accelerator designed by the MIT-IBM Watson AI Lab team is best suited for resource-intensive AI applications such as AR/VR and self-driving cars.

The researchers used three strategies to protect against side-channel and bus-probing attacks. First, they split the data in the in-memory compute (IMC) chip into random pieces, so that a side-channel attack could not reconstruct the original information. Second, a lightweight cipher encrypts the model stored in off-chip memory, blocking bus-probing attacks. Third, to further harden the design, the cipher's decryption key is generated directly on the chip, eliminating the need to move the key back and forth with the data.
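To make the three measures concrete, here is a minimal software sketch of the ideas, not the team's hardware implementation: additive secret sharing stands in for the IMC data splitting, AES-CTR (via the Python `cryptography` package) stands in for the unnamed lightweight cipher, and `os.urandom` simulates a key derived on-chip from physical randomness. All function names are illustrative.

```python
import os
import secrets

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

WORD_BITS = 8
MASK = (1 << WORD_BITS) - 1


def split_into_shares(value: int, n_shares: int = 3) -> list[int]:
    """Split a value into random additive shares (mod 2^WORD_BITS).

    Any subset of fewer than n_shares shares is uniformly random, so a
    power trace of a single share reveals nothing about the value.
    """
    shares = [secrets.randbelow(1 << WORD_BITS) for _ in range(n_shares - 1)]
    shares.append((value - sum(shares)) & MASK)
    return shares


def recombine(shares: list[int]) -> int:
    return sum(shares) & MASK


def on_chip_key() -> bytes:
    # Stand-in for a key generated from the chip's physical randomness;
    # real hardware would derive it on-die so it never crosses the bus.
    return os.urandom(16)


def encrypt_weights(weights: bytes, key: bytes) -> tuple[bytes, bytes]:
    # AES-CTR is used here only as a placeholder for the lightweight
    # cipher the article mentions without naming.
    nonce = os.urandom(16)
    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return nonce, encryptor.update(weights) + encryptor.finalize()


def decrypt_weights(nonce: bytes, blob: bytes, key: bytes) -> bytes:
    decryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
    return decryptor.update(blob) + decryptor.finalize()


if __name__ == "__main__":
    x = 0xA7
    shares = split_into_shares(x)
    assert recombine(shares) == x  # all shares together reconstruct the value

    key = on_chip_key()  # never leaves the "chip" in this model
    nonce, blob = encrypt_weights(b"model-weights", key)
    assert decrypt_weights(nonce, blob, key) == b"model-weights"
```

The property the sketch demonstrates is that each individual share is uniformly random, so observing any one of them, as a side-channel attacker would, reveals nothing about the underlying value; only the on-chip recombination step ever sees real data.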

However, the addition of these security features reduced the efficiency of the accelerator and necessitated a larger chip area, increasing the manufacturing costs. The team is now looking for methods to reduce energy consumption and device size to make the chip more feasible for mass production.

The research, which highlights the critical importance of designing security into mobile devices from the start, will be presented at the IEEE Custom Integrated Circuits Conference and was funded in part by a MathWorks Engineering Fellowship, the NSF, and the MIT-IBM Watson AI Lab. The team ran a series of tests imitating a hacker's approach and was unable to reconstruct any real data or extract pieces of the model or dataset.
