
Researchers from MIT and the MIT-IBM Watson AI Lab have developed a machine-learning accelerator that provides security against the two most common types of attacks: side-channel attacks, which monitor a chip's power consumption, and bus-probing attacks, which read data moving between the chip and external memory. The chip can keep sensitive data, such as health records or financial information, private while allowing AI models to run efficiently on devices. The added security does not affect the accuracy of computations and could benefit demanding AI applications like augmented and virtual reality or autonomous driving.

Thanks to optimizations developed by the team, the enhanced security slows the device down only slightly, though it could make devices somewhat more expensive and less energy-efficient. Lead author Maitreyi Ashok, an EECS graduate student at MIT, believes these are reasonable costs when weighed against the value of privacy and safety.

The team targeted a type of machine-learning accelerator called digital in-memory computing (IMC), which performs computations inside a device's memory and is especially vulnerable to these attacks. They adopted a three-pronged defense: randomly splitting data into pieces so that a side-channel attack never observes the complete information, encrypting the model stored in off-chip memory with a lightweight cipher to thwart bus probing, and generating the decryption key for that cipher directly on the chip.
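The article describes these measures only at a high level. As a rough software illustration of the first two ideas, here is a minimal Python sketch, assuming additive secret sharing for the data splitting and using a hash-based toy cipher and a made-up `derive_on_chip_key` helper as stand-ins for whatever cipher and key source the actual chip uses:

```python
import hashlib
import os

MODULUS = 2 ** 16


def split_into_shares(value, n_shares):
    """Additively split a value into random shares that sum back to the
    original (mod MODULUS), so no single share reveals the data."""
    shares = [int.from_bytes(os.urandom(2), "big") % MODULUS
              for _ in range(n_shares - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares


def derive_on_chip_key(device_fingerprint):
    """Stand-in for on-chip key generation: derive the cipher key from a
    device-unique value so the key never has to leave the chip."""
    return hashlib.sha256(device_fingerprint).digest()


def light_cipher(data, key):
    """Toy stream cipher (hash-based keystream XORed with the data);
    symmetric, so the same call encrypts and decrypts."""
    out = bytearray()
    for i in range(0, len(data), 32):
        keystream = hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], keystream))
    return bytes(out)


# Split an intermediate value into three random-looking shares.
shares = split_into_shares(12345, 3)
assert sum(shares) % MODULUS == 12345

# Encrypt model weights for off-chip storage; decrypt them on the way back in.
key = derive_on_chip_key(b"unique-device-id")
weights = b"model weight bytes"
stored = light_cipher(weights, key)
assert light_cipher(stored, key) == weights
```

In the real accelerator these operations happen in hardware; the point of the sketch is only that each share looks random on its own, and that the key used to decrypt the off-chip model never needs to leave the device.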

The researchers posed as hackers to test the chip for vulnerabilities and were unable to extract useful information or break the cipher, even after millions of attempts. The secure versions of the accelerator do reduce energy efficiency and require more space inside the device, but future work could potentially reduce these impacts. They hope eventually to strike a balance between cost, ease of implementation, and the level of security achieved.
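The article does not say which attack techniques the researchers tried, but a toy simulation of a power side channel suggests why random splitting holds up: if an attacker's measurement is modeled as the Hamming weight of whatever value the hardware handles (a standard simplification, and an assumption here), masked shares leak the same average reading no matter what the secret is.

```python
import os
from statistics import mean


def hamming_weight(x):
    """Bits set in x; a common proxy for the power a circuit draws
    while handling a value."""
    return bin(x).count("1")


def leak_unprotected(secret):
    # The hardware touches the secret byte directly, so the simulated
    # power measurement depends on it.
    return hamming_weight(secret)


def leak_masked(secret):
    # The hardware only ever touches two random-looking shares.
    mask = os.urandom(1)[0]
    return hamming_weight(mask) + hamming_weight(secret ^ mask)


for secret in (0x00, 0x5A, 0xFF):
    masked_avg = mean(leak_masked(secret) for _ in range(100_000))
    print(f"secret={secret:#04x}  unprotected={leak_unprotected(secret)}  "
          f"masked avg={masked_avg:.2f}")

# The unprotected reading tracks the secret (0, 4, and 8 here), while the
# average masked reading stays near 8.0 for every secret byte.
```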
