Microsoft has announced that it is delaying its new AI feature, Recall, from the forthcoming launch of its Copilot Plus PCs due to security concerns. Recall was initially praised for its concept of using AI to remember user actions so they can be retrieved later, but it drew criticism over how it stored that user information, with cybersecurity experts warning it could become a security disaster.
Recall was originally slated for a broad release on June 18, 2024, but Microsoft has now restricted its preview to the Windows Insider Program for additional testing and feedback. The company has not disclosed specifics about the security vulnerabilities, but the decision to delay suggests Microsoft is taking the concerns seriously.
The Recall delay comes amid accusations of security negligence on the part of Microsoft. These criticisms intensified after a 2023 security incident in which Chinese government-backed hackers breached Microsoft’s cloud servers and stole hundreds of thousands of customer emails, some of which belonged to officials at the US Departments of State and Commerce.
Microsoft President Brad Smith acknowledged the company’s responsibility for the breach during a recent appearance before the US House Committee on Homeland Security. Smith outlined plans to foster a culture that puts security first, and confirmed that Microsoft CEO Satya Nadella is personally committed to that goal.
In a company-wide message, Nadella stressed that security comes first, even when weighed against other priorities such as releasing new features or supporting legacy systems. That culture of prioritizing security, however, may take time to permeate the entire organisation. Despite Microsoft’s stated intent to put security ahead of AI, some remain sceptical about how quickly a large corporation with a history of emphasizing product delivery over problem identification can change its mindset.
Indeed, while the tech mantra of ‘Move fast and break things’ encourages innovation, it also raises the question of whether companies are adequately addressing the flaws they create. As Microsoft and other tech companies advance their work on artificial intelligence, there is a growing need to prioritize security over hasty product releases to ensure user safety.