
Tech firms sign open letter pledging to “Build AI for a Better Future” for everyone.

In an open letter titled “Build AI for a Better Future,” tech leaders have committed to using artificial intelligence (AI) to improve lives and solve global challenges. The letter, signed by 179 entities including Google, Microsoft, Salesforce, and OpenAI, emphasizes AI’s potential to transform daily life and work, comparing this potential to revolutionary historical developments like the printing press and the internet.

The document outlines a vision in which AI is a catalyst for human progress, promoting learning through AI tutors, breaking down language barriers with translation tools, advancing healthcare with diagnostic aids, accelerating scientific research, and simplifying day-to-day tasks with AI assistants. The letter invites all to shape the future of AI, suggesting that everyone has something to offer, whether through development, application, or sharing feedback on AI’s impact.

However, the reception to the letter has been decidedly chilly, with criticisms aimed at its brevity (roughly 100 words) and its lack of any in-depth discussion of AI’s trajectory and impacts. Some critics have dismissed it as “PR junk,” pointing out that it makes no mention of AI safety, fails to address risks associated with general AI, and overlooks significant issues such as job disruption and the potential for a geopolitical arms race.

This open letter is just one among many cross-industry agreements involving AI. For example, Big Tech has teamed up with MLCommons to establish safety benchmarks, while the Center for AI Safety (CAIS) released a statement in 2023 comparing AI risks to those of pandemics and nuclear war.

Generative AI in particular has recently come under the spotlight amid concerns about intellectual property, ethical use, and the potential for entrenching Big Tech’s dominance. Legal and ethical questions surround the use of copyrighted material to train AI models, with some companies arguing that this constitutes “fair use”—a contested claim that has yet to be tested in court.

Moreover, there is concern that tech companies are rolling out AI-powered products rapidly without adequately addressing flaws such as harmful biases, copyright violations, and security vulnerabilities. Open letters like this one may signal a degree of awareness of such issues, but critics argue that more concrete action is needed to mitigate these risks.
