In a keynote address at MIT’s Generative AI Week on November 28, iRobot co-founder Rodney Brooks highlighted the dangers of overestimating the capabilities of generative AI, the emerging technology behind powerful tools like OpenAI’s ChatGPT and Google’s Bard. He argued that while the technology is impressively capable, the belief that it can solve every problem is an illusion. The event centered on themes of both hope and caution, exploring how generative AI, which refers to machine-learning models that can generate content reflecting the data they were trained on, could improve the world when used responsibly.
MIT President Sally Kornbluth opened the symposium by showcasing several generative AI projects aimed at positive societal impact, such as the Axim Collaborative’s exploration of generative AI’s educational uses for underserved students. Throughout Generative AI Week, MIT sought to promote collaboration among academics, policymakers, and industry to safely incorporate this rapidly evolving technology into society.
The symposium also examined how powerful machine-learning models have blurred the line between science fiction and reality. CSAIL Director Daniela Rus noted that the central questions have shifted from whether machines can create new content to how best to use these tools to enhance business and sustainability.
Despite the impressive capabilities of generative AI tools like ChatGPT, Brooks emphasized that they are still just tools, not magical solutions. He warned of the risks of discarding important research to chase generative AI trends and of the misguided expectation that the technology will lead to artificial general intelligence. His concerns centered on researchers abandoning potentially groundbreaking work, venture capitalists rushing toward high-margin technologies, and a generation of engineers overlooking other forms of AI.
In the ensuing discussion, it became clear that while different parties might view generative AI as either a solution or a source of problems, both sides tend to overestimate the technology. A panel of professors from various MIT departments discussed future advances, overlooked research areas, and AI regulation and policy. The panelists stressed the importance of integrating perceptual systems into AI, involving policymakers and the public in shaping the technology, and guarding against “digital snake oil” – products that claim miracle results but prove harmful in the long run.
The symposium concluded with a discussion of generative AI models that can surpass human abilities, such as detecting emotions from changes in a person’s breathing and heart rate. It was noted, however, that securely integrating such tools into society requires that users trust them to perform as promised.