Rodney Brooks, co-founder of iRobot and keynote speaker at MIT’s “Generative AI: Shaping the Future” symposium, warned attendees not to overestimate the capabilities of this emerging AI technology. Generative AI models create new material by learning patterns from the data they were trained on, with applications in art, creative writing, functional coding, language translation, and realistic image generation. However, Brooks cautioned that no single technology surpasses all others, and that hype around any tool should be tempered with understanding and sound application.

The symposium was part of Generative AI Week, a series of events aiming to foster more conversation around the potential and pitfalls of generative AI. It was attended by hundreds of people from academia and industry. In her opening remarks, MIT President Sally Kornbluth highlighted projects undertaken by MIT faculty and students in which generative AI has been used for positive social impact, such as the Axim Collaborative, an online education initiative aimed at helping underserved students. She also spoke of 27 interdisciplinary faculty research projects funded by the Institute that focus on the transformative potential of AI on society.

The event was intended to foster collaboration among various stakeholders, given the importance of integrating rapidly evolving technologies like generative AI in a safe, humane way. This collaboration and community building are seen as integral to the mission of MIT. Using generative AI as a force for good was also a topic covered by Daniela Rus, director of CSAIL, during her opening remarks.

A session led by Brooks dove deeper into generative AI capabilities. He started by breaking down how models like OpenAI’s ChatGPT work. To illustrate the technology’s vast learning capacity, he showed two sonnets: one written by him and another by ChatGPT. He argued, however, that despite their impressive output, such models do not understand the text they produce; they have not surpassed human intelligence, and they are not an all-in-one solution.

A panel discussion followed, where faculty members discussed potential research directions, regulatory and policy challenges, as well as responsible deployment of generative AI tools. Armando Solar-Lezama, associate director of CSAIL, talked about the risk of “digital snake oil,” where products claiming miraculous output through generative AI could end up being harmful.

The event concluded by highlighting the possibility of developing generative AI models that go beyond human capabilities, touching upon the need for trust, dependable specifications, and responsible use. The session underscored the importance of understanding the nuances of generative AI, challenging its hype, and responsibly navigating its potential.
