As robots are increasingly deployed for complex household tasks, engineers at MIT are working to equip them with common-sense knowledge so they can swiftly adapt when faced with disruptions. The researchers' newly developed method merges robot motion data with the common-sense knowledge of large language models (LLMs).
The new approach allows a robot to…
Large language models (LLMs), such as those that power AI chatbots like ChatGPT, are highly complex. While these powerful tools are used in diverse applications such as customer support, code generation, and language translation, they remain somewhat of a mystery to the scientists who work with them. To develop a deeper understanding of their inner workings,…
Large language models (LLMs) that power artificial intelligence chatbots like ChatGPT are extremely complex, and their functioning isn't fully understood. These LLMs are used in a variety of areas, such as customer support, code generation, and language translation. However, researchers from MIT and other institutions have made strides in understanding how these models retrieve stored…
The recent misuse of audio deepfakes, including a robocall purporting to be from Joe Biden in New Hampshire and spear-phishing campaigns, has prompted questions about the ethical considerations and potential benefits of this emerging technology. Nauman Dawalatabad, a postdoctoral researcher, discussed these concerns in a Q&A prepared for MIT News.
According to Dawalatabad, the attempt to obscure…
Audio deepfakes have recently been in the news, particularly with regard to their negative impacts, such as fraudulent robocalls pretending to be Joe Biden and encouraging people not to vote. These malicious uses could harm political campaigns and financial markets, and could lead to identity theft. However, Nauman Dawalatabad, a postdoctoral researcher at MIT, argues that deepfakes…
Nauman Dawalatabad, a postdoctoral researcher, discusses the concerns and potential benefits of audio deepfake technology in a Q&A with MIT News. He addresses ethical considerations regarding the concealment of a source speaker’s identity in audio deepfakes, noting that speech contains a wealth of sensitive personal information beyond identity and content, such as age, gender, and…
Recently, an AI-generated robocall mimicking Joe Biden urged New Hampshire residents not to vote. Meanwhile, "spear-phishers" – phishing campaigns targeting specific people or groups – are using audio deepfakes to extract money. However, less attention has been paid to how audio deepfakes could positively impact society. Postdoctoral fellow Nauman Dawalatabad explores just that in a…
Audio deepfakes, or AI-generated audio, have lately been in the limelight due to their use in harmful deception by bad actors. Cases such as robocalls impersonating political figures, spear-phishers tricking individuals into revealing personal information, and misuse of the technology that lets actors preserve their voices have surfaced in the media. While these negative instances have been widely publicized, MIT…
In this Q&A for MIT News, postdoc Nauman Dawalatabad discusses the ethical considerations, challenges, and positive impacts of audio deepfakes, the AI-generated audio that can mimic human voices. Recently, the technology has been misused, causing public concern; for example, a robocall imitating Joe Biden’s voice instructed New Hampshire residents not to vote, while…