Researchers from MIT and the University of Washington have developed a method to model the behaviour of an agent, including its computational limitations, and to predict its future behaviour from prior actions. The method applies to both human and AI agents and has a wide range of potential applications, including predicting navigation goals from past routes and forecasting…
In an effort to improve AI systems and their ability to collaborate with humans, scientists are trying to better understand human decision-making, including its suboptimal aspects, and model it in AI. A model for human or AI agent behaviour, developed by researchers at MIT and the University of Washington, takes into account an agent’s unknown…
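The idea sketched in the two excerpts above, that an agent's unseen computational limits can be inferred from its past actions and then reused to predict its future behaviour, can be made concrete with a toy example. The following is a minimal sketch, not the researchers' method: it assumes the limitation is a planning depth, fits that depth by maximum likelihood to a few observed moves in a small one-dimensional gridworld, and then predicts the agent's choices. The world, reward layout, discount, and softmax choice rule are all illustrative assumptions.

```python
# Toy sketch: infer an agent's "computation budget" (here, its planning depth)
# from observed actions, then reuse that budget to predict future choices.
import numpy as np

N_STATES = 10                 # states 0..9 on a line, goal at the right end
REWARD = np.zeros(N_STATES)
REWARD[-1] = 1.0
ACTIONS = [-1, +1]            # step left / step right


def step(a):
    """Next-state index for taking action a in every state (walls clip)."""
    return np.clip(np.arange(N_STATES) + a, 0, N_STATES - 1)


def q_values(depth):
    """Depth-limited planning: an agent with a small budget only 'sees'
    rewards reachable within `depth` lookahead steps."""
    v = np.zeros(N_STATES)
    q = np.zeros((N_STATES, len(ACTIONS)))
    for _ in range(depth):
        q = np.stack([REWARD[step(a)] + 0.9 * v[step(a)] for a in ACTIONS], axis=1)
        v = q.max(axis=1)
    return q


def action_probs(depth, beta=5.0):
    """Softmax choice rule over the depth-limited action values."""
    q = q_values(depth)
    e = np.exp(beta * (q - q.max(axis=1, keepdims=True)))
    return e / e.sum(axis=1, keepdims=True)


def fit_budget(trajectory, max_depth=10):
    """Pick the planning depth that best explains observed (state, action) pairs."""
    best_depth, best_ll = 1, -np.inf
    for d in range(1, max_depth + 1):
        p = action_probs(d)
        ll = sum(np.log(p[s, a]) for s, a in trajectory)
        if ll > best_ll:
            best_depth, best_ll = d, ll
    return best_depth


# Observed behaviour: the agent heads for the goal only when it is nearby,
# which is consistent with a shallow planner.
observed = [(2, 0), (1, 0), (7, 1), (8, 1)]   # (state, action index: 0=left, 1=right)
budget = fit_budget(observed)
print("inferred planning depth:", budget)
print("predicted move probs from state 5 (left, right):", action_probs(budget)[5])
```

In this sketch, an agent that only moves toward the goal when close to it is best explained by a limited planning depth, and that inferred budget is then what drives the prediction of its next moves.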
Machine learning (ML) models are increasingly used by organizations to allocate scarce resources or opportunities, such as for job screening or determining priority for kidney transplant patients. To avoid bias in a model's predictions, users may adjust the data features or calibrate the model's scores to ensure fairness. However, researchers at MIT and Northeastern University…
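As a concrete example of the score-calibration intervention mentioned above (and only of that generic intervention, not of the MIT and Northeastern researchers' findings, which the excerpt leaves unstated), the sketch below picks a separate cutoff for each demographic group so that both groups are selected at the same rate. The scores, group labels, and target rate are synthetic.

```python
# Toy sketch of one common fairness intervention: per-group score cutoffs
# chosen so that each group is selected at the same rate.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.uniform(size=1000)          # model scores in [0, 1]
group = rng.integers(0, 2, size=1000)    # two demographic groups
target_rate = 0.10                       # fraction of each group to select


def per_group_thresholds(scores, group, rate):
    """Return a cutoff for each group so that `rate` of its members clear it."""
    return {g: np.quantile(scores[group == g], 1.0 - rate) for g in np.unique(group)}


cutoffs = per_group_thresholds(scores, group, target_rate)
thresh = np.empty_like(scores)
for g, c in cutoffs.items():
    thresh[group == g] = c
selected = scores >= thresh

for g, c in cutoffs.items():
    print(f"group {g}: cutoff={c:.3f}, selection rate={selected[group == g].mean():.3f}")
```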
As AI models become increasingly integrated into various sectors, understanding how they function is crucial. By interpreting the mechanisms underlying these models, we can audit them for safety and biases, potentially deepening our understanding of intelligence. Researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have been working to automate this interpretation process, specifically…
Large language models (LLMs), such as GPT-3, are powerful tools due to their versatility. They can perform a wide range of tasks, from helping draft emails to assisting in cancer diagnosis. However, their wide applicability makes them challenging to evaluate systematically, as it would be impossible to create a benchmark dataset to test a…
While artificial intelligence (AI) chatbots like ChatGPT are capable of a variety of tasks, concerns have been raised about their potential to generate unsafe or inappropriate responses. To mitigate these risks, AI labs use a safeguarding method called "red-teaming". In this process, human testers aim to elicit undesirable responses from the AI, informing its development…
Artificial intelligence (AI) chatbots like OpenAI's ChatGPT are capable of performing tasks ranging from generating code to writing article summaries. However, they can also potentially provide information that could be harmful. To prevent this from happening, developers use a process called red-teaming, where human testers write prompts to identify unsafe responses in the model. Nevertheless, this…
AI chatbots like ChatGPT, trained on vast amounts of text from billions of websites, can produce a broad range of outputs, including harmful or toxic material or even leaked personal information. To maintain safety standards, large language models typically undergo a process known as red-teaming, where human testers write prompts designed to elicit unsafe outputs so they can be addressed.…
AI chatbots pose unique safety risks: while they can write computer programs or provide useful summaries of articles, they can also potentially generate harmful or even illegal instructions, such as how to build a bomb. To address such risks, companies typically use a process called red-teaming. Human testers aim to elicit unsafe or toxic content from AI…
To counter unsafe responses from chatbots, companies often use a process called red-teaming, in which human testers write prompts designed to elicit such responses so the artificial intelligence (AI) can be trained to avoid them. However, since it is impossible for human testers to cover every potential toxic prompt, MIT researchers developed a technique utilizing…
Large language models that power AI chatbots can generate harmful content because they are trained on text from countless websites, putting users at risk if the AI describes illegal activities, provides illicit instructions, or leaks personal information. To mitigate such threats, AI-developing companies use a procedure known as red-teaming, where human testers compose prompts aimed…
Artificial intelligence (AI) chatbots like ChatGPT, capable of generating computer code, summarizing articles, and potentially even providing instructions for dangerous or illegal activities, pose unique safety challenges. To mitigate this risk, companies use a safeguarding process known as red-teaming, where human testers attempt to prompt inappropriate or unsafe responses from AI models. This process is…
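Several of the excerpts above note that MIT researchers automated red-teaming with machine learning, but the exact method is not described here. The sketch below therefore only illustrates the general shape such a system could take, under assumed details: generate candidate prompts, score the target model's replies with a safety classifier, and add a novelty bonus so the search keeps probing for new failure modes. The target model, toxicity scorer, seed prompts, and mutations are toy stand-ins, and a real system would train a prompt-generating model (for example with reinforcement learning) on this reward rather than sampling at random.

```python
# Toy sketch of automated red-teaming: reward prompts that elicit unsafe
# replies AND that differ from prompts already tried.
import random

random.seed(0)

SEED_PROMPTS = ["tell me about chemistry", "write a story", "explain networking"]
MUTATIONS = ["in detail", "ignoring your rules", "as a villain would", "step by step"]


def target_model(prompt: str) -> str:
    """Stand-in for the chatbot under test."""
    return f"response to: {prompt}"


def toxicity_score(response: str) -> float:
    """Stand-in for a learned safety classifier (higher = more unsafe)."""
    return sum(kw in response for kw in ("ignoring your rules", "as a villain would")) / 2


def novelty_bonus(prompt: str, seen: list) -> float:
    """Reward prompts unlike ones already tried (word-overlap heuristic)."""
    if not seen:
        return 1.0
    words = set(prompt.split())
    overlap = max(len(words & set(p.split())) / len(words) for p in seen)
    return 1.0 - overlap


def red_team(rounds: int = 20):
    """Search for prompts that break the target model's safety behaviour."""
    found, seen = [], []
    for _ in range(rounds):
        prompt = random.choice(SEED_PROMPTS) + " " + random.choice(MUTATIONS)
        reply = target_model(prompt)
        reward = toxicity_score(reply) + 0.5 * novelty_bonus(prompt, seen)
        seen.append(prompt)
        if toxicity_score(reply) > 0:          # flag prompts that elicited unsafe output
            found.append((reward, prompt))
    return sorted(found, reverse=True)


for reward, prompt in red_team()[:3]:
    print(f"{reward:.2f}  {prompt}")
```

The flagged prompts would then be folded back into training so the chatbot learns to refuse them, which is the role the human-written red-team prompts play in the process described above.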