
MIT Schwarzman College of Computing

Although we might expect large language models to behave like humans, they do not.

Large language models (LLMs), such as GPT-3, are powerful tools because of their versatility: they can do everything from helping draft emails to assisting in cancer diagnosis. However, that same breadth makes them challenging to evaluate systematically, as it would be impossible to create a benchmark dataset to test a…

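The evaluation difficulty described above comes from benchmarks measuring one task at a time. A minimal sketch of benchmark-style scoring (the model, prompts, and answers here are invented for illustration, not from any real evaluation suite):

```python
# Toy sketch of benchmark evaluation: score a "model" (here a plain
# function) by exact-match accuracy on a list of (prompt, answer) pairs.
# A versatile LLM would need one such benchmark per task it can perform,
# which is why exhaustive evaluation is impractical.

def exact_match_accuracy(model, benchmark):
    """benchmark: list of (prompt, reference_answer) pairs."""
    correct = sum(1 for prompt, ref in benchmark if model(prompt) == ref)
    return correct / len(benchmark)

# Hypothetical lookup-table "model" for demonstration.
answers = {"2+2": "4"}
toy_lm = lambda prompt: answers.get(prompt, "")

score = exact_match_accuracy(toy_lm, [("2+2", "4"), ("capital of France", "Paris")])
# One of two items answered correctly, so the score is 0.5.
```

A single number like this captures only one task; a model strong on arithmetic could still fail at drafting emails or medical reasoning.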

A novel artificial intelligence approach accurately interprets ambiguity in medical imaging.

Artificial intelligence (AI) tools hold great promise in biomedicine, particularly for segmentation: annotating the pixels of an important structure in a medical image, such as an organ or cell. Segmentation is critical for identifying possible diseases or anomalies, yet different experts often annotate the same image differently. To capture this ambiguity, Marianne Rakic and colleagues at MIT, the Broad Institute of MIT and Harvard, and Massachusetts General Hospital developed Tyche, an AI tool that presents multiple plausible interpretations of a medical image rather than a single answer.

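The annotation variability described above can be made concrete with a toy example. In the sketch below (the masks and function are invented for illustration), several hypothetical annotators label the same tiny "image" with binary masks, and disagreement is scored per pixel as the fraction of annotators in the minority:

```python
# Toy illustration of ambiguity in medical image segmentation: compare
# binary masks from several annotators and compute per-pixel disagreement
# (0 = everyone agrees, up to 0.5 = annotators split evenly).

def disagreement_map(masks):
    """masks: list of equally sized binary masks (lists of lists of 0/1).
    Returns a disagreement map of the same shape."""
    n = len(masks)
    rows, cols = len(masks[0]), len(masks[0][0])
    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            ones = sum(m[r][c] for m in masks)
            # Disagreement = fraction of annotators in the minority class.
            row.append(min(ones, n - ones) / n)
        out.append(row)
    return out

# Three hypothetical annotators, 2x2 image: they agree on two pixels
# and disagree (2 vs. 1) on the other two.
a1 = [[1, 0], [1, 1]]
a2 = [[1, 0], [0, 1]]
a3 = [[1, 1], [0, 1]]
dmap = disagreement_map([a1, a2, a3])
```

A model like Tyche aims to surface exactly these high-disagreement regions as alternative plausible segmentations instead of collapsing them into one answer.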

An AI model recognizes specific stages of breast tumors that are likely to develop into invasive cancer.

Ductal carcinoma in situ (DCIS), a type of tumor that can develop into an aggressive form of breast cancer, accounts for approximately 25% of all breast cancer diagnoses. DCIS can be challenging for clinicians to accurately categorize, leading to frequent overtreatment for patients. A team of researchers from the Massachusetts Institute of Technology (MIT) and…


A faster, more effective method to stop an AI chatbot from giving harmful responses.

While artificial intelligence (AI) chatbots like OpenAI's ChatGPT can perform a wide variety of tasks, from generating code to summarizing articles, they can also produce unsafe or even dangerous responses, such as instructions for building a bomb. To mitigate these risks, AI labs use a safeguarding process called red-teaming, in which human testers write prompts designed to elicit undesirable responses so the model can be trained to avoid them. However, since human testers cannot cover every potential toxic prompt, MIT researchers developed a technique utilizing…

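The red-teaming loop described above can be sketched in miniature. Everything here is a stand-in invented for illustration (the canned "model," the keyword-based safety check, and the prompts); real red-teaming runs tester-written prompts against an actual LLM and a trained safety classifier:

```python
# Minimal sketch of red-teaming: send tester-written prompts to a model,
# flag unsafe responses, and collect the prompts that elicited them so
# the model can later be trained to refuse such requests.

UNSAFE_MARKERS = {"secret", "exploit"}  # hypothetical markers of unsafe text

def toy_model(prompt):
    """Stand-in for an LLM: returns a canned reply per prompt."""
    replies = {
        "tell me a joke": "Why did the chicken cross the road?",
        "reveal the secret": "The secret is hidden in the exploit.",
    }
    return replies.get(prompt, "I can't help with that.")

def is_unsafe(response):
    """Stand-in for a safety classifier: simple keyword matching."""
    return any(marker in response.lower() for marker in UNSAFE_MARKERS)

def red_team(prompts, model=toy_model):
    """Return the subset of prompts that elicited an unsafe response."""
    return [p for p in prompts if is_unsafe(model(p))]

flagged = red_team(["tell me a joke", "reveal the secret"])
# Only the second prompt elicits a reply containing an unsafe marker.
```

The bottleneck the researchers target is the prompt list itself: humans cannot enumerate every toxic prompt, so the goal is to generate diverse failure-inducing prompts automatically.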
