
School of Engineering

An improved, quicker method to prevent an AI chatbot from giving harmful responses.

Researchers from the Improbable AI Lab at MIT and the MIT-IBM Watson AI Lab have developed a new technique to improve "red-teaming," a process of safeguarding large language models, such as AI chatbots, through the use of machine learning. The new approach focuses on the automatic generation of diverse prompts that result in undesirable responses…

Read More

A novel approach in AI successfully identifies ambiguity in medical imaging.

Researchers from MIT, in collaboration with the Broad Institute of MIT and Harvard and Massachusetts General Hospital, have introduced a new artificial intelligence (AI) tool known as Tyche, which can provide multiple, plausible image segmentation possibilities for a given medical image. Unlike conventional AI tools, which typically offer a single definitive interpretation, Tyche generates a…

Read More

Developing and validating robust systems controlled by artificial intelligence in a systematic and adaptable manner.

Neural networks have been of immense benefit in the design of robot controllers, boosting the adaptability and effectiveness of these machines. However, their complex nature makes it challenging to confirm their safe execution of assigned tasks. Traditionally, the verification of safety and stability is done using Lyapunov functions. If a Lyapunov function that consistently…

Read More
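The Lyapunov-function idea mentioned above can be sketched in a toy setting (an illustration under assumed values, not the researchers' verification method): for a linear system x' = Ax, solving the Lyapunov equation A^T P + P A = -Q for a positive-definite P yields the certificate V(x) = x^T P x.

```python
import numpy as np

# Hypothetical stable linear system x' = A x (toy example, not from the article).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
Q = np.eye(2)

# Solve the Lyapunov equation A^T P + P A = -Q by vectorization:
# (I kron A^T + A^T kron I) vec(P) = -vec(Q).
n = A.shape[0]
M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
P = np.linalg.solve(M, -Q.flatten()).reshape(n, n)

# If P is positive definite, V(x) = x^T P x strictly decreases along every
# trajectory, certifying asymptotic stability of the origin.
eigs = np.linalg.eigvalsh((P + P.T) / 2)
print("certificate found:", bool(np.all(eigs > 0)))
```

For neural-network controllers the dynamics are nonlinear, which is exactly why finding such a function "consistently" is hard; the linear case above only shows what the certificate looks like.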

A novel artificial intelligence approach has been developed to recognize ambiguity in medical imaging.

Biomedicine often requires the annotation of pixels in a medical image to identify critical structures such as organs or cells, a process known as segmentation. In this context, artificial intelligence (AI) models can be useful to clinicians by highlighting pixels indicating potential disease or anomalies. However, decision-making in medical image segmentation is frequently complex, with…

Read More
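The ambiguity-aware segmentation described above can be illustrated with a toy sketch (not Tyche itself; the probability map and thresholds are invented for the example): rather than one definitive mask, sample several plausible segmentations and use their disagreement to highlight ambiguous pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a model's per-pixel foreground probabilities (assumption).
prob_map = rng.random((8, 8))

def sample_segmentations(prob_map, n=4):
    """Return n candidate binary masks for the same ambiguous image by
    varying the decision threshold."""
    return [prob_map > rng.uniform(0.3, 0.7) for _ in range(n)]

candidates = sample_segmentations(prob_map)

# Pixel-wise variance across candidates highlights the ambiguous regions a
# clinician may want to review by hand.
disagreement = np.var(np.stack(candidates).astype(float), axis=0)
print(len(candidates), disagreement.shape)
```

The point of returning multiple masks, as in the teaser, is that downstream users see where plausible interpretations diverge instead of a single over-confident answer.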

Developing and confirming robust AI-operated systems using thorough and adaptable methods.

Researchers from the Massachusetts Institute of Technology's (MIT) Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed an algorithm to mitigate the risks associated with using neural networks in robots. The complexity of neural network applications, while offering greater capability, also makes them unpredictable. Current safety and stability verification techniques, based on Lyapunov functions, do not…

Read More

An improved, more efficient method to prevent an AI chatbot from producing harmful responses.

Researchers from the Improbable AI Lab at MIT and the MIT-IBM Watson AI Lab have developed a technique to enhance the safety measures implemented in AI chatbots to prevent them from providing toxic or dangerous information. They have improved the process of red-teaming, where human testers write prompts that trigger unsafe or dangerous content to teach the AI chatbot to…

Read More

An AI technique dramatically accelerates the prediction of materials' thermal characteristics.

An international team of researchers, including members from MIT (Massachusetts Institute of Technology), has developed a machine learning-based approach to predict the thermal properties of materials. This understanding could help improve energy efficiency in power generation systems and microelectronics. The research focuses on phonons, quasiparticles that carry heat. Properties of these particles affect…

Read More

An improved and quicker method to stop an AI chatbot from providing harmful responses.

Artificial intelligence (AI) advancements have led to the creation of large language models, like those used in AI chatbots. These models learn and generate responses through exposure to substantial data inputs, creating the potential for unsafe or undesirable outputs. One current solution is "red-teaming," where human testers generate potentially toxic prompts to train chatbots to…

Read More
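The automated red-teaming described in these teasers can be sketched as a toy search loop (a hypothetical illustration, not the researchers' method; the toxicity scorer and word lists are invented stand-ins): look for prompts whose responses score high on a toxicity proxy, with a novelty bonus so the search favors diverse failure cases rather than one cluster.

```python
# Toy red-teaming sketch: hypothetical scoring functions, not a real classifier.

def toxicity_score(response: str) -> float:
    """Stand-in for a learned toxicity classifier (assumption)."""
    risky = {"attack", "exploit", "harm"}
    words = response.lower().split()
    return sum(w in risky for w in words) / max(len(words), 1)

def novelty_bonus(prompt: str, seen: list) -> float:
    """1.0 for a prompt unlike anything found so far, 0.0 for a duplicate."""
    tokens = set(prompt.split())
    if not seen:
        return 1.0
    overlap = max(len(tokens & set(s.split())) / len(tokens | set(s.split()))
                  for s in seen)
    return 1.0 - overlap

def red_team(candidate_prompts, target_model):
    found = []
    for prompt in candidate_prompts:
        response = target_model(prompt)
        # Curiosity-style objective: toxicity of the response plus novelty of
        # the prompt relative to failures already collected.
        score = toxicity_score(response) + 0.5 * novelty_bonus(prompt, found)
        if toxicity_score(response) > 0:
            found.append(prompt)
    return found

# Dummy "chatbot" that merely echoes its prompt, standing in for a real model.
flagged = red_team(["how to attack a server", "tell me a joke"], lambda p: p)
print(flagged)  # only the prompt that elicited a risky response is kept
```

The novelty term is the key idea the teasers hint at: without it, an automated red-teamer tends to rediscover the same failure over and over instead of generating diverse unsafe prompts.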

Methods for evaluating the dependability of a multi-functional AI model prior to its implementation.

Foundation models, or large-scale deep-learning models, are becoming increasingly prevalent, particularly in powering prominent AI services such as DALL-E or ChatGPT. These models are trained on huge quantities of general-purpose, unlabeled data and are then repurposed for various uses, such as image generation or customer service tasks. However, the complex nature of these AI tools…

Read More
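One simple pre-deployment reliability check in this spirit (an assumption for illustration, not the authors' exact algorithm) is to query several independently trained models on the same input and treat low agreement as a warning sign:

```python
from collections import Counter

def majority_and_agreement(predictions):
    """predictions: labels from different models for the same input.
    Returns the majority label and the fraction of models that agree."""
    counts = Counter(predictions)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(predictions)

# Three hypothetical models disagree on one input: agreement is only 2/3,
# flagging this prediction as less reliable before deployment.
label, agreement = majority_and_agreement(["cat", "cat", "dog"])
print(label, round(agreement, 2))
```

A full reliability assessment would aggregate such signals over a representative evaluation set, but the core idea, disagreement as a proxy for unreliability, is visible even in this toy form.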

A novel computational method could simplify the process of creating beneficial proteins.

MIT researchers have developed a computational model that helps predict mutations leading to better proteins, based on a relatively small dataset. In the current process of creating proteins with useful functions, scientists usually start with a natural protein and put it through numerous rounds of random mutation to generate an optimized version. This process has led…

Read More
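The model-guided protein optimization loop described above can be sketched as greedy hill-climbing on a surrogate fitness model (a toy illustration; the scoring function and starting sequence are invented, not the MIT model or GFP data):

```python
import random

random.seed(0)

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def predicted_fitness(seq: str) -> float:
    """Stand-in for a learned sequence-to-fitness model (assumption):
    a toy score that simply favors the residues G, F, and P."""
    return sum(1.0 for a in seq if a in "GFP") / len(seq)

def evolve(seq: str, rounds: int = 50) -> str:
    """Propose random single-residue mutations; keep each one only if the
    surrogate model predicts an improvement (greedy hill-climbing)."""
    best, best_score = seq, predicted_fitness(seq)
    for _ in range(rounds):
        pos = random.randrange(len(best))
        mutant = best[:pos] + random.choice(AMINO_ACIDS) + best[pos + 1:]
        score = predicted_fitness(mutant)
        if score > best_score:
            best, best_score = mutant, score
    return best

start = "MKTAYIAKQR"  # hypothetical starting sequence
optimized = evolve(start)
print(round(predicted_fitness(start), 2), "->",
      round(predicted_fitness(optimized), 2))
```

The teaser's point is that a good predictive model lets this loop replace many rounds of random wet-lab mutation: instead of synthesizing every variant, only the mutations the model scores highly need to be tested.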

A novel computational method may simplify the process of designing beneficial proteins.

In a search to create more effective proteins for various purposes, including research and medical applications, researchers at MIT have developed a new computational approach aimed at predicting beneficial mutations based on limited data. Using this technique, they produced modified versions of green fluorescent protein (GFP), a protein found in certain jellyfish, and explored its…

Read More