Researchers from the Improbable AI Lab at MIT and the MIT-IBM Watson AI Lab have developed a technique to strengthen the safety measures built into AI chatbots, preventing them from providing toxic or dangerous information. They have improved the process of red-teaming, in which human testers write prompts designed to elicit unsafe or toxic responses, teaching the AI chatbot to…
An international team of researchers, including members from MIT (Massachusetts Institute of Technology), has developed a machine learning-based approach to predict the thermal properties of materials. This understanding could help improve energy efficiency in power generation systems and microelectronics.
The research focuses on phonons, the quasiparticles that carry heat through a material. The properties of these particles affect…
The importance of speed and efficiency in computer graphics and simulation cannot be overstated. Yet developing high-performance simulations that run seamlessly across varied hardware configurations remains a complex task demanding precision. Traditional methods may not fully exploit the potential of modern graphics processing units (GPUs), thereby limiting performance, especially for real-time or…
The landscape of artificial intelligence (AI) is evolving at a rapid pace, with significant changes expected to transform how humans interact with technology. Many in the industry predict that the traditional front-end application or interface we use today may soon become obsolete, given the advanced capabilities of large language models (LLMs) and emerging AI agents.
LLMs,…
Spreadsheet analysis is crucial for managing and interpreting data in the large two-dimensional grids used in tools like MS Excel and Google Sheets. However, these large, complex grids often exceed the token limits of large language models (LLMs), making it difficult to process them and extract meaningful information. Traditional methods struggle with the size and complexity…
In AI research, efficiently managing long contextual inputs in Retrieval-Augmented Generation (RAG) models is a central challenge. Current techniques such as context compression have notable limitations, particularly in how they handle multiple context documents, a pressing issue in many real-world scenarios.
Addressing this challenge effectively, researchers from the University of Amsterdam, The University of…
Deep Visual Proteomics (DVP) is a groundbreaking approach for analyzing cellular phenotypes, developed using Biology Image Analysis Software (BIAS). It combines advanced microscopy, artificial intelligence, and ultra-sensitive mass spectrometry, considerably expanding the ability to conduct comprehensive proteomic analyses within the native spatial context of cells. The DVP method involves high-resolution imaging for single-cell phenotyping, artificial…
Deep Visual Proteomics (DVP) is a groundbreaking method that combines high-end microscopy, AI, and ultra-sensitive mass spectrometry for comprehensive proteomic analysis within the native spatial context of cells. By utilizing AI to identify different cell types, this technology allows an in-depth study of individual cells, increasing the precision and effectiveness of cellular phenotyping.
The DVP workflow…
Advances in artificial intelligence (AI) have led to the creation of large language models, like those behind AI chatbots. These models learn and generate responses through exposure to vast amounts of data, which creates the potential for unsafe or undesirable outputs. One current safeguard is "red-teaming," in which human testers generate potentially toxic prompts to train chatbots to…