Drugs taken orally must pass through the digestive tract, aided by transporter proteins found in the lining of the tract. If two drugs use the same transporter, they can interfere with each other. To address this issue, a team of researchers from MIT, Brigham and Women’s Hospital, and Duke University has developed a strategy to identify…
In 2010, MIT Media Lab students Karthik Dinakar and Birago Jones initiated a project to develop a tool that could assist content moderation teams at firms such as Twitter and YouTube. They were invited to showcase their innovation at a White House summit on cyberbullying; however, the demo failed to identify problematic posts because of…
Natural Language Processing (NLP) has undergone a dramatic transformation in recent years, largely driven by transformer-based language models. The emergence of Retrieval-Augmented Generation (RAG) is one of the most significant advances in this field. RAG integrates retrieval systems with generative models, resulting in versatile, efficient, and accurate language models. However, before delving…
Researchers from the University of Minnesota have developed a new method to strengthen the performance of large language models (LLMs) in knowledge graph question-answering (KGQA) tasks. The new approach, GNN-RAG, incorporates Graph Neural Networks (GNNs) to enable retrieval-augmented generation (RAG), which enhances the LLMs' ability to answer questions accurately.
LLMs have notable natural language understanding capabilities,…
Scale AI's Safety, Evaluations, and Alignment Lab (SEAL) has unveiled SEAL Leaderboards, a ranking system designed to rigorously evaluate the growing number of large language models (LLMs) that are becoming increasingly significant in AI development. Conceived to offer fair, systematic evaluations of AI models, the leaderboards will highlight disparities and compare performance levels…
Researchers in the field of Artificial Intelligence (AI) have made considerable advances in the development and application of large language models (LLMs). These models are capable of understanding and generating human language, and hold the potential to transform how we interact with machines and handle information-processing tasks. However, one persistent challenge is their performance in…
Large Language Models (LLMs) are known for their ability to carry out multiple tasks and perform exceptionally well across diverse applications. However, their ability to produce accurate information is limited, particularly when the relevant knowledge is underrepresented in their training data. To tackle this issue, a technique known as retrieval augmentation was devised, combining information retrieval…
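The core idea behind retrieval augmentation can be shown in a minimal sketch: fetch relevant documents for a query and prepend them to the prompt, so generation is grounded in retrieved text rather than parametric knowledge alone. The corpus, overlap-based scoring, and prompt format below are illustrative placeholders, not any specific system described here.

```python
# Toy retrieval-augmentation sketch: keyword-overlap retrieval plus
# prompt construction. Real systems use dense embeddings and an LLM,
# but the data flow is the same: retrieve, then condition generation.

corpus = {
    "doc1": "The capital of Australia is Canberra.",
    "doc2": "Python was created by Guido van Rossum.",
}

def retrieve(query, corpus, k=1):
    """Rank documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def build_prompt(query, corpus):
    """Prepend retrieved context so the generator can draw on it."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

prompt = build_prompt("What is the capital of Australia?", corpus)
```

In a full pipeline, `prompt` would be passed to a generative model; the retrieval step is what supplies knowledge missing from (or underrepresented in) the model's training data.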
Striking the right balance between scaling the training data and scaling the model parameters within a given computational budget is essential for optimizing neural networks. Scaling laws guide this allocation. Past research has identified scaling parameter count and training token count in a 1-to-1 ratio as the most effective approach…
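The 1-to-1 scaling idea can be made concrete with a small worked sketch, assuming the common approximation that training compute C ≈ 6·N·D (N parameters, D tokens) and a fixed tokens-per-parameter ratio; the specific numbers are hypothetical.

```python
# Sketch of compute-optimal allocation under 1-to-1 scaling.
# Assumes the standard approximation C ~= 6 * N * D for training FLOPs,
# where N = parameter count and D = training tokens. The tokens-per-
# parameter ratio is a hypothetical constant, not a result from the text.

def compute_optimal_allocation(flops_budget, tokens_per_param=20.0):
    """Split a FLOPs budget between parameters N and tokens D.

    With C = 6*N*D and a fixed ratio D = r*N, solving gives
    N = sqrt(C / (6*r)) and D = r*N. This is the 1-to-1 regime:
    multiplying compute by 4 scales N and D by the same factor (2x),
    keeping D/N constant.
    """
    r = tokens_per_param
    n_params = (flops_budget / (6.0 * r)) ** 0.5
    n_tokens = r * n_params
    return n_params, n_tokens

n1, d1 = compute_optimal_allocation(1e21)
n2, d2 = compute_optimal_allocation(4e21)  # 4x the compute budget
ratio_n, ratio_d = n2 / n1, d2 / d1       # both equal 2.0
```

The point of the sketch is the symmetry: under this rule, extra compute is never spent on parameters alone or data alone, but on both in proportion.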
MIT researchers have developed an anti-tampering ID tag that provides improved security compared to traditional radio frequency ID (RFID) tags that are commonly used for authentication.
The new tag, which is smaller, cheaper, and more secure than RFIDs, uses terahertz (THz) waves for authentication. However, like traditional RFIDs, it faced a vulnerability where counterfeiters could…
Researchers from MIT, Brigham and Women’s Hospital, and Duke University have developed a strategy to understand how orally ingested drugs exit the digestive tract. The process relies on transporter proteins found in the lining of the digestive tract. Identifying the specific transporters used by various drugs can help avoid potential complications when two drugs using…
In 2010, Karthik Dinakar and Birago Jones began a project while at MIT's Media Lab, aiming to build a tool to aid content moderation teams at companies like Twitter and YouTube. The tool was meant to identify concerning posts, but it struggled to interpret the slang and indirect language commonly used by posters. This led…
Large Language Models (LLMs) often exhibit judgment and decision-making patterns that resemble those of humans, making them attractive candidates for studying human cognition. They not only emulate rational norms such as risk and loss aversion, but also display human-like errors and biases, particularly in probability judgments and arithmetic operations. Despite this promise, challenges…