In a bid to tackle the problem of item counterfeiting, researchers at MIT have taken a significant step forward in developing a microscopic, inexpensive, and secure cryptographic ID tag. This tiny tag, which uses terahertz waves and is notably smaller, cheaper, and more secure than conventional radio frequency identification (RFID) tags, was initially found to have…
Researchers from MIT, Duke University, and Brigham and Women’s Hospital have designed an innovative strategy to identify the specific transporters that different drugs use. The study could improve patient treatment, as it uncovered that certain common drugs can interfere with each other if they rely on the same transporter. The process is based upon…
In 2010, Karthik Dinakar and Birago Jones, students at the Media Lab, teamed up to create a tool to aid content moderation teams at companies including Twitter and YouTube. The project generated widespread interest, and soon they were invited to the White House to demonstrate their technology for identifying concerning posts on these…
Pipecat: An Open-Source Framework for Voice and Multimodal Conversational AI
Pipecat is a framework designed to streamline the construction of voice and multimodal conversational agents. These applications range from personal coaching systems and meeting assistants to children's storytelling toys, customer support bots, and social companions. A standout feature of Pipecat is that it lets developers start projects on a small scale on…
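To give a feel for the pipeline-of-processors pattern such a framework is built around, here is a minimal Python sketch. The class and method names (Frame, FrameProcessor, Pipeline, and the placeholder stages) are invented for illustration and are not Pipecat's actual API.

```python
# Minimal, hypothetical sketch of a frame-processor pipeline for a voice agent.
# Names are illustrative only, not Pipecat's real API.
from dataclasses import dataclass
from typing import List


@dataclass
class Frame:
    """A unit of data flowing through the pipeline (e.g., audio or text)."""
    kind: str      # "audio", "transcript", "reply", ...
    payload: str


class FrameProcessor:
    """Base class: each stage transforms a frame and passes it downstream."""
    def process(self, frame: Frame) -> Frame:
        return frame


class SpeechToText(FrameProcessor):
    def process(self, frame: Frame) -> Frame:
        # Placeholder: a real agent would call a speech-to-text service here.
        return Frame(kind="transcript", payload=f"[transcript of {frame.payload}]")


class LLMResponder(FrameProcessor):
    def process(self, frame: Frame) -> Frame:
        # Placeholder: a real agent would call a language model here.
        return Frame(kind="reply", payload=f"Echoing: {frame.payload}")


class Pipeline:
    """Runs frames through an ordered list of processors."""
    def __init__(self, stages: List[FrameProcessor]):
        self.stages = stages

    def run(self, frame: Frame) -> Frame:
        for stage in self.stages:
            frame = stage.process(frame)
        return frame


if __name__ == "__main__":
    agent = Pipeline([SpeechToText(), LLMResponder()])
    print(agent.run(Frame(kind="audio", payload="user_utterance.wav")).payload)
```

The appeal of this structure is that stages can be swapped or added (transport, transcription, LLM, text-to-speech) without changing the rest of the agent.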
The worldwide wearables industry is projected to grow at a compound annual growth rate (CAGR) of 18% through 2026, with particular advances in health monitoring, fitness tracking, and virtual assistant capabilities. Artificial intelligence (AI) appears likely to enhance the functionality and performance of wearables in the future, with the caveat…
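For readers unfamiliar with the metric, the short sketch below shows how an 18% CAGR compounds over a few years. The starting market size is a made-up figure for illustration, not a number from the projection.

```python
# Hypothetical illustration of compounding at an 18% CAGR.
start_value = 100.0   # illustrative base-year market size (e.g., USD billions)
cagr = 0.18           # 18% compound annual growth rate

for year in range(1, 5):
    projected = start_value * (1 + cagr) ** year
    print(f"Year {year}: {projected:.1f}")
# Prints 118.0, 139.2, 164.3, 193.9 for years 1 through 4.
```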
Language models (LMs) are key components of artificial intelligence, as they enable the understanding and generation of human language. In recent years, there has been significant emphasis on scaling up these models to perform more complex tasks. However, a common challenge stands in the way: understanding how a language model's performance…
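The relationship such work tries to pin down is often modeled as a power law between model size and loss. The sketch below fits one such curve to invented data points; the functional form is a common assumption in the scaling-law literature, not necessarily the formulation used in this study.

```python
# Fit a hypothetical power-law scaling curve: loss(N) ~ a * N**(-alpha) + c.
# The (parameter count, loss) observations are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(n_params, a, alpha, c):
    return a * n_params ** (-alpha) + c

n = np.array([1e8, 3e8, 1e9, 3e9, 1e10])       # parameter counts
loss = np.array([3.9, 3.5, 3.2, 3.0, 2.85])    # validation losses

(a, alpha, c), _ = curve_fit(scaling_law, n, loss, p0=[300.0, 0.3, 2.5])
print(f"loss(N) ~ {a:.1f} * N^(-{alpha:.3f}) + {c:.2f}")
print(f"predicted loss at 1e11 params: {scaling_law(1e11, a, alpha, c):.2f}")
```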
Large language models (LLMs) such as GPT-4 excel at language comprehension; however, they struggle with high GPU memory usage during inference. This is a significant limitation for real-time applications, such as chatbots, because of scalability issues. To address this, existing methods reduce memory by compressing the KV cache, a major memory consumer…
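To see why the KV cache dominates inference memory, the back-of-the-envelope sketch below estimates its size for a hypothetical decoder-only transformer with standard multi-head attention stored in fp16. The layer count, hidden size, context length, and batch size are illustrative, not GPT-4's actual configuration.

```python
# Rough KV cache size estimate for a hypothetical decoder-only transformer.
# Per token, each layer stores one key and one value vector of size hidden_dim.
def kv_cache_bytes(num_layers, hidden_dim, seq_len, batch_size, bytes_per_elem=2):
    # Factor of 2 accounts for keys and values; bytes_per_elem=2 assumes fp16/bf16.
    return 2 * num_layers * hidden_dim * seq_len * batch_size * bytes_per_elem

# Illustrative numbers: a 40-layer model with hidden size 5120 (roughly 13B-class),
# serving a batch of 8 requests at 4096-token context.
size = kv_cache_bytes(num_layers=40, hidden_dim=5120, seq_len=4096, batch_size=8)
print(f"KV cache: {size / 2**30:.1f} GiB")  # 25.0 GiB before any compression
```

Because this footprint grows linearly with both batch size and context length, compressing the KV cache is a natural target for serving real-time workloads.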
As AI models become increasingly central to computing functionality and user experience, the challenge lies in integrating them into smaller devices, such as personal computers, without heavy resource consumption. Microsoft has addressed this challenge with the introduction of Phi Silica, a small language model (SLM) designed to work with the Neural Processing…
The alarming rise in the use of machine learning (ML) models in high-stakes societal applications has brought growing concerns about their fairness and transparency. Instances of biased decision-making have increased distrust among consumers who are subject to decisions based on these models. The demand for technology that allows public verification of fairness…
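One concrete form such verification can take is checking a model's decisions against a group-fairness metric. The sketch below computes a demographic parity difference on made-up decisions; the metric choice and data are illustrative and are not drawn from the article.

```python
# Hypothetical demographic parity check on a model's binary decisions.
# Decisions and group labels are invented for illustration.
import numpy as np

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-decision rates between two groups."""
    preds = np.asarray(predictions)
    groups = np.asarray(groups)
    rate_a = preds[groups == "A"].mean()
    rate_b = preds[groups == "B"].mean()
    return abs(rate_a - rate_b)

# 1 = approved, 0 = denied, with each applicant's group membership.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
membership = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(decisions, membership)
print(f"Demographic parity difference: {gap:.2f}")  # 0.20 for this toy data
```

Public verification schemes go further than this, aiming to let outside parties confirm such properties without full access to the model or its training data.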