Despite their advances in many language-processing tasks, large language models (LLMs) still struggle with complex mathematical reasoning. Current methods have difficulty decomposing tasks into manageable steps and often lack useful feedback from external tools that could support a thorough analysis. While existing methods perform well on simpler problems, they generally…
Microsoft researchers have recently introduced RUBICON, a new technique for evaluating conversational AI assistants. RUBICON is designed to assess domain-specific human-AI conversations by generating and scoring candidate rubrics. Tested on 100 conversations between developers and a chat-based assistant built for C# debugging, RUBICON outperformed alternative rubric sets, demonstrating its high…
Document Understanding (DU) involves the automatic interpretation and processing of documents containing diverse data such as text, tables, charts, and images. It plays a critical role in extracting and using the vast amounts of information produced each year across an enormous volume of documents. A significant challenge, however, lies in understanding long-context documents spanning…
Large Language Models (LLMs) and their multi-modal counterparts (MLLMs), both crucial to advancing artificial general intelligence (AGI), face difficulties with visual mathematical problems, especially those involving geometric figures and spatial relationships. While progress has been made through techniques for vision-language integration and text-based mathematical problem-solving, the multi-modal mathematical domain remains comparatively underdeveloped.
A…
Researchers from the University of Maryland, Tsinghua University, University of California, Shanghai Qi Zhi Institute, and Shanghai AI Lab have developed Make-An-Agent, a novel method for generating policies with conditional diffusion models. The method aims to improve on traditional policy learning, which uses sampled trajectories from a replay buffer or behavior demonstrations to learn…
Large Language Models (LLMs) such as ChatGPT are transforming educational practice by offering new ways of learning and teaching. These advanced models generate human-like text, reshaping the interaction between educators, students, and information. However, while they enhance learning efficiency and creativity, LLMs also raise ethical concerns about trust and overdependence on technology.
The…
The world of machine learning has long been grounded in Euclidean geometry, where data resides in flat spaces characterized by straight lines. Traditional machine learning methods, however, fall short on non-Euclidean data, which is common in fields such as neuroscience, computer vision, and advanced physics. This paper highlights these shortcomings and emphasizes the need…
Assessing the effectiveness of Large Language Model (LLM) compression techniques is a vital challenge in AI. Traditional compression methods such as quantization aim to improve LLM efficiency by reducing computational overhead and latency. However, the conventional accuracy metrics used in evaluations often overlook subtle changes in model behavior, including the occurrence of "flips", where right answers…
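The idea behind a "flip" can be illustrated with a minimal sketch (not the researchers' actual code; the function name and toy data are hypothetical): an example flips when its correctness changes between the baseline model and its compressed variant, in either direction, so two models with identical accuracy can still behave very differently.

```python
# Hedged sketch: counting "flips" between a baseline model and a compressed
# variant. A flip is an example whose correctness changes after compression,
# in either direction. All names and data here are illustrative.

def flip_rate(baseline_correct, compressed_correct):
    """Fraction of examples whose correctness flips after compression."""
    assert len(baseline_correct) == len(compressed_correct)
    flips = sum(b != c for b, c in zip(baseline_correct, compressed_correct))
    return flips / len(baseline_correct)

# Toy example: both models score 3/5 (identical accuracy), yet 2 of 5
# answers flip, a behavioral change that accuracy alone would hide.
baseline   = [True, True, True, False, False]
compressed = [True, False, True, True, False]

print(flip_rate(baseline, compressed))  # → 0.4
```

Because flips count both gained and lost answers, the metric surfaces behavioral drift even when aggregate accuracy is unchanged.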
Large language models (LLMs) have the potential to revolutionize human-computer interaction but struggle with complex reasoning tasks, prompting the need for a more streamlined and powerful approach. Current LLM-based agents perform well in straightforward scenarios yet falter in complex situations, underscoring the need to improve these agents so they can tackle an array of intricate problems.
Researchers from Baichuan…
Sign language research aims to improve technology for understanding and interpreting the sign languages used by Deaf and hard-of-hearing communities worldwide. This involves creating extensive datasets, developing innovative machine-learning models, and refining translation and identification tools for numerous applications. However, because sign languages lack a standardized written form, there is a…
Large language models (LLMs) like GPT-3 and Llama-2, encompassing billions of parameters, have dramatically advanced our ability to understand and generate human language. However, the considerable computational resources required to train and deploy these models present a significant challenge, especially in resource-limited settings. The primary issue with deploying LLMs is their enormity,…
Spatiotemporal prediction, a major research focus in computer vision and artificial intelligence, has broad applications in areas such as weather forecasting, robotics, and autonomous vehicles. It uses past and present data to build models that predict future states. However, the lack of standardized frameworks for comparing different network architectures has posed a significant challenge…