Vision-Language Models (VLMs) deliver state-of-the-art performance across a spectrum of vision-language tasks, including captioning, object localization, commonsense reasoning, and vision-based coding. Recent studies, including one undertaken by Apple, show that these models excel at extracting text from images and interpreting visual data such as tables and charts. However, when tested on complex tasks…
Large language models (LLMs) like GPT-3 have proven to be powerful tools for solving a wide range of problems, but their capacity for complex mathematical reasoning remains limited. This limitation stems partly from the scarcity of extensive math-related problem sets in the training data. Consequently, techniques like Instruction Tuning, which is designed to enhance the…
When developing machine learning (ML) models with pre-existing datasets, practitioners need to understand the data, interpret its structure, and decide which subsets to use as features. The sheer range of data formats, spanning text, structured data, images, audio, and video, poses a barrier to ML advancement. Even within…
The progress of large language models (LLMs) has been remarkable, with strategies like Chain-of-Thought and Tree-of-Thoughts augmenting their reasoning capabilities. These advances make complex behaviors accessible through simple instruction prompting, while Reinforcement Learning from Human Feedback (RLHF) aligns LLM behavior more closely with human preferences, further underscoring the pace of progress.
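To make the idea of instruction prompting concrete, here is a minimal sketch of a Chain-of-Thought prompt. The exemplar, the helper name, and the sample questions are illustrative assumptions rather than material from any specific paper or API, and the model call itself is deliberately omitted.

```python
# Minimal sketch of Chain-of-Thought (CoT) instruction prompting.
# The exemplar and helper below are hypothetical; no model API is assumed.

FEW_SHOT_EXEMPLAR = (
    "Q: A farmer has 3 pens with 4 sheep each, then buys 5 more sheep. "
    "How many sheep does he have?\n"
    "A: 3 pens x 4 sheep = 12 sheep. 12 + 5 = 17. The answer is 17.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend a worked exemplar so the model imitates step-by-step
    reasoning, then cue that reasoning explicitly in the answer slot."""
    return f"{FEW_SHOT_EXEMPLAR}\nQ: {question}\nA: Let's think step by step."

print(build_cot_prompt("A library has 7 shelves with 9 books each. How many books in total?"))
```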
In…
Researchers at the Korea Advanced Institute of Science and Technology (KAIST) have created a new benchmark, INSTRUCTIR, designed to improve how Large Language Models (LLMs) are fine-tuned. The goal is to enhance these models' responsiveness to individual user preferences and instructions across a variety of retrieval tasks.
Traditionally, retrieval systems have struggled to…
Deep reinforcement learning (RL) relies heavily on value functions, which are typically trained via mean squared error regression to match bootstrapped target values. Yet while cross-entropy classification losses have scaled supervised learning effectively, regression-trained value functions pose scalability challenges in deep RL.
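As a concrete illustration of that contrast, the sketch below computes the two losses side by side: a scalar MSE regression loss against a bootstrapped target, and a cross-entropy loss against the same target recast as a "two-hot" categorical distribution over fixed value bins. The bin support and the two-hot construction are assumptions borrowed from common distributional-RL practice, not any specific paper's exact recipe.

```python
import numpy as np

# Fixed support of value "atoms" over which the classification head predicts.
BINS = np.linspace(-10.0, 10.0, 51)

def mse_loss(predicted_value: float, target_value: float) -> float:
    """Scalar regression: squared error against a bootstrapped target."""
    return (predicted_value - target_value) ** 2

def two_hot(target_value: float) -> np.ndarray:
    """Spread a scalar target over its two neighbouring bins, weighted by
    proximity, so the mean of the categorical target equals the scalar."""
    t = float(np.clip(target_value, BINS[0], BINS[-1]))
    upper = min(max(int(np.searchsorted(BINS, t)), 1), len(BINS) - 1)
    lower = upper - 1
    frac = (t - BINS[lower]) / (BINS[upper] - BINS[lower])
    dist = np.zeros_like(BINS)
    dist[lower], dist[upper] = 1.0 - frac, frac
    return dist

def cross_entropy_loss(logits: np.ndarray, target_value: float) -> float:
    """Classification: cross-entropy against the binned target distribution."""
    logits = logits - logits.max()               # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return float(-(two_hot(target_value) * log_probs).sum())

logits = np.random.default_rng(0).normal(size=len(BINS))  # stand-in network head
print("MSE loss:", mse_loss(predicted_value=2.3, target_value=1.7))
print("CE  loss:", cross_entropy_loss(logits, target_value=1.7))
```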
In classical deep learning, large neural networks excel at handling classification…
Recent research highlights the value of Selective State Space Layers, also known as Mamba models, across domains such as language and image processing, medical imaging, and data analysis. These models are noted for their linear complexity during training and fast inference, which markedly increases throughput and enables efficient handling of long-range dependencies. However, challenges remain in…
Despite remarkable advances in large language models (LLMs) such as ChatGPT, Llama2, Vicuna, and Gemini, these models still struggle with safety: they can generate harmful, incorrect, or biased content. This paper focuses on SafeDecoding, a new safety-aware decoding method that seeks to shield LLMs…
Machine learning, and generative modeling in particular, has recently gained prominence with the advent of diffusion models (DMs). These models have proved instrumental in modeling complex data distributions and generating realistic samples across numerous domains, including images, video, audio, and 3D scenes. Despite their practical benefits, there are gaps in the full…
Training large language models (LLMs) is notoriously memory-intensive. Traditional methods for reducing memory consumption typically compress model weights, which commonly degrades model performance. A new approach, Gradient Low-Rank Projection (GaLore), has been proposed by researchers from several institutions, including the…
SynCode, a versatile framework for generating syntactically correct code in various programming languages, was recently developed by a team of researchers. The framework works seamlessly with different Large Language Model (LLM) decoding algorithms, such as greedy decoding, beam search, and sampling.
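As a rough sketch of what grammar-constrained decoding looks like in general (a toy illustration, not SynCode's actual interface), the snippet below masks out grammar-invalid tokens before each greedy step. The vocabulary, the allowed predicate, and the stand-in next_token_logits function are all hypothetical.

```python
import numpy as np

# Toy vocabulary; a real LLM tokenizer would have tens of thousands of entries.
VOCAB = ["(", ")", "x", "+", "<eos>"]

def allowed(prefix: list[str], token: str) -> bool:
    """Toy grammar check: keep parentheses balanced; end only when balanced."""
    depth = prefix.count("(") - prefix.count(")")
    if token == ")":
        return depth > 0
    if token == "<eos>":
        return depth == 0 and len(prefix) > 0
    return True

def next_token_logits(prefix: list[str]) -> np.ndarray:
    """Stand-in for an LLM forward pass; random scores for the demo."""
    rng = np.random.default_rng(len(prefix))
    return rng.normal(size=len(VOCAB))

def constrained_greedy_decode(max_len: int = 8) -> list[str]:
    prefix: list[str] = []
    for _ in range(max_len):
        logits = next_token_logits(prefix)
        # Mask every token the grammar rules out before taking the argmax.
        for i, tok in enumerate(VOCAB):
            if not allowed(prefix, tok):
                logits[i] = -np.inf
        tok = VOCAB[int(np.argmax(logits))]
        if tok == "<eos>":
            break
        prefix.append(tok)
    return prefix

print("".join(constrained_greedy_decode()))
```

The same masking step slots into beam search or sampling just as easily, since each of those also reduces to choosing among per-step token scores.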
The unique aspect of SynCode is its strategic use of programming language grammar, made possible…
The growth of artificial intelligence, particularly neural networks, has significantly enhanced our capacity for data processing and analysis. Increasing emphasis is being placed on the efficiency of training and deploying deep neural networks, with AI accelerators being developed to handle the training of multibillion-parameter models. However, these…