Large Language Models (LLMs) have shown great potential in natural language processing tasks such as summarization and question answering, using zero-shot and few-shot prompting approaches. However, these prompts are insufficient for enabling LLMs to operate as agents navigating environments to carry out complex, multi-step tasks. One reason for this is the lack of adequate training…
Text, audio, and code sequences depend on positional information to convey meaning, yet Transformer-based large language models (LLMs) do not inherently encode order and would otherwise treat sequences as unordered sets. Positional Encoding (PE) addresses this by assigning a unique vector to each position. This approach is crucial for LLMs to…
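As a concrete illustration, the classic sinusoidal scheme from the original Transformer builds each position's vector from sines and cosines at geometrically spaced frequencies. This is a standard textbook example, not necessarily the encoding scheme the article goes on to discuss:

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """Sinusoidal PE: every position gets a unique d_model-dim vector
    built from sines and cosines at geometrically spaced frequencies,
    so adding it to token embeddings injects order information."""
    positions = np.arange(seq_len)[:, None]           # shape (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]          # shape (1, d_model/2)
    angles = positions / (10000 ** (dims / d_model))  # (seq_len, d_model/2)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)  # even dimensions: sine
    pe[:, 1::2] = np.cos(angles)  # odd dimensions: cosine
    return pe

pe = sinusoidal_positional_encoding(seq_len=128, d_model=64)
# Rows are pairwise distinct, so no two positions share a code.
```

Because the frequencies are fixed rather than learned, the same function extrapolates to positions longer than those seen in training, which is one reason this scheme remains a common baseline.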
Improving the logical reasoning capabilities of Large Language Models (LLMs) is a critical challenge for progress toward Artificial General Intelligence (AGI). Despite the impressive performance of current LLMs on various natural language tasks, their limited logical reasoning ability hinders their use in situations that require deep understanding and structured problem-solving.
The need to overcome…
Large language models (LLMs) continue to advance, yet they struggle to incorporate new knowledge without forgetting previously learned information, a problem termed "catastrophic forgetting." Current methods, such as retrieval-augmented generation (RAG), fall short on tasks that demand integrating new knowledge from multiple passages because they encode each passage in…
Researchers from the University of Minnesota have developed a new method to strengthen the performance of large language models (LLMs) in knowledge graph question-answering (KGQA) tasks. The new approach, GNN-RAG, leverages Graph Neural Networks (GNNs) for retrieval within a retrieval-augmented generation (RAG) framework, enhancing the LLMs' ability to answer questions accurately.
LLMs have notable natural language understanding capabilities,…
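As a rough sketch of the GNN-RAG idea, the snippet below scores knowledge-graph entities with a toy message-passing step (a simple stand-in for a trained GNN), then verbalizes the triples around the top-scored entity as retrieved context for an LLM prompt. The function name, the lexical seeding, and the scoring rule are all illustrative assumptions, not the authors' implementation:

```python
from collections import defaultdict

def gnn_rag_prompt(question, triples, rounds=2):
    """Toy GNN-RAG-style retrieval: score KG entities, verbalize the
    triples around the best one, and build a RAG prompt for an LLM."""
    # Build an undirected adjacency map from (head, relation, tail) triples.
    adj = defaultdict(set)
    for h, _, t in triples:
        adj[h].add(t)
        adj[t].add(h)
    # Seed scores with lexical overlap between entity name and question.
    q_words = set(question.lower().split())
    score = {n: len(q_words & set(n.lower().split())) for n in adj}
    # Simplified message passing: each round, a node adds the mean of its
    # neighbours' scores (a crude stand-in for learned GNN aggregation).
    for _ in range(rounds):
        score = {n: score[n] + sum(score[m] for m in adj[n]) / len(adj[n])
                 for n in adj}
    # Verbalize the triples touching the top-scored entity as context.
    top = max(score, key=score.get)
    facts = [f"{h} {r} {t}." for h, r, t in triples if top in (h, t)]
    return "Facts:\n" + "\n".join(facts) + f"\nQuestion: {question}"

triples = [("Jaden Smith", "child of", "Will Smith"),
           ("Will Smith", "starred in", "Men in Black")]
prompt = gnn_rag_prompt("Who is the father of Jaden Smith", triples)
```

The design point this toy version preserves is that the retriever reasons over graph structure rather than treating each passage independently, which is what lets the LLM receive multi-hop facts as a single grounded context.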
Researchers in the field of Artificial Intelligence (AI) have made considerable advances in the development and application of large language models (LLMs). These models are capable of understanding and generating human language, and hold the potential to transform how we interact with machines and handle information-processing tasks. However, one persistent challenge is their performance in…
Choosing the right balance between scaling the training data and scaling the model parameters within a fixed computational budget is essential for optimizing neural networks. Scaling laws guide this allocation. Past research identified scaling parameter count and training token count in a 1-to-1 ratio as the most effective approach…
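To make the allocation concrete, here is a Chinchilla-style back-of-the-envelope split under the common approximation C ≈ 6·N·D. Both the 6·N·D cost formula and the ~20 tokens-per-parameter ratio are assumptions drawn from the scaling-laws literature, not figures from this article:

```python
import math

def compute_optimal_split(compute_budget_flops, tokens_per_param=20.0):
    """Under C ~= 6 * N * D, the 1-to-1 scaling rule keeps the
    tokens-per-parameter ratio r fixed as compute grows, so:
        C = 6 * N * (r * N)  =>  N = sqrt(C / (6 r)),  D = r * N.
    The default r ~= 20 is a rule of thumb from the scaling-laws
    literature, assumed here for illustration."""
    n_params = math.sqrt(compute_budget_flops / (6.0 * tokens_per_param))
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# Example: a ~5.76e23 FLOP budget.
n, d = compute_optimal_split(5.76e23)
# n is about 6.9e10 parameters, d about 1.4e12 tokens.
```

Note that doubling compute under this rule multiplies both N and D by √2, which is exactly the 1-to-1 proportional scaling the line above refers to.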
Large Language Models (LLMs) often exhibit judgment and decision-making patterns that resemble those of humans, making them attractive candidates for studying human cognition. They not only emulate rational norms such as risk and loss aversion but also exhibit human-like errors and biases, particularly in probability judgments and arithmetic operations. Despite this promise, challenges…
Multimodal machine learning combines data types such as text, images, and audio to build more accurate and comprehensive models. However, large multimodal models (LMMs) such as LLaVA struggle with high-resolution images because their visual encoding is inflexible and inefficient. Researchers have recognized the need for methods that can adjust the number of…
Retrieval-augmented generation (RAG) has been used to enhance the capabilities of large language models (LLMs) by incorporating external knowledge. However, RAG is susceptible to retrieval corruption, a type of attack in which disruptive information is inserted into the document collection, leading to the generation of incorrect or misleading responses. This poses a serious threat to…