
New research on machine learning from UCLA reveals surprising inconsistencies and roughness in in-context decision boundaries of LLMs.

Advanced language models such as GPT-3 and its successors have shown significant performance gains by predicting the next word in a sequence, trained on larger datasets with greater model capacity. A key capability of these transformer-based models, known as “in-context learning,” allows them to learn a task from a series of examples given in the prompt, without any explicit training or parameter updates. However, in-context learning is not fully understood, prompting researchers to investigate the factors that influence its effectiveness. The size and architecture of the model, along with the order of the in-context examples, were found to significantly affect results, whereas the correctness of the example labels was not always necessary.
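
To make the setup concrete, the sketch below shows how a binary classification task can be posed purely through in-context examples in a prompt. The `query_llm` function is a hypothetical stand-in for whatever completion API or local model is being used; it is not part of the paper's code.

```python
# A minimal sketch of in-context binary classification, assuming a
# hypothetical query_llm(prompt) -> str helper that returns the model's
# completion. Labeled (x, y) pairs are placed in the prompt; no weights
# are updated.

def build_prompt(examples, query_point):
    """Format in-context examples followed by the query point to classify."""
    lines = []
    for (x1, x2), label in examples:
        lines.append(f"Input: {x1:.2f} {x2:.2f} Label: {label}")
    lines.append(f"Input: {query_point[0]:.2f} {query_point[1]:.2f} Label:")
    return "\n".join(lines)

def classify_in_context(examples, query_point, query_llm):
    """Ask the model for the label of query_point given only the prompt."""
    prompt = build_prompt(examples, query_point)
    completion = query_llm(prompt).strip()
    return 1 if completion.startswith("1") else 0

# Example usage with a toy linearly separable task:
# examples = [((0.1, 0.2), 0), ((0.9, 0.8), 1), ((0.2, 0.1), 0), ((0.8, 0.9), 1)]
# prediction = classify_in_context(examples, (0.7, 0.6), query_llm=my_model_call)
```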

To better understand these methods, the paper examines three approaches to in-context learning in transformers and large language models (LLMs), studied through a series of binary classification tasks (BCTs). The first draws a theoretical connection between in-context learning and gradient descent (GD); the second focuses on a practical understanding of the many factors at play in LLMs; and the third is “learning to learn” in context, using MetaICL, a meta-training framework for tailoring pre-trained LLMs.

Researchers at UCLA’s Department of Computer Science have introduced a distinctive perspective: viewing in-context learning in LLMs as a machine learning algorithm in its own right. This framing allows traditional machine learning tools to be used to dissect the decision boundaries these models form in binary classification tasks, shedding light on the performance and behaviour of in-context learning on both linear and non-linear problems. This approach probes the general capabilities of LLMs, offering a unique viewpoint on how robust in-context learning performance really is.
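
One way to apply such classical tools is to sweep a grid of query points over a 2D input space and record the model's in-context prediction at each point, which yields a picture of the decision boundary. The sketch below illustrates this general technique under the same assumptions as the earlier snippet (a hypothetical `classify_fn` helper); it is not the authors' released code.

```python
import numpy as np

def probe_decision_boundary(examples, classify_fn, grid_size=50,
                            x_range=(0.0, 1.0), y_range=(0.0, 1.0)):
    """Evaluate the in-context classifier on a grid of 2D query points.

    Returns a (grid_size, grid_size) array of 0/1 predictions whose
    transitions trace the model's decision boundary. classify_fn is
    assumed to have the signature classify_fn(examples, point) -> int.
    """
    xs = np.linspace(*x_range, grid_size)
    ys = np.linspace(*y_range, grid_size)
    predictions = np.zeros((grid_size, grid_size), dtype=int)
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            predictions[i, j] = classify_fn(examples, (x, y))
    return predictions

def boundary_roughness(predictions):
    """A rough smoothness check: count how often adjacent grid cells disagree."""
    flips_h = np.sum(predictions[:, 1:] != predictions[:, :-1])
    flips_v = np.sum(predictions[1:, :] != predictions[:-1, :])
    return int(flips_h + flips_v)
```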

The researchers’ experiments centred around three key questions: assessing the performance of pre-existing LLMs on BCTs, identifying the factors influencing the decision boundaries of these models, and finding ways to improve the smoothness of these boundaries.

The study observed that fine-tuning LLMs on in-context examples did not necessarily produce smoother decision boundaries, motivating the search for methods that explicitly improve boundary smoothness. A range of LLMs, including open-source models, was examined to better understand their decision boundaries.

In conclusion, the research proposed a new way of understanding in-context learning by examining the decision boundaries of LLMs on BCTs. These boundaries were often found to be non-smooth even when the models achieved high test accuracy. Experiments were conducted to identify the factors affecting the boundaries, and fine-tuning and adaptive sampling methods were explored to enhance their smoothness. This study contributes to a deeper understanding of in-context learning and paves the way for future research on optimizing the decision boundaries of LLMs.
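
As a rough illustration of what adaptive sampling of in-context examples might look like, the sketch below greedily adds the labeled candidate about which the current in-context classifier is least certain. It assumes a hypothetical `predict_proba(examples, point)` helper returning the model's probability of label 1; this is a generic uncertainty-sampling sketch, not the exact procedure used in the paper.

```python
# A minimal uncertainty-sampling sketch, assuming a hypothetical
# predict_proba(examples, point) -> float in [0, 1]. At each step, the
# labeled candidate whose predicted probability is closest to 0.5 (i.e.
# nearest the current decision boundary) is added to the in-context set.

def adaptively_sample_examples(candidates, predict_proba, seed_examples,
                               budget=16):
    """Greedily grow the in-context set with the most uncertain candidates.

    candidates: list of ((x1, x2), label) pairs available for selection.
    seed_examples: initial in-context examples (at least one per class).
    """
    examples = list(seed_examples)
    remaining = list(candidates)
    while remaining and len(examples) < budget:
        # Score each remaining candidate by its distance from p = 0.5.
        scores = [abs(predict_proba(examples, point) - 0.5)
                  for point, _ in remaining]
        most_uncertain = min(range(len(remaining)), key=scores.__getitem__)
        examples.append(remaining.pop(most_uncertain))
    return examples
```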
