
Microsoft researchers present CoT-Influx, a new machine learning method that pushes past the limits of few-shot Chain-of-Thought (CoT) learning to improve mathematical reasoning in Large Language Models (LLMs).

Large Language Models (LLMs) have proven to be game-changers in the field of Artificial Intelligence (AI), thanks to their broad knowledge and versatile application scope. Despite these capabilities, LLMs still face hurdles, especially in mathematical reasoning, a critical aspect of AI's cognitive skills. To address this problem, extensive research has focused on enhancing Chain-of-Thought (CoT) prompts and fine-tuning LLMs to improve their reasoning skills. Yet the full potential of few-shot learning, a promising method in this context, remains largely untapped.

Recent efforts to optimize the reasoning abilities of LLMs have centered on enhancing CoT prompts and creating CoT-based training data. Prompt-compression techniques have also been explored to fit more few-shot examples into the limited context window. Despite their effectiveness, these methods remain sub-optimal for math reasoning and overlook token redundancy.

Researchers from the University of Hong Kong and Microsoft proposed CoT-Influx, a new approach that maximizes the number of effective, concise CoT examples fitting within the existing context window, using a coarse-to-fine pruning mechanism. This dual-phase pruning strategy first selects more useful CoT examples and then ensures each retained example comprises only informative tokens, significantly enhancing LLM capabilities in mathematical reasoning without increasing computational overhead or complexity.
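The coarse-to-fine idea can be illustrated with a minimal sketch: a coarse stage that keeps only the highest-value CoT examples (shot pruning), followed by a fine stage that drops the least informative tokens until the prompt fits a budget. Note that the scoring functions below are simple stand-in heuristics, not the learned pruner trained in the paper, and all names here are hypothetical.

```python
# Hypothetical sketch of coarse-to-fine, dual-phase prompt pruning.
# The real CoT-Influx pruner is learned; these heuristic scorers are
# stand-ins for illustration only.

def shot_score(example: str) -> float:
    # Stand-in heuristic: prefer examples with more reasoning lines.
    return example.count("\n") + 1

def token_score(token: str) -> float:
    # Stand-in heuristic: treat longer tokens as more informative.
    return len(token)

def coarse_to_fine_prune(pool, max_shots, token_budget):
    # Coarse stage: keep the top-scoring CoT examples (shot pruning).
    shots = sorted(pool, key=shot_score, reverse=True)[:max_shots]
    tokens = [t for ex in shots for t in ex.split()]
    # Fine stage: greedily drop the lowest-scoring tokens until the
    # combined prompt fits the token budget.
    while len(tokens) > token_budget:
        tokens.remove(min(tokens, key=token_score))
    return " ".join(tokens)

# Example usage with two toy CoT examples:
pool = [
    "Q: 2+2\nStep: add the numbers\nA: 4",
    "Q: 5-1\nA: 4",
]
pruned = coarse_to_fine_prune(pool, max_shots=1, token_budget=5)
```

The key design point mirrored here is that pruning happens at two granularities: whole examples first, then tokens within the survivors, so the context window holds more reasoning per token.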

To develop and validate CoT-Influx, the team created the Math Reasoning Dataset (MRD3), a specialized set of math problems varying in difficulty and number of reasoning steps. This data was used to train a pruner specialized for math reasoning tasks. Rigorous testing showed that CoT-Influx significantly improved the math-solving abilities of various LLaMA models across five math datasets. Applied to the LLaMA2-70B model, it notably delivered a 2.5% improvement, surpassing GPT-3.5 and larger models on the GSM8K dataset.

In conclusion, the study presented CoT-Influx as a potent tool for substantially improving the math reasoning capabilities of LLMs like LLaMA. This development represents a significant stride forward in using LLMs to solve complex mathematical problems, with the potential to shape future research in AI reasoning and learning efficiency.
