
Researchers from Tsinghua University Propose ADELIE: Improving Information Extraction by Using Aligned Large Language Models Focused on Human-Centric Tasks.

Information extraction (IE) is a crucial aspect of artificial intelligence: it transforms unstructured text into structured, actionable data. Traditional large language models (LLMs), despite their broad capabilities, often struggle to comprehend and follow the detailed, task-specific directives that effective IE requires. This problem is particularly evident in closed IE tasks, which demand adherence to strict, predefined schemas.

IE tasks require models to identify and categorize text according to specific predefined structures; named entity recognition and relation classification are typical examples. Existing LLMs, however, often fail to align their outputs exactly with these structures. Traditionally, researchers have relied on strategies such as prompt engineering to guide LLMs through these tasks, supplying detailed annotations and instructions without altering the underlying model parameters, as in the sketch below.
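To make this baseline concrete, here is a minimal Python sketch of a prompt-engineering approach to schema-constrained named entity recognition. The schema, example text, and output convention are illustrative assumptions, not taken from the paper; the point is that all task knowledge lives in the prompt and the model itself is unchanged.

```python
# Illustrative prompt-engineering baseline (not from the ADELIE paper):
# the entity schema and formatting rules are spelled out in the prompt,
# and the prompt is sent to an unmodified LLM.
schema = ["PERSON", "ORGANIZATION", "LOCATION"]
text = "Marie Curie worked at the University of Paris."

prompt = (
    "You are an information extraction system.\n"
    f"Label every entity in the text with one of these types: {', '.join(schema)}.\n"
    "Return one 'entity -> type' pair per line.\n\n"
    f"Text: {text}\n"
    "Entities:"
)

print(prompt)  # this string would be passed to the LLM as-is
```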

To solve this problem, a team of researchers from Tsinghua University proposed a new method known as ADELIE (Aligning large language moDELs on Information Extraction). They developed ADELIE to address the critical need for a methodology that improves LLMs' understanding of structurally defined tasks and raises extraction accuracy. ADELIE leverages a unique dataset, IEInstruct, comprising over 83,000 instances across various IE formats, including triplets, natural language responses, and JSON outputs.
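To make the data format concrete, the sketch below shows what a single instruction-tuning instance in this style might look like. The field names, relation schema, and example text are illustrative assumptions rather than an actual record from IEInstruct; the same input could equally be paired with a triplet-style or natural-language answer instead of JSON.

```python
# Hypothetical illustration of an IE instruction-tuning instance
# (field names and content are assumptions, not drawn from IEInstruct).
instance = {
    "task": "relation_extraction",
    "instruction": (
        "Extract all (head entity, relation, tail entity) triplets from the text, "
        "using only the relations: founded_by, headquartered_in."
    ),
    "input": "OpenAI, headquartered in San Francisco, was founded by Sam Altman and others.",
    # The dataset covers several answer formats; this instance requests JSON.
    "output_format": "json",
    "output": [
        {"head": "OpenAI", "relation": "headquartered_in", "tail": "San Francisco"},
        {"head": "OpenAI", "relation": "founded_by", "tail": "Sam Altman"},
    ],
}

print(instance["output"])
```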

Unlike traditional methods, ADELIE combines supervised fine-tuning with an innovative Direct Preference Optimization (DPO) strategy. This coupling enables the model to better align with the intricacies of human-like IE processing. Initial training uses a blend of IE-specific and generic data, fine-tuning the LLaMA 2 model for more than 6,000 gradient steps to balance broad language capabilities with specialized IE performance.
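For readers unfamiliar with DPO, the following minimal PyTorch sketch shows the preference loss at the heart of the second training stage. It is a generic rendering of the standard DPO objective applied after supervised fine-tuning, not the authors' code; the beta value and toy batch are assumptions.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO objective: push the policy to prefer the chosen response
    over the rejected one, relative to a frozen reference model (here, the
    SFT-initialized model). Each argument is a batch of summed per-token
    log-probabilities for one response."""
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Negative log-sigmoid of the reward margin, averaged over the batch.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage with random log-probabilities for a batch of 4 preference pairs.
batch = lambda: torch.randn(4)
print(dpo_loss(batch(), batch(), batch(), batch()))
```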

Performance metrics showed that the ADELIE models (ADELIE-SFT and ADELIE-DPO) achieved impressive results, setting a new benchmark. Evaluation on held-out datasets showed ADELIE-SFT improving average F1 scores by 5% over standard LLM outputs on closed IE tasks. On open IE, the ADELIE models performed even better, outpacing state-of-the-art counterparts by 3-4% in robustness and extraction accuracy. Moreover, in on-demand IE, the models demonstrated an improved understanding of user instructions, translating into more accurate data structuring.

In conclusion, ADELIE’s methodical training and optimization deliver superior alignment of LLMs with IE tasks. The work shows that careful attention to data diversity and instruction specificity can bring machine performance closer to human expectations without compromising general capabilities. The model’s strong results across task types and metrics highlight ADELIE’s potential to become a valuable tool in a variety of applications, from academic research to real-world data processing.
