
Improving Answer Selection in Community Question Answering using Question-Answer Cross Attention Networks (QAN)

Community Question Answering (CQA) platforms such as Quora, Yahoo! Answers, and StackOverflow are popular online forums for information exchange. However, because the quality of responses varies widely, users often struggle to sift through a myriad of answers to find pertinent information. Traditional approaches to answer selection on these platforms include content/user modeling and adaptive support, but there is still room for improvement, particularly in modeling the interaction between questions and answers.

A team of researchers from PricewaterhouseCoopers introduced a mechanism called Question-Answer cross-attention networks (QAN), aiming to improve the performance of answer selection. The model also draws on external knowledge generated by large language models (LLMs) such as ChatGPT.

The QAN model comprises three layers. The first layer uses BERT, a widely used pre-trained language model, to extract contextual representations of question subjects, question bodies, and answers. The second layer, the cross-attention mechanism, captures the relationships between question-answer pairs and produces similarity matrices. The final layer, the Interaction and Prediction Layer, processes the interactive features and assigns a label to each answer by assessing conditional probabilities.
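As a rough illustration of the first two layers, the sketch below (in PyTorch, using the Hugging Face transformers library) encodes a question and an answer with BERT and computes a token-level similarity matrix with soft alignments in both directions. The function names, the dot-product similarity, and the example sentences are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of BERT encoding (layer 1) and cross-attention (layer 2).
# Names and the dot-product similarity are assumptions for illustration only.
import torch
import torch.nn.functional as F
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

def encode(text: str) -> torch.Tensor:
    """Return per-token contextual embeddings from BERT."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = bert(**inputs)
    return outputs.last_hidden_state.squeeze(0)    # (seq_len, hidden)

def cross_attention(q: torch.Tensor, a: torch.Tensor):
    """Build a similarity matrix and soft-align each side against the other."""
    sim = q @ a.T                                   # (len_q, len_a) similarity matrix
    q_aligned = F.softmax(sim, dim=-1) @ a          # answer-aware question tokens
    a_aligned = F.softmax(sim.T, dim=-1) @ q        # question-aware answer tokens
    return sim, q_aligned, a_aligned

q_repr = encode("How do I parse JSON in Python?")
a_repr = encode("Use the json module: json.loads(text) returns a dict.")
sim, q_aligned, a_aligned = cross_attention(q_repr, a_repr)
```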

This final layer applies a bidirectional GRU (Gated Recurrent Unit) to capture context, then uses max and mean pooling to extract fixed-length vectors from the question and the answer. These vectors are concatenated into a global representation, which is fed into an MLP (Multilayer Perceptron) classifier to evaluate the semantic match of the question-answer pair.
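The following PyTorch sketch shows one plausible shape for this layer: a bidirectional GRU over the token features, max and mean pooling into fixed-length vectors, concatenation into a global representation, and an MLP that scores the pair. The dimensions, the shared GRU for question and answer, and the two-class output are assumptions for illustration, not the paper's exact configuration.

```python
# Illustrative interaction-and-prediction layer: BiGRU -> max/mean pooling -> MLP.
# All hyperparameters here are assumed, not taken from the paper.
import torch
import torch.nn as nn

class InteractionPrediction(nn.Module):
    def __init__(self, in_dim: int = 768, hidden: int = 256, num_labels: int = 2):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, batch_first=True, bidirectional=True)
        # Global representation = [q_max; q_mean; a_max; a_mean] -> 4 * (2 * hidden)
        self.mlp = nn.Sequential(
            nn.Linear(8 * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_labels),
        )

    @staticmethod
    def pool(h: torch.Tensor) -> torch.Tensor:
        # Max pooling and mean pooling over the token dimension, concatenated.
        return torch.cat([h.max(dim=1).values, h.mean(dim=1)], dim=-1)

    def forward(self, q_feats: torch.Tensor, a_feats: torch.Tensor) -> torch.Tensor:
        q_ctx, _ = self.gru(q_feats)    # (batch, len_q, 2 * hidden)
        a_ctx, _ = self.gru(a_feats)    # (batch, len_a, 2 * hidden)
        global_repr = torch.cat([self.pool(q_ctx), self.pool(a_ctx)], dim=-1)
        return self.mlp(global_repr)    # logits over answer labels

model = InteractionPrediction()
logits = model(torch.randn(1, 12, 768), torch.randn(1, 30, 768))
```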

The QAN model surpasses all baseline models on three evaluation metrics, a result attributed mainly to the pre-trained BERT encoder and the use of the cross-attention mechanism. An ablation study of six variants of the QAN model on the Yahoo! Answers dataset revealed the impact of the individual components on answer-selection performance.

In summary, the QAN model, which combines BERT representations with a cross-attention mechanism, delivers state-of-the-art performance in answer selection on CQA platforms. By attending across questions and answers, it achieves a closer alignment between the two. The model also promises higher accuracy and efficiency in answer selection by leveraging large language models to supply supplementary knowledge. This advancement is expected to improve the user experience on CQA platforms by making relevant information accessible in fewer steps.

The development of QAN, which outperformed existing models on the SemEval2015 and SemEval2017 datasets, represents a significant stride in the evolution of CQA platforms and of machine learning tasks involving questions and answers. It is a testament to the continual progress in natural language processing and the potential benefits of innovative research.
