
Review-LLM: A Comprehensive AI Framework for Personalized Review Generation Using Large Language Models and User Historical Data in Recommender Systems

Personalized review generation in recommender systems is a growing area of interest: the goal is to create reviews tailored to each user's past interactions and preferences. By leveraging data from a user's previous purchases and feedback, such systems can produce reviews that genuinely reflect that user's preferences and experiences, improving the overall effectiveness of the recommender system.

A central challenge is the scarcity of review text: many users only leave a rating after a purchase and write little or nothing. Researchers have therefore sought methods that can still generate personalized reviews matching users' experiences and preferences, ensuring the generated text truly mirrors their authentic sentiments despite this lack of detailed feedback.

Traditionally, review generation methods rely on encoder-decoder neural network architectures, conditioning on attributes such as user and item IDs and ratings. More recent approaches incorporate textual information from item titles and historical reviews to improve the quality of the generated reviews. Models such as ExpansionNet and RevGAN have been developed to integrate phrase information and sentiment labels into the review generation process, enriching the relevance and personalization of each review.
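For intuition, here is a minimal sketch, assuming PyTorch, of this classic attribute-conditioned setup: user ID, item ID, and rating are embedded into a context vector that initializes a recurrent decoder, which then emits the review token by token. The class name and dimensions are illustrative, not the actual ExpansionNet or RevGAN implementations.

```python
# Minimal sketch (not ExpansionNet or RevGAN) of attribute-to-text review generation:
# user ID, item ID, and rating condition a GRU decoder that produces the review.
import torch
import torch.nn as nn

class AttributeReviewDecoder(nn.Module):
    def __init__(self, n_users, n_items, n_ratings, vocab_size, dim=128):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.rating_emb = nn.Embedding(n_ratings, dim)
        self.token_emb = nn.Embedding(vocab_size, dim)
        self.to_hidden = nn.Linear(3 * dim, dim)      # fuse attributes into the initial decoder state
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, user, item, rating, review_tokens):
        # Concatenate the attribute embeddings into one context vector per example.
        ctx = torch.cat([self.user_emb(user), self.item_emb(item), self.rating_emb(rating)], dim=-1)
        h0 = torch.tanh(self.to_hidden(ctx)).unsqueeze(0)   # (1, batch, dim) initial hidden state
        # Teacher forcing: decode the review conditioned on the attribute context.
        dec_out, _ = self.decoder(self.token_emb(review_tokens), h0)
        return self.out(dec_out)                             # per-step vocabulary logits

# Example: a batch of 2 users, reviews of up to 12 tokens.
model = AttributeReviewDecoder(n_users=1000, n_items=5000, n_ratings=5, vocab_size=8000)
logits = model(torch.tensor([3, 7]), torch.tensor([42, 99]),
               torch.tensor([4, 1]), torch.randint(0, 8000, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 8000])
```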

Researchers from Tianjin University and Du Xiaoman Financial have presented a new framework called Review-LLM that harnesses large language models such as Llama-3. The framework aggregates a user's historical behaviors to construct input prompts that capture individual interests and review styles. By feeding the LLM comprehensive input (the user's historical interactions, item titles, reviews, and ratings), it helps the model understand user preferences and generate accurate, personalized reviews.
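As a rough illustration of this prompt-aggregation idea, the sketch below flattens a user's historical (item title, review, rating) triples plus the target item into a single instruction-style prompt. The exact template and field names used in the paper may differ; the schema here is hypothetical.

```python
# Illustrative sketch of prompt aggregation: historical (title, review, rating) triples
# and the target item are flattened into one instruction-style prompt for the LLM.
def build_review_prompt(history, target_item_title, target_rating):
    """history: list of dicts with 'title', 'review', 'rating' keys (hypothetical schema)."""
    lines = ["The user has bought and reviewed the following items:"]
    for i, h in enumerate(history, 1):
        lines.append(f'{i}. Item: "{h["title"]}" | Rating: {h["rating"]}/5 | Review: "{h["review"]}"')
    lines.append(
        f'Now the user bought "{target_item_title}" and rated it {target_rating}/5. '
        "Write the review this user would most likely write, matching their style and preferences."
    )
    return "\n".join(lines)

prompt = build_review_prompt(
    history=[
        {"title": "USB-C Hub", "review": "Ports feel loose after a week.", "rating": 2},
        {"title": "Mechanical Keyboard", "review": "Crisp keys, great value.", "rating": 5},
    ],
    target_item_title="Wireless Mouse",
    target_rating=4,
)
print(prompt)  # this prompt would then be passed to a fine-tuned LLM such as Llama-3
```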

Review-LLM's performance was evaluated with metrics such as ROUGE-1, ROUGE-L, and BERTScore, and the results showed it outperformed existing models such as GPT-3.5-Turbo and GPT-4o. The framework proved particularly capable of generating negative reviews when users were dissatisfied. Its effectiveness was further confirmed through a human evaluation involving 10 PhD students familiar with review and text generation.
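These automatic metrics can be computed with common open-source packages such as rouge-score and bert-score. The snippet below is a sketch of that evaluation step under those assumptions, not the authors' evaluation code.

```python
# Sketch of the automatic evaluation step using open-source packages
# (pip install rouge-score bert-score); example strings are made up.
from rouge_score import rouge_scorer
from bert_score import score as bert_score

generated = "The mouse tracks smoothly, though the scroll wheel feels a bit stiff."
reference = "Smooth tracking overall, but the scroll wheel is stiffer than I expected."

# ROUGE-1 (unigram overlap) and ROUGE-L (longest common subsequence) F1 scores.
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
rouge = scorer.score(reference, generated)
print(rouge["rouge1"].fmeasure, rouge["rougeL"].fmeasure)

# BERTScore: semantic similarity between the generated and reference reviews.
P, R, F1 = bert_score([generated], [reference], lang="en")
print(F1.mean().item())
```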

In conclusion, the Review-LLM framework aggregates detailed user historical data and employs a fine-tuning process, producing highly personalized reviews that accurately reflect user preferences. This work demonstrates the potential of LLMs to significantly improve the quality and personalization of reviews in recommender systems, addressing the persistent challenge of generating meaningful, user-specific reviews.
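For a concrete picture of what such a fine-tuning step could look like, here is a hedged sketch of supervised fine-tuning on prompt-plus-gold-review examples using LoRA adapters via Hugging Face transformers and peft. The article only states that the framework is fine-tuned, so the choice of LoRA, the base checkpoint, and the hyperparameters below are illustrative assumptions rather than the authors' reported configuration.

```python
# Hedged sketch: supervised fine-tuning of a causal LLM on prompt->review examples
# with LoRA adapters. Base checkpoint, LoRA settings, and hyperparameters are
# illustrative assumptions, not the authors' setup.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "meta-llama/Meta-Llama-3-8B"          # assumed base model
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token                # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base)
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# Each training example concatenates the aggregated history prompt with the user's real review.
examples = [{"text": "The user has bought and reviewed the following items: ...\n"
                     'Now the user bought "Wireless Mouse" and rated it 4/5. '
                     "Write the review this user would most likely write.\n"
                     "Review: Smooth tracking overall, but the scroll wheel is stiff."}]
ds = Dataset.from_list(examples).map(
    lambda ex: tok(ex["text"], truncation=True, max_length=1024), remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="review-llm-lora", per_device_train_batch_size=1,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),  # causal LM loss over the sequence
).train()
```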
