AI-based recommender systems, which suggest products or content to users, are prevalent across online platforms like social media and e-commerce. These systems have a significant influence on user behavior, according to a research survey from the Institute of Information Science and Technologies at the National Research Council, the Scuola Normale Superiore of Pisa, and the University of Pisa. The researchers examined the methodologies used to study this influence, the impacts identified so far, and directions for future research.
The survey divides the methodologies into empirical and simulation studies. Empirical studies use real-world data to understand the interactions between users and recommenders, while simulation studies generate synthetic data through models, allowing for reproducibility and controlled experimentation. Both types come in observational and controlled variants, each with its own strengths and limitations.
In empirical observational studies, researchers observe user behavior without manipulating the environment. For example, studies of YouTube’s recommendations have noted a bias towards mainstream content over extremist material. Controlled empirical studies, such as A/B tests, can establish causal relationships but are difficult to execute because they require access to platform users and their interactions. Observational simulation studies examine the influence of recommendations in a synthetic environment, often using agent-based models to understand phenomena like echo chambers and polarization. In controlled simulation studies, conditions are deliberately manipulated to test hypotheses about recommenders.
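To make the controlled simulation approach concrete, here is a minimal, hypothetical sketch in Python: agents hold opinions on a [-1, 1] scale, a toy recommender serves them either like-minded content (personalized condition) or random content (control condition), and the two runs are compared on how spread out opinions end up. All function names, parameters, and dynamics are illustrative assumptions, not the models used in the survey.

```python
import random
import statistics

def run_simulation(personalized: bool, n_agents: int = 200,
                   n_steps: int = 5000, seed: int = 42) -> float:
    """Toy agent-based simulation: returns the standard deviation of
    agent opinions after repeated exposure to recommended content."""
    rng = random.Random(seed)
    # Each agent starts with a mild opinion in [-0.5, 0.5].
    opinions = [rng.uniform(-0.5, 0.5) for _ in range(n_agents)]
    # A fixed pool of content items, each with its own stance in [-1, 1].
    items = [rng.uniform(-1, 1) for _ in range(1000)]

    for _ in range(n_steps):
        i = rng.randrange(n_agents)
        if personalized:
            # Treatment condition: recommend the candidate item closest to
            # the agent's current opinion (a caricature of similarity ranking).
            candidates = rng.sample(items, 20)
            item = min(candidates, key=lambda stance: abs(stance - opinions[i]))
        else:
            # Control condition: recommend a random item.
            item = rng.choice(items)
        # The agent's opinion drifts slightly toward the recommended item.
        opinions[i] += 0.1 * (item - opinions[i])

    return statistics.pstdev(opinions)

if __name__ == "__main__":
    print("opinion spread, personalized:", round(run_simulation(True), 3))
    print("opinion spread, random control:", round(run_simulation(False), 3))
```

The point of the sketch is the experimental design: the only difference between the two runs is the recommendation policy, so any difference in the final opinion spread can be attributed to it, which is exactly the kind of manipulation a controlled simulation study allows.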
The survey groups the observed outcomes of AI-based recommenders into categories: diversity, echo chambers and filter bubbles, polarization, radicalization, inequality, and volume.
Diversity refers to the variety of content shown to users. Some recommender systems have been found to increase diversity, while others lead to concentration, repeatedly promoting a narrow set of popular items. Echo chambers and filter bubbles refer to environments in which users are exposed to information that reinforces their existing beliefs, often reducing exposure to diverse viewpoints.
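As one hedged illustration of how diversity and concentration can be quantified, the sketch below computes two simple measures over a log of recommendations: catalog coverage (what fraction of the catalog is ever recommended) and the normalized entropy of item exposure (how evenly recommendations are spread across items). The recommendation log and the choice of metrics are assumptions for illustration; the survey does not prescribe these exact measures.

```python
from collections import Counter
from math import log

def coverage(recommendations: list[str], catalog_size: int) -> float:
    """Fraction of the catalog that appears at least once in the log."""
    return len(set(recommendations)) / catalog_size

def exposure_entropy(recommendations: list[str]) -> float:
    """Normalized Shannon entropy of item exposure: 1.0 means exposure is
    spread evenly across recommended items; values near 0 mean a few
    items dominate (concentration)."""
    counts = Counter(recommendations)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    h = -sum(p * log(p) for p in probs)
    return h / log(len(counts)) if len(counts) > 1 else 0.0

# Hypothetical recommendation log: item IDs shown to users over some period.
rec_log = ["item_1"] * 80 + ["item_2"] * 15 + ["item_3"] * 4 + ["item_4"] * 1
print(coverage(rec_log, catalog_size=100))  # 0.04 -> most of the catalog never surfaces
print(exposure_entropy(rec_log))            # ~0.46 -> exposure skewed toward item_1
```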
Polarization, or segmentation of users into groups with significantly different viewpoints, is another observed outcome, particularly on social media platforms. Radicalization, wherein individuals shift towards extreme viewpoints, has been observed on platforms like YouTube. Inequality in recommender systems refers to disproportionate exposure and opportunities, usually favoring already popular content or content creators.
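Inequality of exposure is often summarized with a concentration index. Below is a brief sketch that computes a Gini coefficient over per-creator impression counts, where 0 means every creator receives equal exposure and values near 1 mean a handful of creators capture almost all of it. The impression counts are invented for illustration, and this is one common way to measure exposure inequality rather than the survey's own method.

```python
def gini(values: list[float]) -> float:
    """Gini coefficient of non-negative exposure counts.
    0.0 = perfectly equal exposure; ~1.0 = exposure captured by one creator."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula based on the rank-weighted cumulative sum.
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

# Hypothetical impressions per content creator on a platform.
impressions = [12000, 300, 150, 90, 40, 20, 10, 5]
print(round(gini(impressions), 2))  # ~0.85 -> exposure is highly unequal
```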
Volume refers to the quantity of content recommended to users; its impact differs across ecosystems.
The study also identified future research directions. It proposed multi-disciplinary approaches that draw on sociology and psychology to better understand societal impacts, and it called for long-term studies that evaluate the sustained effects of such systems, as well as for recommenders that balance personalization with diversity, fairness, and ethics to mitigate negative societal impacts. The study further stressed that policymakers need to understand these implications so that regulations protect users and ensure equitable access to information and opportunities.
In conclusion, AI-based recommender systems profoundly impact human behavior, and further study is needed to address remaining gaps and ensure these systems evolve in a positive direction.