Stanford researchers identify illicit child imagery in the LAION dataset

The Stanford Internet Observatory recently conducted an extensive study to identify and eliminate child sexual abuse material (CSAM) in generative ML training data and models. In collaboration with the Canadian Centre for Child Protection and other anti-abuse organizations, the team reviewed LAION-5B, a large-scale index of online images and captions used to train AI image generators such as Stable Diffusion. The study was groundbreaking: it revealed that the dataset contained more than 3,200 images of suspected child sexual abuse, around 1,000 of which were confirmed as CSAM.

The implications of this study are far-reaching and cause for serious concern. AI image generators have already been implicated in a number of child sexual abuse and pornography cases; a North Carolina man was recently sentenced to 40 years in prison after being found in possession of AI-generated child abuse imagery. Moreover, these AI models ‘learn’ from this material and can use it to generate entirely new content. Research has shown that people are creating such images and selling them on sites like Patreon.

The study’s authors, David Thiel and his team, rightly point out that taking an entire internet-wide scrape and turning it into a dataset for training models is something that should have been confined to a research operation, not open-sourced without far more rigorous attention. The team has urged those building training sets based on LAION-5B either to delete them or to work with intermediaries to cleanse the material, and recommends making older versions of Stable Diffusion, particularly those known for generating explicit imagery, less accessible online. AI companies have reacted swiftly to the Stanford study: OpenAI clarified that it did not use the LAION database and has fine-tuned its models to refuse requests for sexual content involving minors, while Google decided against making its Imagen model public after an audit revealed a range of inappropriate content.

This is not the first time research has been conducted into LAION-style datasets. In 2021, computer science researchers Abeba Birhane, Vinay Uday Prabhu, and Emmanuel Kahembwe published “Multimodal datasets: misogyny, pornography, and malignant stereotypes,” an analysis of the LAION-400M image dataset. Their paper revealed that the dataset contained troublesome and explicit image-text pairs depicting rape, pornography, malign stereotypes, racist and ethnic slurs, and other extremely problematic content, as well as labels that reflected conscious and unconscious biases, which in turn were inflicted on the AI models trained on that data.

It is clear that Stanford’s study is just the beginning of a far greater issue; biased datasets and model outputs can have far-reaching implications, and not just in terms of child sexual abuse. Gender-biased models that rate women’s skills as less valuable than men’s, discriminatory and inaccurate facial recognition systems, and even failures in medical AI systems have all been traced back to biased training data. Birhane stresses that this is a systemic issue: academic strongholds like Stanford tend to depict their research as pioneering when this often isn’t the case, while research conducted by those from diverse backgrounds and outside the US tech landscape is seldom given fair credit.

The risks AI developers expose themselves to when using datasets indiscriminately and without proper due diligence are potentially enormous. As Stanford suggests, developers need to take their responsibilities more seriously when creating AI models and products. Beyond that, there is a critical need for AI companies to engage better with research communities and model developers to stress the risks of exposing models to such data. These issues will only become more pressing as AI models grow increasingly powerful, and the ‘superintelligent’ AI models of the future may be exposed to such material if we do not act now. As Birhane and her co-researchers pointed out in their 2021 study, “There is a growing community of AI researchers that believe that a path to Artificial General Intelligence (AGI) exists via the training of large AI models with ‘all available data’.” With this in mind, it is now more important than ever to ensure that the datasets used to train these models are of the highest quality, so that the AI of the future is safe, ethical, and accurate.
