
Australian authorities open an investigation into an AI deepfake incident.

Australian police are investigating the distribution of AI-generated pornographic images of around 50 schoolgirls, reportedly created by a teenage boy. The mother of a 16-year-old girl targeted in the case told ABC about her daughter’s distress after seeing the explicit images. The school involved, Bacchus Marsh Grammar, has pledged a strong commitment to student welfare, offering counseling and cooperating with the police inquiry.

The incident comes as the Australian government pushes for stronger laws against non-consensual explicit deepfakes, proposing penalties of up to seven years in prison for creating or sharing child sexual abuse material (CSAM), whether AI-generated or not. The use of AI tools, particularly text-to-image generators such as Stability AI’s Stable Diffusion, to create new CSAM is reportedly on the rise, often targeting survivors of past child abuse, with the resulting images posted to the dark web.

A 2023 report from the Internet Watch Foundation found that, over a single month, more than 20,000 AI-generated CSAM images were posted to one dark web forum. Many were indistinguishable from real photographs, depicting extremely upsetting content such as the simulated assault of babies and toddlers. A Stanford University report last year revealed that real images of child sexual abuse had been included in the LAION-5B dataset used to train widely used AI tools, which contributed to a surge in AI-generated CSAM after the dataset was openly released.

Recent legal cases demonstrate the real and present danger of this trend. Last April, a Florida man was charged with using AI to create explicit images of a child in his neighborhood. And a child psychiatrist in North Carolina was sentenced to 40 years in prison for generating AI-produced CSAM depicting his patients.

Lawmakers and advocates argue that existing laws against computer-generated CSAM are insufficient. A bipartisan bill in the US would allow victims to sue producers of non-consensual explicit deepfakes. Legal gray areas also need to be addressed, such as exactly which activities, from generating such images to possessing or sharing them, break the law.

Tech companies that develop AI image generators prohibit their use for producing illegal content. But with many powerful AI models available as open source and able to run offline on private hardware, the trend cannot easily be stemmed. Much of the criminal activity has also shifted to encrypted messaging platforms, making detection even harder. These cases underline the growing urgency of stronger laws and safeguards against this malicious use of AI technology.
