The Australian Football League (AFL) has become embroiled in a suspected “deepfake” scandal. Explicit photos allegedly showing 45 current and former players were leaked, with the images accessible to anyone through a link to a public Google Drive folder. Commentators have suggested the images are AI-generated deepfake pornography, fabricated to mimic the appearance of the targeted individuals. While the AFL has yet to confirm whether the images are indeed deepfakes, players’ representative Jimmy Bartel stated that he believed a majority of the images were fake, constructed, or staged.
The AFL Players’ Association’s CEO, Paul Marsh, said the association is aware of the AFL’s investigative efforts. He noted that even if some images prove inauthentic, obtaining and distributing explicit images of the players without their consent remains offensive and quite possibly an illegal invasion of privacy. Marsh appealed to the public not to seek out or disseminate any of the photos, out of respect for the rights and privacy of those affected.
As a form of artificial intelligence, deepfakes represent a new legal frontier. The AFL case could potentially set a legal precedent in Australia, although formal regulation specifically targeting deepfakes has yet to be drawn up. However, White Knight Lawyers of Sydney suggest that protection against deepfakes could be afforded under existing defamation law. They posit that if defamatory deepfake content were created depicting a person, the victim could have legal recourse to compensation for damage to their reputation. This could apply whether the deepfake showed the victim in compromising personal situations (e.g., performing sex acts) or in politically fraught or illegal activities (e.g., consuming drugs).
Importantly, regardless of whether the AFL images prove to be deepfakes, disseminating intimate images of an identifiable person without their consent is illegal in Australia. This scandal, which challenges both the AFL and Australia’s legal system, underlines the risks posed by fast-evolving technologies. As a result, calls for regulation specifically addressing deepfakes and compensating their victims are likely to grow more frequent.