Last year, amid the widening influence of Silicon Valley’s AI boom, so-called “nudify” apps hit the limelight. These apps, built on openly accessible open-source models, let users virtually undress people in photographs using artificial intelligence. Targeted mostly at women, these non-consensual NSFW (Not Safe For Work) applications garnered millions of online visits.
The operators behind these apps have now brought them to Telegram, setting up bots that virtually “undress” women using AI. A search for phrases such as “undress”, “deepfake”, and “nudify” on Telegram yields numerous group chats and bots offering these AI-powered undressing services.
One such entity is the Telegram group “Undress.app,” maintained by the website Undress AI, with a subscriber base of more than 174,000. On joining the group, users are directed to the “Undress AI Bot,” which presents itself as an assistant capable of removing any unnecessary clothing from uploaded images. New users get five free trials of the service.
In parallel, another bot, “Undresser Ai,” with more than 2,600 subscribers, offers a further level of sophistication. Users can not only upload a photo but also customize the age, ethnicity, breast size, and photo angle of the intended result, and can use advanced prompts to describe their ideal AI-generated undressed woman.
For now, these bots generate softcore pornographic images of women. The services remain active, exploiting and violating women’s rights by depicting them in sexually explicit images without consent. The bots also appear to run on a pyramid-style referral scheme, granting free trials to users who bring in new subscribers. These AI-powered undressing bots on Telegram are only a fraction of the larger controversy surrounding the rise of AI deepfakes.
Crucially, women bear the brunt of this AI-generated explicit imagery, most often without their consent or even their knowledge. In one recent case from Australia, a man from Tasmania was imprisoned for possessing AI-generated child abuse material, the first such seizure by Tasmania police. The case underlines the extent and gravity of AI misuse and the urgent need for more robust regulation and stricter penalties.