They Used Her Selfies for AI Porn and Taught Others How

Three Arizona women have filed a landmark lawsuit after discovering that AI-generated pornography using their faces had gone viral online. The videos were created from ordinary Instagram photos, without their consent.
They were not warned. They did not find it themselves. Someone they knew saw it and sent it to them.
One of the women described the moment: “I first found out because videos went viral. There are millions of videos of me and some of my friends.”
A single Instagram video had more than 16 million views. The content had been circulating for months before she knew it existed.
The women filed the case in Maricopa County Superior Court in January 2026. It is one of the first legal challenges targeting not only deepfake pornography but also the commercial systems used to teach others how to create it.
The defendants include three men and several companies. The 143-page complaint names CreatorCore LLC, AI ModelForge, FAL Features and Labels Inc., and Phyziro LLC, a Texas-based payment processor.
The lawsuit alleges that the operation ran as a subscription-based service. It guided paying members through the entire process, from selecting targets and generating AI content to monetising it on adult platforms.
The service also coached members on engaging customers through explicit messaging while avoiding legal detection.
According to attorney Nick Brand, CreatorCore LLC claims it produces up to 1,000 influencers per week.
The defendants advised clients to target micro-influencers instead of well-known figures to reduce the risk of detection and reporting.
Co-counsel Cristina Perez noted that even private Instagram accounts offer no protection, because sometimes it is a person already approved to follow you who takes the photos.
A Lawsuit Targeting the System Behind AI Porn
The women are suing under Arizona’s revenge porn law because Arizona, like most states, has yet to ban the creation of nonconsensual sexual deepfakes outright.
That gap in state law adds to the significance of the case. Although nearly every state has passed some form of legislation on AI deepfakes, most of these laws focus narrowly on election misinformation or child exploitation.
The commercial course-selling element in this case forces courts to grapple with a harder question: where does education end and complicity begin?
The Take It Down Act, passed last year, makes posting nonconsensual images a federal crime and requires platforms to remove them within 48 hours. But enforcement remains a serious problem.
Internet anonymity makes it hard to track creators and distributors. Sean Harrington, director of the AI and Legal Tech Studio at Arizona State University, said the government still struggles to keep up with the speed of AI misuse.
One of the plaintiffs said it simply: “I could be your mother, your sister, your friend, your coworker.”