Pornographic deepfakes of pop icon Taylor Swift have sparked calls in the US for new legislation governing the use of artificial intelligence.
The sexually explicit images were posted on social media sites and viewed up to 47 million times before being taken down.
She's one of the most talked-about women on the planet right now, but Swift's fame has also made her a prime target for sexually explicit deepfakes online.
The AI-generated images of the singer-songwriter were shared on platforms such as X, formerly known as Twitter.
"These tools have become more accessible, they've become easier to use, and they've become much better at the product they put out, so really everyone who is online is vulnerable to this," digital harm expert Mandy Henk said.
Despite growing calls in the US to criminalise the creation and distribution of such images, the White House is putting the pressure back on social media outlets.
"We believe they have an important role to play in enforcing their own rules to prevent the spread of misinformation and non-consensual, intimate imagery of real people," White House spokesperson Karine Jean-Pierre said.
Henk said a collaborative approach is needed, and that politicians here have a responsibility to protect women and girls, who are disproportionately affected by fake pornography.
"The Harmful Digital Communications Act does not explicitly cover deepfake images so the Act itself needed to be updated," she said.