When deepfake videos first appeared a few years ago, they raised fears of the worst, and the worst is indeed happening. With the democratization of creation tools, we are witnessing an explosion of pornographic deepfakes made with the faces of celebrities and of ordinary people, obviously without the victims' authorization.
Finding pornographic deepfakes takes no effort: sites hosting this type of video, whose quality keeps improving, are very easy to access. Unfortunately, the overwhelming majority of these deepfakes are made without the consent of the victims, who “lend” their faces without their knowledge. Movie stars, reality TV personalities, ordinary people… Anyone can be targeted, and women most of all.
An epidemic of non-consensual porn deepfakes
An analysis shared by Wired gives an idea of the scale of the phenomenon. Over the past seven years, at least 244,625 non-consensual deepfakes have been uploaded to the 35 largest sites and platforms in this booming sector. In the first nine months of this year alone, 113,000 videos were uploaded to these sites, 54% more than in all of 2022. By the end of the year, the cumulative number of videos uploaded may well exceed the total of all previous years.
And these figures do not account for deepfakes shared on social networks or via messaging services. The first deepfake videos appeared at the end of 2017, and researchers immediately feared a spillover into non-consensual porn. That fear has proved well founded.
Tools for creating convincing deepfakes, whether online services or applications, have multiplied, and it is increasingly difficult to distinguish reality from digital manipulation. Worse, generating fake videos is extremely simple: a click or two is all it takes.
“It’s something that targets everyday people, everyday high school students, everyday adults — it’s become an everyday phenomenon,” laments Sophie Maddocks, a researcher specializing in digital rights and online sexual violence at the University of Pennsylvania.
Is the law effective against deepfakes?
“It would make a big difference if we could make these technologies more difficult to access. It shouldn’t take two seconds to potentially incite a sex crime,” she adds. Laws are in preparation, as in France, where government amendments added to the SREN law regulating the digital space punish the dissemination of AI-generated content depicting a person without their consent and without mentioning that it is fake. They also create a new offense of “publication of a hyperfake (deepfake) of a sexual nature representing a person without their consent.”
While it seems unlikely that the law could outright ban deepfake creation tools and their distribution, there are nevertheless ways to curb this epidemic. Generative AI expert Henry Ajder explains that adding “friction” to the process of creating and of finding these videos could slow their spread.
Search engines could delist websites that host and share these videos; internet service providers could block them altogether. Google recently launched new tools allowing users to request the removal of unwanted content about them. This will not be enough to put a definitive end to these practices, but for the victims it would be a good start.
Source :
Wired