Mutually exclusive? Girls' rights and AI deepfake pornography
A blog by Jennifer Ukachukwu Amarachi, Young Expert at GHRH
*Disclaimer: This article discusses sensitive content surrounding AI and deepfake pornography.*
You cannot deny it: society has come on in leaps and bounds with technology. It has enabled greater global collaboration, improved transport worldwide and better medical practice. But with many successes come many pitfalls, and today one of the most troubling is the increasing use and spread of artificial intelligence (AI) deepfake pornography online.
Deepfakes are a form of ‘synthetic’ media created by artificial intelligence in which a person’s likeness is digitally altered or inserted into images, audio and video (Somers, 2020). Such AI has been used to groom and extort young children and to produce child sexual abuse imagery. 2024 has inarguably been the year of AI image-based sexual violence, and this is not to be heralded. Research published by Ofcom (2024) in July this year revealed that around 43% of people aged 16 and over had seen at least one deepfake in the past six months. Of those who had seen deepfakes, 14% said they had seen a sexual deepfake; 15% had seen someone they knew, 6% had seen themselves, and 17% believed the deepfakes depicted someone under the age of 18.
The UK-based charity NSPCC’s Childline service has received reports from girls under the age of 16 that their online images were being used to create fake nudes, which were then shared in group chats or sent directly to them (Saner, 2024). Earlier this year, Karl Marshall was jailed for more than two years for using AI to create and share sexually explicit deepfakes of 266 real women and children between July 2023 and January 2024.
South Korean journalist Ms Ko Narin exposed one of the biggest AI deepfake scandals to date: online rooms and group chats dedicated to making deepfakes of girls known to the perpetrators, from middle schools to universities. Around 500 educational institutions were identified as targets, and many of the victims were believed to be below South Korea’s age of consent of 16. Ms Ko found that every minute people were requesting deepfakes of girls, and that these would be generated within seconds (Mackenzie and Choi, 2024).
These cases show only a fraction of what is happening with deepfakes in the world today. The unregulated use of, and easy access to, AI platforms that create deepfakes has allowed image-based sexual violence to rise, and it is significantly impacting young girls and infringing on their rights. Although deepfake abuse is not physical, it is rooted in the same intersectional and structural gender-based inequalities and carries the same serious mental and physical consequences for its victims. Victims have shared their frustration at having to censor their use of social media and described how their bodily integrity has been violated. The nature of cyberspace makes it very difficult to remove images and videos from the internet completely, so such violations have ‘eternal’ effects, shaping victims’ futures professionally, socially and in terms of their health.
As seen in the cases above, such violations impede victims’ ability to go to school without fear of bullying, their freedom of expression without fear that more deepfakes will be generated, their sexual privacy, bodily integrity and reputation (United Nations General Assembly, 1948; United Nations General Assembly, 1979; United Nations General Assembly, 1989).
Due to the lack of regulation, the continuous growth of deepfake communities and easy access to deepfake creation tools online, sexual violence orchestrated via AI deepfakes is increasing. We must therefore ask ourselves: given the ever-developing nature of cyberspace and our technological capacities, how can we ensure that these tools are not weaponised against girls and women and their rights?
When a state signs and ratifies international human rights treaties, it has a duty to protect, fulfil and respect the rights detailed in them. As a result, many states - some under immense pressure - have started to act against sexual violence in cyberspace against young girls. For example, the UK’s Online Safety Act makes sharing deepfake pornography a crime. The Act aims to protect girls by placing duties on platforms to put in place processes and systems to detect deepfakes, and by ensuring that the views of women and girls are integrated into the process (Department for Science, Innovation & Technology, 2024). Many sites have introduced alerts for deepfake videos, but it is still up to victims, or those around them, to report an image or video before it is taken down. These measures are weak prescriptions for the symptoms of deepfake pornography: they counter deepfakes only after they have been produced. What is needed is policy, reform and action that tackle the causes of deepfake pornography targeted at girls.
Additionally, cyberspace is not confined to one state, one person or one business. As detailed in Human Rights Council resolution 38/5 on preventing and responding to violence against women and girls in digital contexts (United Nations General Assembly, 2018), there is an urgent need for active global cooperation between states, private actors, judicial authorities and the owners of digital platforms to detect, report and investigate such violations. Technology has bolstered globalisation, meaning people across the world can communicate with each other within seconds - but it also means that deepfake pornography can be shared globally within seconds. Something that can be shared globally needs a global response.
We need to look at increased transparency of AI platforms, legislation, public awareness, education, and AI itself as tools to combat the weaponisation of AI deepfakes against young women and girls.
AI deepfake creation platforms should put in place procedures and systems that increase their transparency, including open sourcing, so that what is being created can be monitored and regulated. Deepfake creation platforms and social media platforms should also work towards introducing deepfake and pornography detection technologies. This would help prevent the sharing of deepfake pornography and deter people from creating it in the first place.
The legal landscape around AI still has much catching up to do. As technology continues to develop, our legal frameworks lag further and further behind, making room for the weaponisation of technological and cyber developments against girls. Much legislation has tackled gender-based abuse and AI separately, when the two should be addressed together. There needs to be more legislation, both domestic and global, that focuses on gender-based violence perpetrated through AI.
There needs to be increased public discussion of the causes and impacts of AI and image-based sexual violence on the rights of girls, so that cases of deepfake pornography are not overlooked or sidelined because they are ‘not real.’ Programmes and courses should be developed to teach the public, governing and enforcement bodies, private actors, and digital platforms about the implications of deepfake pornography for girls’ rights, how to reduce the creation of such content, how to inform and enforce legislation in this area, and how to stop its spread. Actors should be driven not only by a desire to comply with and promote the human rights of women and girls globally, but also by states’ and organisations’ duty to properly address human rights violations.
However, questions arise as to whether technology and legislation will ever move at the same speed, and whether global collaboration is feasible, especially with regard to cyberspace and state sovereignty. What we do know is that collaboration across the legal, technological and security sectors is vital in countering the weaponisation of AI deepfakes against girls.
Deepfakes pose a great challenge to society and to the bodily integrity of many girls and women - victims and non-victims alike - worldwide. Girls’ access to technology, freedom of expression in public forums, experience of gender equality, and more are impeded by the creation and spread of deepfake pornography. The digital world should be a space that facilitates the growth and development of all people, but it is slowly becoming a space in which girls fear becoming the next victim of deepfake pornography. I hope that actors globally open their eyes and ears to the voices of victims and take the coordinated steps needed to counter AI deepfake pornography and protect the rights of girls in every corner of the world.
*The opinions expressed on this blog are those of the Young Experts and do not necessarily reflect the views or official positions of the Girls Human Rights Hub. The content shared here is intended to provide insights and perspectives on girls’ human rights and wider human rights issues, but it is important to recognise that individual opinions may vary.*
Bibliography:
Department for Science, Innovation & Technology (2024). Online Safety Act: Explainer. [online] GOV.UK. Available at: https://www.gov.uk/government/publications/online-safety-act-explainer/online-safety-act-explainer#how-the-act-will-be-enforced.
Mackenzie, J. and Choi, L. (2024). South Korea: The deepfake crisis engulfing hundreds of schools. [online] BBC News. Available at: https://www.bbc.co.uk/news/articles/cpdlpj9zn9go.
Ofcom (2024). A deep dive into deepfakes that demean, defraud and disinform. [online] Ofcom. Available at: https://www.ofcom.org.uk/online-safety/illegal-and-harmful-content/deepfakes-demean-defraud-disinform/.
Saner, E. (2024). Inside the Taylor Swift deepfake scandal: ‘It’s men telling a powerful woman to get back in her box.’ The Guardian. [online] 31 Jan. Available at: https://www.theguardian.com/technology/2024/jan/31/inside-the-taylor-swift-deepfake-scandal-its-men-telling-a-powerful-woman-to-get-back-in-her-box.
Security Hero (2023). 2023 State Of Deepfakes: Realities, Threats, And Impact. [online] Securityhero.io. Available at: https://www.securityhero.io/state-of-deepfakes/#key-findings.
Somers, M. (2020). Deepfakes, explained. [online] MIT Sloan. Available at: https://mitsloan.mit.edu/ideas-made-to-matter/deepfakes-explained.
United Nations General Assembly (1948). Universal Declaration of Human Rights. [online] Refworld. Available at: https://www.refworld.org/legal/resolution/unga/1948/en/11563.
United Nations General Assembly (1979). Convention on the Elimination of All Forms of Discrimination against Women, New York, 18 December 1979. [online] OHCHR. Available at: https://www.ohchr.org/en/instruments-mechanisms/instruments/convention-elimination-all-forms-discrimination-against-women.
United Nations General Assembly (1989). Convention on the Rights of the Child. [online] Refworld. Available at: https://www.refworld.org/legal/agreements/unga/1989/en/18815.
United Nations General Assembly (2018). Accelerating efforts to eliminate violence against women and girls: preventing and responding to violence against women and girls in digital contexts. [online] Available at: https://ap.ohchr.org/documents/E/HRC/d_res_dec/A_HRC_38_L6.docx.