
August 29, 2023

The role of GenAI in enabling online harms: How do new tools pose unique risks to online safety?

Generative AI (GenAI) has emerged as a powerful tool that is transforming various industries and may push the boundaries of human creativity. As the potential of this technology continues to unfold, it is crucial to also consider the risks that GenAI may pose to online safety. In this blog post, we explore how the use and misuse of GenAI can act as a catalyst for online harms.

For an introduction to GenAI and a broader exploration of the risks and barriers around public sector adoption, check out our recent blog post on Generative AI in Government.

Navigating GenAI’s potential 

The development and deployment of GenAI models have rapidly gained traction in the tech landscape over the past year. The rise in investment in GenAI companies, as evidenced by the growth from $612.8 million in 2022 to a staggering $2.3 billion in funding in 2023, indicates that investors are confident in the technology’s transformative potential. Similarly, tech companies are making their own GenAI tools publicly available, such as Google’s conversational AI, Bard; Adobe’s Generative Fill feature; and OpenAI’s image generator, DALL-E.

While GenAI’s emerging opportunities are worth exploring, it is essential for us to think beyond the hype and examine the potential adverse effects of this groundbreaking technology to determine how best to intervene - whether through product, policy, or other approaches - to safeguard citizens online. This is particularly critical because, since the release of OpenAI’s ChatGPT in November 2022, GenAI has been available to the general public, demanding careful consideration to ensure that individuals employ it safely and responsibly. How can we measure its impact on users' online experiences? More specifically, how can we ensure that the content produced by GenAI is both safe for its users and not misused to produce or perpetuate harm?

With the aim of deepening our shared understanding of GenAI’s impact on online safety, PUBLIC carried out extensive desk research, reviewing 40+ sources from academia, government, industry and civil society. Here are our key findings.

GenAI as a tool for perpetuating online harms

Through our research into the risks associated with the use of GenAI, we observed that the misuse or misappropriation of this technology’s capabilities can enable a wide variety of online harms, ranging from mis- and disinformation and fraud to intimate image-based abuse and Child Sexual Abuse Material (CSAM) production. The impact of GenAI on online harms is two-fold: malicious actors can leverage the technology to carry out harmful operations, and users may harm themselves through their interactions with GenAI.

While traditional AI is associated with risks such as bias in decision-making, GenAI’s content generation abilities add new potential risks by producing material that is harmful in nature or can be used in malicious operations. Our research identified a few characteristics of GenAI that, despite not being inherently harmful, can be leveraged to amplify or create harm.

High-quality Synthetic Media

GenAI tools can produce synthetic media, defined as content that has been generated or manipulated to appear to be based on reality, despite being artificial. Although synthetic media existed prior to the emergence of GenAI (e.g. manually manipulated pictures created using photo editing software), GenAI enables the average citizen to create synthetic media more easily, and perhaps more convincingly. GenAI can currently produce high-quality outputs that convincingly fool humans - and sometimes even AI detectors - into thinking the AI-generated content is of human origin. For instance, research shows that humans cannot distinguish AI-generated tweets from tweets written by real Twitter users.

The ability to blur the distinction between AI-generated and authentic content can therefore be leveraged by malicious actors to intentionally deceive targeted individuals or communities. AI-powered voice-cloning technologies are already enabling fraudsters to impersonate relatives or professionals and carry out more convincing scams. Through its ability to produce high-quality synthetic media, GenAI can enable malicious actors to produce realistic harmful material and further perpetuate harms against people online.

Rapid output generation at scale

GenAI can be leveraged to automate the production of content at near-instantaneous speed, allowing users to complete tasks at a much faster rate and with increased efficiency. This ability can encourage malicious actors to propagate harm at a large scale and with minimal time input, amplifying the spread and scope of online harms. Such capabilities are particularly harmful in scaling fraudulent operations, mis- and disinformation, or harassment against targeted individuals.

Recently, research by NewsGuard uncovered over 380 unreliable AI-generated news websites that publish AI-written news across a range of languages and themes. While some authors of these websites seemingly leveraged GenAI’s ability to automate content production at pace to generate revenue from programmatic advertising, others used automated AI-generation processes to knowingly propagate harm. The research also identified state-funded disinformation websites that use AI text generators to back false claims. Through its ability to generate content quickly and at scale, GenAI can enable malicious actors to increase the scope of their harm with limited time input, particularly for mis- and disinformation operations.

Low technical barriers to entry

GenAI models enable malicious actors with limited technical expertise to engage in harmful online activities. First, GenAI-powered conversational agents can help more and more actors educate themselves on how to carry out online harm operations. Voicing this concern, Europol notes that while information provided by such agents is already “freely available on the internet, the possibility to use the model to provide specific steps by asking contextual questions means it is significantly easier for malicious actors to understand and subsequently carry out various types of crime”. Second, GenAI lowers the technical barriers to producing both high-quality and scaled synthetic content. From simple prompts, users can produce complex content such as voice clones or deepfake imagery and scale their abuse operations, for example by using ChatGPT to generate fake social media content to promote a fraudulent investment offer.

Replicated versions of the now offline website DeepNude can generate synthetic non-consensual intimate imagery without requiring its users to have any knowledge of coding or of manually altering images. In addition, in the US, the FBI has issued a public service announcement warning of a rise in reports from ‘victims, including minor children and non-consenting adults, whose photos or videos were altered into explicit content’, linking it to recent AI-enabled technology advancements that lower the accessibility barriers to producing harmful content. The low technical barrier to entry enables a wide range of malicious actors to leverage GenAI models to generate and spread online harms - especially identity fraud, scams, synthetic intimate image-based abuse, and large-scale influence operations - which would otherwise have required specialist technical knowledge.

It is important to note that the characteristics above are a select few of the many we identified during our research. We anticipate that the landscape of GenAI’s potential misuse for online harm will continue to expand beyond the characteristics outlined above. As such, we will continue to reflect on these use cases and their prevalence over time, and urge our readers to do so as well.

Unintended harmful consequences of using GenAI

Even when GenAI is used without ill intent, vulnerabilities within the technology can produce harmful unintended outputs.

  • GenAI, like AI generally, can produce biased outputs, which can be harmful or discriminatory against certain communities. This is because human biases present in the data used to train an AI model are in turn reflected in the AI’s outputs. If biases in the model go unchecked, biased outputs can be generated, potentially reinforcing harmful and discriminatory views. For example, reports have found that open-source AI image generators perpetuate harmful stereotypes by linking criminality to skin colour or leadership roles to gender.
  • Hallucinations - invented or fake outputs from AI models - occur when the AI system cannot correctly interpret the data or prompt it received but still produces an output inconsistent with what humans know to be true. This means that if a user prompts a GenAI model to provide analysis or factual information, and the output is completely or partly hallucinated, the model can unintentionally mislead or misinform the user. Hallucinations can take several forms, such as citing non-existent articles, giving inaccurate medical explanations, or fabricating events entirely (e.g. stating who holds the world record for crossing the English Channel entirely on foot). Hallucination is a GenAI-specific risk which calls for close monitoring of AI outputs, without which misinformation can be unintentionally spread.

Due to the risks of bias and hallucination, both users and developers must pay attention to the quality and reliability of AI-generated content before sharing it or using it to inform decisions. Additionally, it is worth noting that users’ prompts to GenAI can sometimes be unclear, causing the AI to misunderstand queries and give inaccurate or unreliable answers. This means that alongside working towards the safe development and deployment of GenAI models, user awareness is key to enhancing the accuracy and effectiveness of generated outputs.

Conclusion

When we prompted ChatGPT to explain what online safety harms could arise with the use of GenAI, it responded with concerns around content generation, including “deepfake generation, enabling fabricated content that deceives and defames individuals…and phishing” as well as specific harm types, including “propaganda…cyberbullying”. Our research has found that this is just a small part of the picture. 

At PUBLIC, we are building an evidence base around GenAI-enabled threats to inform decision-making and identify opportunities for targeted interventions for both policymakers and safety tech providers. Our research demonstrated how GenAI can act as a catalyst for online harms, showing the importance of acknowledging that this technology’s potential - both for helping society and for harming it - is still unfolding. We encourage interested stakeholders in the Trust and Safety ecosystem to engage with us in developing a coherent understanding of GenAI-enabled threats, helping government, regulators, and industry stakeholders navigate this rapidly changing landscape - both in harnessing positive uses and in mitigating harms.

If you would like to learn more about this issue, or are interested in assessing GenAI risks to online safety, please reach out to us at maya@public.io.


Maya Daver-Massion

Former Team Member


Jess Taylor

Former Team Member

