
August 17, 2023

Generative AI in Government: How to avoid a widening 'digital divide'

The private sector, renowned for its innovation and agility, often surpasses the government when it comes to the pace of digital innovation and adoption. This has bred a concept known as the private-public ‘digital divide,’ whereby governments fall behind private sector actors in providing their ‘users’ - i.e. citizens - with the seamless digital experiences they have learned to expect from private companies. The advent of generative AI threatens to exacerbate this divide.

While generative AI has the potential to provide tremendous value in both the public and private sectors, a lack of clear rules and standards regarding its permissible application will likely privilege early adoption in the private sector, as governments are expected to govern in accordance with certain principles that do not apply to private sector actors.

Below we discuss two key blockers to AI innovation in the public sector, and outline some early mitigation strategies for government actors to consider when thinking through how they might leverage generative AI for the public benefit. 

What is generative AI?

Generative AI refers to a class of algorithms capable of generating new content, such as text, images, videos, music, and speech. It is distinguished from other types of artificial intelligence by the fact that it produces novel outputs rather than, say, classifications or predictions - though training and running these models requires extremely large volumes of data and computing power.

General-purpose AI models are trained by scraping, analysing, and processing large volumes of publicly available data from the internet. To generate responses, large language models rely on statistical methods to make probabilistic guesses about which words - or ‘tokens’ - are most likely to come next in a sequence.
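
To make the idea of a ‘probabilistic guess’ concrete, here is a minimal, illustrative sketch in Python. The word probabilities below are entirely made up for the example; a real large language model learns distributions over tens of thousands of tokens from its training data, but the basic sampling principle is similar.

```python
import random

# Toy "model": probabilities of the next word given the current word.
# (Illustrative only -- real LLMs learn these distributions from vast corpora.)
NEXT_WORD_PROBS = {
    "the":     {"citizen": 0.5, "service": 0.3, "policy": 0.2},
    "citizen": {"applied": 0.6, "asked": 0.4},
    "service": {"was": 0.7, "improved": 0.3},
    "policy":  {"was": 0.5, "changed": 0.5},
}

def generate(start: str, max_words: int = 5) -> str:
    """Build a short sequence, one probabilistic guess at a time."""
    words = [start]
    for _ in range(max_words):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the citizen asked"
```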

Generative AI applications can be understood to fall into four broad categories:

  • Chatbots: Systems that simulate human conversation, often in question-and-answer format
      • Examples of leading products/developers: ChatGPT (OpenAI), Bard (Google), Claude (Anthropic), NeMo (NVIDIA)
  • Image generators: Systems that generate images based on an input or “prompt”
      • Examples of leading products/developers: DALL-E (OpenAI), Stable Diffusion (Stability AI)
  • Video generators: Systems that generate videos based on an input or “prompt”
      • Examples of leading products/developers: Make-a-Video (Meta), Synthesia, HeyGen
  • Voice clones: Systems that generate speech and voice sounds
      • Examples of leading products/developers: Resemble AI, Murf, ReadSpeaker

Potential use cases for GenAI in government

Although the exploration of, experimentation with, and application of generative AI in the public sector remains nascent, it’s clear that if leveraged safely and effectively, this emerging technology could add value to government operations and public service provision in a number of meaningful ways. 

On one end of the scale, generative AI could be leveraged in simple ways to drive efficiency and productivity through the automation of tasks, such as generating document summaries and tailored messaging content for citizen engagement. On the higher-impact end of the scale, these models could be used in more complex ways to drive innovation in policy making and service provision. This could take the form of developing bespoke generative AI tools powered by the data owned and accessed by a specific government function or department, equipping civil servants with the ability to query for insights on the status of their service provision and its impact on stakeholders, or to explore the potential impact of a newly proposed policy, among other uses.
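
As a rough illustration of the simpler end of that scale, the sketch below shows what a document-summarisation helper might look like if a department exposed a generative model behind an internal API. The endpoint, authentication scheme, and response format here are assumptions made purely for illustration, not a real government or vendor API.

```python
import requests

# Hypothetical example: the endpoint, API key, and payload format are
# assumptions for illustration only.
LLM_ENDPOINT = "https://llm.internal.example.gov.uk/v1/complete"
API_KEY = "REPLACE_ME"

def summarise_document(text: str, max_words: int = 150) -> str:
    """Ask a generative model for a plain-English summary of a document."""
    prompt = (
        f"Summarise the following document in at most {max_words} words, "
        f"in plain English suitable for a policy briefing:\n\n{text}"
    )
    response = requests.post(
        LLM_ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt, "max_tokens": 400, "temperature": 0.2},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["completion"]

# Usage (assuming the hypothetical endpoint above exists):
# print(summarise_document(open("consultation_response.txt").read()))
```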

What’s holding the public sector back?

Today’s generative AI models carry a variety of risks and are not yet well-regulated in most countries across the globe, including the UK. In this context, governments - given their mandate to serve the public interest in alignment with principles such as fairness and transparency - are more constrained than private companies in acting as early adopters of generative AI.

Below we outline two blockers we see as particularly important to the adoption of generative AI in the public sector. Understanding these blockers - and the early mitigation strategies available - can help public sector actors move beyond a purely risk-averse approach to generative AI, which threatens to widen the digital divide mentioned above, and towards safe experimentation and exploration.

Blocker 1: Well-documented GenAI risks can prove uniquely problematic in a public sector context

While developers of GenAI models undertake a variety of voluntary activities to manage and mitigate risks in the products they are building, a number of well-documented risks remain. Below we outline how several of these common risks prove particularly salient in a public sector context. While this list is by no means exhaustive, we think it is useful to frame a few of the most prominent risk areas in terms of how they present unique and often greater barriers within the public sector as compared to the private sector.

One of the most well-documented GenAI risks is the relatively high possibility of models providing false or misleading information. This risk stems from the fact that generative AI models make probabilistic guesses based on the data they are trained on, and are therefore not guaranteed to be accurate. Related to this is a concurrent risk of perpetuating unfair discrimination and representational or material harm, because the data used to train the models contain human biases that can be reinforced in the outputs. While both of these risks are of course a concern for private sector organisations, false or misleading information and perpetuated discrimination can be uniquely problematic for governments if these models are being used to make decisions which affect the lives of citizens.

The best - but by no means good - case scenario is that decisions made on bad information end up costing time and money to correct. The worst - and by no means unlikely - case is that these ill-informed or problematic decisions go unnoticed and have detrimental consequences for citizens interacting with a public service. And while private actors may generally be more willing to take on the risk of making less-than-perfect decisions in the short term in service of longer-term efficiencies through more sophisticated use of GenAI, it is not difficult to understand why a public sector organisation - which bears a direct responsibility to serve the interests of the public - would be less willing to assume this risk in the name of innovation and downstream returns. Furthermore, it would only take one high-profile case of bad public sector decision making through the use of GenAI to deepen public distrust in the technology and its use in the delivery of public services.

Aside from these topline risks around information accuracy and discrimination, there are additional - more technical - risks which are particularly salient for the public sector. One is the risk of private data leaks, which can arise both because private data may be present in the training corpus and because of the advanced inference capabilities of language models. Although private sector actors are also held to legal and regulatory standards of data privacy, again the public sector’s inherent responsibility to serve the public can understandably result in greater concern over the potential for private data being leaked through these models - especially when you consider the particularly sensitive areas which public services often touch (e.g. healthcare, employment, the legal and justice system).

Lastly, there is a relatively tangential - yet still significant - concern around the adverse environmental impact of building and operating LLMs. Similar to the more frequently discussed environmental impact of cryptocurrency mining, training and operating the large models underlying GenAI requires large volumes of energy (leading to carbon emissions) and freshwater (leading to resource depletion) to power and cool the data centres where computations are run. The tendency and willingness of actors in the private sector to prioritise innovation and efficiency over environmental concerns will not come as a revelation to most. This leaves another opportunity for the public sector to ‘fall behind’ in scaling the internal use of GenAI, as governments - some more than others - often hold themselves to higher standards of climate responsibility than private businesses (or are at least interested in posturing to that effect).

Blocker 2: Lack of legal certainty due to an absence of legislation

Currently, the UK does not have an official regulatory framework governing the development and application of AI, including generative AI. While HMG has released several publications outlining its stance and approach to AI, it has avoided developing an overarching AI law in favour of a more general “pro-innovation approach to AI regulation,” intended to minimise the risk of stifling AI innovation and slowing AI adoption. HMG presents an “adaptable approach” designed to future-proof AI regulation through context-specific, sector-led oversight in line with five principles set out in the government’s AI White Paper.

The AI White Paper’s five principles are designed to guide and inform the development and use of AI in all sectors of the economy:

  1. Safety, security and robustness - AI systems should function in a robust, secure and safe way throughout the AI lifecycle, with risks being continually identified, assessed and managed.
  2. Appropriate transparency and explainability - AI systems should be appropriately transparent and explainable to allow for outputs from an AI system to be interpretable and understandable.
  3. Fairness - AI systems should not undermine the legal rights of individuals or organisations, result in discrimination against individuals or create unfair market outcomes.
  4. Accountability and governance - Governance measures should be in place to ensure effective oversight of the supply and use of AI systems, with clear lines of accountability established across the AI lifecycle.
  5. Contestability and redress - Where appropriate, users and other impacted third parties should be able to contest an AI decision or outcome that is harmful or creates material risk of harm.

These principles have, for now, been introduced on a non-statutory basis, with the expectation that regulators will issue guidance relevant to their sectors on how the principles interact with existing legislation and how best to achieve compliance. The thinking behind this approach is that regulators are best placed to understand the risks in their sectors, so the new regime would enable them to take a proportionate approach to regulating AI. However, this guidance is non-binding, and while it aligns with some international standards and principles for the use of AI, it is unclear how enforcement will operate in practice without giving existing sector-specific regulators the necessary legal powers and a clear mandate. Furthermore, this uncertainty presents a risk of inconsistent and contradictory guidance being produced by different regulators, particularly where an AI system falls within the remit of more than one regulator. Finally, the guidance does not provide a clear view of permissible applications of AI, including GenAI, in government. While the UK government has provided some brief guidance on GenAI use for civil servants, this is by no means a comprehensive set of strategies, policies and frameworks for deploying GenAI across the public sector.

While this ‘pro-innovation’ approach is likely to facilitate experimentation and implementation of generative AI in the private sector, the lack of clear standards and rules for permissible generative AI applications - i.e. the lack of legal certainty - is likely to impede experimentation with this new technology in the public sector. Government actors, in the absence of official laws and policies, will likely opt for a more risk-averse approach given the risks associated with generative AI.

Where can we go from here?

While the blockers we’ve outlined above must be taken seriously, there are steps the government can take to understand, experiment with, and implement generative AI tools safely, rather than forgo entirely the positive value this technology could add to its work.

First, as AI tools of all kinds continue to mature at a rapid pace, there is reason to expect more specific and enforceable regulatory frameworks to be established in the near future. It will be essential that these frameworks include specific guidance and policy on use within government to ensure the public sector is not left behind in terms of generative AI innovation and adoption.

Moreover, while governments can and should take the risks outlined above seriously, there are steps public sector actors can take to address some of the bigger risks around misinformation, transparency and bias, empowering government teams to engage responsibly with generative AI tools. Below we’ve outlined a few of these mitigation strategies:

  • Monitor accuracy and bias within models by deploying monitoring software to assess outputs in real time (a minimal sketch of what this might look like follows this list)
  • Make sure the use case is appropriate for the algorithm (e.g. offer an expert-driven risk score using a purpose-built framework)
  • Ensure data is investigated thoroughly at the prototype stage, and that all relevant data sources have been considered; external novel data sources can provide interesting and robust features that improve the model
  • Take a human-led approach to model building, and make corrections for certain features; for example, don’t introduce specific data points to a model as a feature if you don’t need to, especially where there is a particular risk of perpetuating existing lines of bias or discrimination (e.g. ethnicity or racial background)
  • Train staff in data ethics and best practices early and consistently; accreditation is fast becoming popular among data teams, so it will pay to get ahead of the curve
  • Use white-box algorithms which provide transparency into how a model came to its conclusion, and audit models regularly
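
To make the first of these strategies slightly more concrete, the sketch below shows one very simple way a team might log model-assisted decisions alongside later-verified outcomes and compare accuracy across groups to spot potential disparities. The field names, groups, and threshold are assumptions for illustration; real monitoring would draw on established fairness tooling and far richer metrics.

```python
from collections import defaultdict

# Illustrative sketch only: field names ("group", "predicted", "actual") and
# the disparity threshold are assumptions, not a real monitoring tool.

def accuracy_by_group(records: list[dict]) -> dict[str, float]:
    """Compute decision accuracy per group from logged, later-verified records."""
    totals, correct = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record["group"]] += 1
        correct[record["group"]] += int(record["predicted"] == record["actual"])
    return {group: correct[group] / totals[group] for group in totals}

def flag_disparities(accuracies: dict[str, float], tolerance: float = 0.05) -> list[str]:
    """Flag groups whose accuracy falls notably below the best-performing group."""
    best = max(accuracies.values())
    return [group for group, acc in accuracies.items() if best - acc > tolerance]

# Example with toy data: two logged decisions per group.
records = [
    {"group": "A", "predicted": "eligible", "actual": "eligible"},
    {"group": "A", "predicted": "eligible", "actual": "ineligible"},
    {"group": "B", "predicted": "ineligible", "actual": "ineligible"},
    {"group": "B", "predicted": "eligible", "actual": "eligible"},
]
accuracies = accuracy_by_group(records)
print(accuracies)                    # {'A': 0.5, 'B': 1.0}
print(flag_disparities(accuracies))  # ['A']
```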

While this is only a brief and preliminary list, we hope that this can serve as a starting point to inform government actors’ consideration of different methods and approaches to confidently engage with generative AI tools today - ultimately to deliver better outcomes for the citizens they serve.

Julie Michlal

Senior Associate
