May 14, 2024

Regulatory Innovation in the face of AI: Insights from a cross-regulatory discussion on innovation, compliance and growth

Regulators are increasingly under pressure from the Government to drive innovation, investment and growth. This is particularly salient when it comes to the UK Government’s ‘pro-innovation’ approach to regulating AI, where a key focus is ensuring that any mandatory measures do not dampen innovation and competition. However, this presents a challenge for regulators. In the face of market disruption and changing business models driven by rapid advancements in AI, how can UK regulators balance innovation and growth with compliance and consumer protection?

Based on published responses to DSIT’s white paper, UK regulators understand this need to adapt, and many are considering new approaches, tools and resources to deliver effectively. Yet efforts are often duplicated between regulators and across sectors, despite the huge opportunity for greater cross-regulatory communication and collaboration - particularly as many face similar challenges around AI.

In an effort to support cross-regulatory collaboration and innovation, PUBLIC recently convened a closed roundtable in partnership with key regulatory leaders and experts from Ofcom, the Health and Safety Executive (HSE), the Equality and Human Rights Commission (EHRC) and others. 

The discussion focused on an overarching question:

In response to market disruption from technologies like AI, how can regulators encourage innovation that drives competition, sustainable growth and compliance, while also leveraging innovative tools and processes themselves to deliver effectively at pace?

Key takeaways from the discussion

  1. Government needs to enable better cross-regulatory collaboration

Throughout the discussion, participants communicated a shared appetite for more Government-led initiatives supporting regulators to collaborate with one another. With regulators often operating as siloed entities without sufficient coordination from Government, the regulatory landscape is dotted with cases of duplicated efforts and inefficient resource allocation. Across the group, a core pain point was the fragmented approach to navigating market and regulatory disruption from AI. 

In the context of AI regulation, regulators face key capability gaps: access to market intelligence and data, timely horizon-scanning insights, and the technical expertise and data science skills needed to enable AI regulatory preparedness and joined-up implementation. There is a clear need for targeted pooling and sharing of capabilities to ensure preparedness for AI regulation. Where resources are limited, establishing a deployable pool of technical resources could especially benefit regulators seeking to experiment with new ideas.

In addition, participants raised a shared problem of ‘not knowing what we don’t know.’ The lack of central coordination often leaves regulators in the dark about what cooperation, information sharing and joint capabilities they might be missing out on. Standing up more Government-led collaboration initiatives would go a long way in closing this awareness gap and providing transparent, accessible routes for regulators to work together more efficiently.

In terms of what a collaborative regulatory approach could look like in practice, the Digital Regulation Cooperation Forum (DRCF) offers one useful model. That said, more formal, centrally funded collaboration models are needed that support a larger set of regulators across multiple sectors.

In practical terms, sponsoring departments across Government (e.g. DSIT, DBT) can play a critical role in helping regulators join the dots, particularly in the face of rapidly evolving markets around AI. Central to this approach are cross-regulatory cooperation, enhanced tooling, industry dialogue and strategic recruitment. By streamlining regulatory efforts, optimising resources and bridging existing capability gaps, this collective effort empowers regulators of all sizes to anticipate AI impacts and keep their regulatory frameworks adaptable.

  2. Regulators need practical standards and risk-based prioritisation for AI implementation 

While most participants welcomed the Government’s AI principles as flexible, uncontroversial and helpful guiding ideals, they raised questions about how these might translate into practical application and enforcement. Participants stated a core need for clear, concrete standards and risk-based frameworks to implement AI approaches effectively. The recent announcement of a new £10 million package from DSIT to boost regulators’ AI capabilities sparked discussion of research priorities and practical tools for effective regulation. While this financial support is welcome and important, participants acknowledged that the current funds may not be sufficient to address all regulatory needs adequately, or with the flexibility required to uphold product safety. Participants unanimously agreed on the need for central coordination of AI standards, market categorisation and prioritisation approaches in order to anticipate and address emerging risks effectively. 

Regulators face challenges in characterising the risks and harms from AI, particularly where regulatory remits are uncertain because the lines between virtual and physical products and services are increasingly blurred. Given the absence of common standards and assurance mechanisms for AI, participants encouraged a holistic approach to understanding emerging challenges through networks and institutions.

  3. From sandboxes and labs to upskilling and funds, there is a range of tools that offer valuable support to regulators

While AI presents important challenges for regulators, it also offers new opportunities to develop innovative regulatory ways of working and to adapt existing statutory instruments and levers. During the discussion, participants explored an array of tools available both to promote compliance and to accelerate market innovation responsibly. Participants shared their innovation approaches and existing tools, considering how these might be applied within the changing regulatory environment.

Participants discussed the importance of upskilling and innovation learning in order to deliver on the UK Government’s expectations around AI regulation. Despite varying risk appetites, regulators expressed a willingness to embrace calculated risks and widen their tolerance for experimentation, recognising the potential rewards in driving innovation, growth and competition. This underlined the importance of innovation learning and upskilling initiatives to reduce scepticism and enable the cultural transformation needed to harness the potential of AI innovation. 

Notably, central innovation programmes and funding mechanisms - such as the Regulators’ Pioneer Fund - have proven effective catalysts for regulatory innovation and collaboration. Examples from the HSE demonstrated how central funding - when effectively utilised - can validate concepts and enhance the adoption of innovative tools in day-to-day operations. Additionally, participants highlighted the value of regulatory sandboxes, dedicated research labs - such as the HSE Science and Research Centre - and technology trials in promoting smarter regulation and market innovation. 

There was strong agreement around the importance of conducting formal evaluations of these programmes - tailored to varying risk appetites and approaches to experimentation between regulatory teams - to both ensure they are effective and foster a strong culture of innovation. PUBLIC’s recently published guidebook on evaluating digital projects - as distinct from non-digital or traditional evaluations - provides practical advice for teams looking to perform such evaluations effectively.

Recommendations

Building on insights from this roundtable, PUBLIC has identified a set of strategic recommendations for both central government and regulators to foster innovation, compliance and growth in evolving regulated markets:

For Regulators
  • Regulators should develop a target operating model (TOM) that promotes agility and innovation to avoid falling behind on AI in their markets
  • Regulators should collaborate on shared trials and experiments, making use of DSIT’s £10 million package to boost regulators’ AI capabilities, as well as the next round of the Regulators’ Pioneer Fund
  • Regulators should review their current innovation models and capability maturity to address critical capability gaps and maximise value for money
For Central Government
  • Sponsoring departments - in particular DSIT as it relates to AI - should consider a broader set of pooled resources and centralised capabilities and tools to support regulatory preparedness for AI in a coordinated way across regulators, rather than a more fragmented approach led by pockets of collaboration between individual regulators

To find out more about how PUBLIC supports regulators to navigate evolving digital markets, get in touch with our Director of Strategy & Transformation, Daniel Fitter, at daniel.fitter@public.io.



Daniel Fitter

Director of Strategy & Transformation
