The Online Harms White Paper set out the UK government’s ambition to make the UK the safest place in the world to go online, and the best place to start and grow a digital business. The DCMS Safety Tech Sectoral Analysis further identified at least 70 dedicated Safety Tech providers working to tackle online harms in a rapidly growing UK marketplace.
In support of these policy priorities, DCMS launched the Online Safety Data Initiative (OSDI) to test methodologies to facilitate better access to higher quality data to support the development of technology to identify and remove harmful and illegal content from the internet. The initiative aims to design and prototype new approaches which ensure trusted parties can securely and ethically access the online harms data they need to develop new safety tech solutions.
Leading the Discovery Phase, PUBLIC conducted mixed-methods user research to identify and segment barriers to data sharing among Safety Tech companies. Based on more than 80 interviews with innovators, policymakers, social media platforms, and academic experts, we co-developed a series of technical interventions to improve access to data.
PUBLIC’s work, now being delivered in its Alpha Phase, is already helping to tackle barriers to sharing sensitive online harms data for the purpose of developing cutting-edge solutions.
To drive a standardised approach to describing and labelling online harms for safety tech firms, clients, and the future regulator, PUBLIC is building and testing a taxonomy of online harms. This is a natural initial step in developing a Safety Tech regulatory and standards ecosystem. Developed in close collaboration with Safety Tech firms, the high-level Online Harms Taxonomy and a proof-of-concept Suicide and Self-Harm Taxonomy are expected to improve adoption of common data labelling and harm-type definitions across the sector.
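To illustrate how a hierarchical taxonomy can drive common data labelling, the sketch below models harm types as a tree of categories that can be flattened into a shared label set for annotation. The category names, definitions, and structure are purely illustrative assumptions, not the actual OSDI taxonomy.

```python
from dataclasses import dataclass, field

@dataclass
class HarmCategory:
    """A node in a hierarchical online-harms taxonomy (illustrative only)."""
    label: str
    definition: str
    children: list["HarmCategory"] = field(default_factory=list)

    def all_labels(self) -> list[str]:
        """Flatten this subtree into the label set used for data annotation."""
        labels = [self.label]
        for child in self.children:
            labels.extend(child.all_labels())
        return labels

# Hypothetical fragment: labels and definitions are placeholders,
# not taken from the real Suicide and Self-Harm Taxonomy.
taxonomy = HarmCategory(
    label="suicide_and_self_harm",
    definition="Content relating to suicide or self-harm.",
    children=[
        HarmCategory("ssh_instructional", "Content describing methods."),
        HarmCategory("ssh_promotional", "Content encouraging the behaviour."),
    ],
)

print(taxonomy.all_labels())
```

A shared structure like this is what lets different firms and datasets agree on what a given label means, which is the adoption problem the taxonomy work targets.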
PUBLIC has developed a product evaluation standards framework based on extensive user research with the Safety Tech community. As a proof of concept, PUBLIC and Faculty are implementing a technical test suite for Safety Tech firms to put their AI models through their paces against a test dataset, labelled against the team’s Online Harms Taxonomy. Initially, the test suite will evaluate Safety Tech products that monitor unsafe suicide and self-harm content, with significant scope to expand to other harm types and products.
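At its core, a test suite of this kind scores a model's predicted labels against a gold-standard test set labelled with taxonomy harm types. The sketch below shows one plausible per-label metric, precision and recall; the function name, label strings, and sample data are assumptions for illustration, not the actual PUBLIC/Faculty implementation.

```python
def precision_recall(predicted: list[str], gold: list[str], label: str) -> tuple[float, float]:
    """Precision and recall for one harm-type label, comparing a model's
    predictions to gold-standard annotations item by item."""
    tp = sum(1 for p, g in zip(predicted, gold) if p == label and g == label)
    fp = sum(1 for p, g in zip(predicted, gold) if p == label and g != label)
    fn = sum(1 for p, g in zip(predicted, gold) if p != label and g == label)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical test data labelled against taxonomy harm types.
gold      = ["ssh_promotional", "safe", "ssh_promotional", "safe"]
predicted = ["ssh_promotional", "ssh_promotional", "safe", "safe"]
p, r = precision_recall(predicted, gold, "ssh_promotional")
print(f"precision={p:.2f} recall={r:.2f}")  # → precision=0.50 recall=0.50
```

Evaluating per label rather than with a single aggregate score matters here: a product may be strong on one harm type and weak on another, and a standards framework needs to surface that difference.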