On Thursday 24th September, PUBLIC hosted a panel at the New Statesman Labour Conference on how technology can help mitigate the threat of disinformation in our digital world. Chaired by New Statesman Tech Editor Oscar Williams, the panel featured Darren Jones MP, Chair of the Business, Energy and Industrial Strategy Select Committee; Nina Schick, broadcaster and author; Imran Ahmed, Chief Executive of the Centre for Countering Digital Hate; and Andy Richardson, PUBLIC’s CTO. You can watch the full panel below.

To give the audience some context, Darren Jones MP opened the panel with a statement on the Online Harms Bill, which has been a work in progress since 2019 but has yet to reach Parliament due to what he described as a “movable feast” of issues, ranging from consumer use of platforms through to national security. He reassured listeners that when the Bill does reach Parliament, politicians will be debating “what truth means, what harm means and how we define those issues”.

Next up was broadcaster and author Nina Schick, who has a background in information warfare, geopolitics and disinformation. Nina spoke about the premise of her book ‘Deep Fakes and the Infocalypse’, which argues that “our entire information ecosystem has become increasingly dangerous and corrupt”. She also made clear that technology itself is not to blame for disinformation: it merely amplifies human intentions. What she really wanted to explore was how these issues will become more potent with recent advances in AI, specifically AI-generated ‘fake media’ in the form of video, which will soon be accessible to anyone.

Imran Ahmed, Chief Executive of the Centre for Countering Digital Hate, described his particular interest in how digital spaces are used by hate and disinformation actors, and in how the epistemic anxiety of not knowing whom to trust will significantly impact our futures.

Last to outline his position was PUBLIC’s CTO Andy Richardson, who identified the role of capitalism in how people use technology, stating: “People will utilise technology if it saves them money, makes them money or puts their competitors out of business”. On what technology can do to stop the spread of disinformation, Andy stressed that these issues are borderless and ubiquitous, which makes the jurisdictional and policy landscape far more difficult. He argued that capitalist incentives may be the only way to encourage actors to curb the proliferation of disinformation.

What has the infodemic of disinformation around coronavirus taught us?

The pandemic has certainly accelerated the spread of misinformation and disinformation, and many different actors have become involved in spreading ‘fake news’. Addressing the geopolitics of information warfare around Covid-19, Nina spoke to the panel about examples of China and Russia being able to “infiltrate western information spaces”, and cited the example of China spreading the ‘genesis myth’: the idea that Covid-19 didn’t originate in China but was instead planted by the CIA. She also mentioned that other countries, such as North Korea and Iran, have begun to get involved, as have individual actors such as David Icke, who have used their online platforms to spread conspiracy theories. Nina stated: “Bad information is dangerous, but in the case of Covid it literally kills”.

Panel chair Oscar Williams identified three main types of disinformation perpetrator: foreign organisations infiltrating information spaces, members of the general public who may not be particularly well informed, and politicians using spin to bolster their reputation or political agenda. He asked Darren Jones: “Is it important to distinguish between campaigns generated by foreign actors and those spread by political broadcasts?” Darren replied, “You have to be solutions based and practical”, arguing that we need to find points of agreement across borders and jurisdictions, and that it is the job of governments and regulators to step up and tackle these issues.

Has there been enough research into the consequences of disinformation?

While it is very hard to quantify the impact disinformation has had on society, Andy Richardson argued that with regard to Covid-19, “The population has never been so well informed about a single issue”. He cited a recent study by the Reuters Institute at Oxford University, which collated the information presented to the public about the pandemic and found that people, on the whole, say they would rather hear from scientists than politicians. Andy argued this could be due to an erosion of the general public’s trust, linking it to the notion that we now live in a ‘disinformation age’.

What can we do to stem the tide of disinformation, and how do we restore trust in organisations?

Imran Ahmed prefaced his answer by highlighting the “epistemic anxiety that people feel over not knowing who to trust”. He went on to say that what the Centre for Countering Digital Hate has found works is “brute force de-platforming”. In the case of David Icke, for example, the CCDH put out a report and within 24 hours YouTube and Facebook had de-platformed him. Imran told the panel that although some argue de-platforming individuals only means they pop up somewhere else, it does mean they no longer benefit from algorithms that prioritise controversial content. To tackle disinformation, Imran claimed, “We as the public will need to re-evaluate what we understand information to be”.

Are there particular technological solutions that are effective in tackling disinformation? 

Nina Schick presented the panel with a two-pronged approach to tackling disinformation. The first part of the solution, she argued, is diagnosing the problem correctly and recognising it as one of the biggest challenges facing society going forward. Although disinformation is as old as civilisation, it has never been as potent as it is today, and it will become more potent still as AI and synthetic media permeate the information ecosystem. We will only be able to tackle this, she argued, if we “build society-wide resilience”.

The next step, she argued, is utilising technical solutions alongside human solutions such as regulation, education, digital literacy and policy. Nina stressed that technical solutions will only work if there is a will to adopt them. She used the example of deep fakes, which she argued will soon become so convincing that humans won’t be able to detect them, leaving us to rely on AI tools for detection. Actors with an interest in showing that their media content is authentic will need to invest in technology to demonstrate this, such as a watermark proving that a piece of content is real; this, she argued, would give consumers a much-needed layer of protection. Nina told the panellists that the next big question is where that ‘political will’ will come from. The best way forward, she argued, is for consumers to realise how dangerous disinformation is, so that the impetus for change comes from a “more grassroots initiative”.

Andy Richardson echoed Nina’s point, stating: “Politicians need to lead from the front and be clear on what is expected from the population”. He gave the example of teaching about cognitive bias from primary school onwards, arguing that society needs to educate itself on these issues from a young age.

Where do we go next? 

Closing the discussion, Imran Ahmed told the panel that “Consequences are how you socialise society”. He argued that by introducing consequences for spreading disinformation, we can begin to tackle these issues. Imran ended the session on an actionable note, concluding: “we are going to need to legislate to create economic, political and reputational consequences for the use of disinformation and hate”.