AI, deepfakes and elections: where’s the harm?

March 22, 2024

Much is being written and said about the potentially malign influence of artificial intelligence on elections, some of it very alarming. The main focus is on deepfakes – artificially generated pictures, audio and video – that are exactly what they say on the label: fakes and forgeries. Just how concerned should we be?

Some experts are keen to play down the threat. Among them is Ciaran Martin, founding chief executive of the UK National Cyber Security Centre (part of GCHQ) and currently professor of practice in the management of public organisations at the Blavatnik School of Government, whom I heard speak at a recent Association of European Journalists briefing.

He tried to be reassuring.

“A lot of this is not completely new. It is just that the technology has made it easier and enables it to spread faster … electoral interference in its current form has been going on for some time. It pre-dates AI by a hundred years.”

He cited the example of the notorious Zinoviev Letter which was published by the Daily Mail during the 1924 General Election campaign. It purported to show proof of connections between the Soviet Union and the British Communist Party and, by implication, the Labour government. It was later proved to be a forgery but its publication just four days before the election contributed to the Labour government’s substantial defeat.

He examined various recent examples from the UK, Europe and United States of attempts to interfere in elections, highlighting where several interventions, in his view, had failed.

The deepfake audio of the Mayor of London, Sadiq Khan, that circulated around Remembrance Sunday last year was one example. It did not get much traction, Martin argued, primarily because his principal political opponents – the Conservative Party – and all responsible media outlets were not taken in by it and did not circulate it. That is true, but it is perhaps a trifle naïve to assume this significantly diminishes the risk of harm from deepfakes.

Given that all political parties and their leading figures are vulnerable to this sort of attack, there is a mutual interest in being very cautious about latching on to anything that looks or sounds suspect and exploiting it. The mainstream media is also learning quickly to take greater care in verifying anything that does not come from an identifiable, reputable source. Even so, that leaves plenty of scope for concern.

What about when social media is the dominant news outlet?
The growing proportion of people who rely on social media channels for their news is alarming. Several recent surveys have identified TikTok as the top source of news for Gen Z. In America it is increasingly popular as a news source for all generations, perhaps going some way to explaining the insanity of the US Presidential election race.

This brings us right back to the role of social media and the failure of all the major platforms to take responsibility for what is published on their platforms. They are publishers but have escaped nearly all the responsibilities that go with that role. It is fashionable to deride the “MSM” (mainstream media) but it is one of the key pillars of free, democratic societies. It is flawed. It is diverse. It does not always get things right or go about its role in ways that people find acceptable. But it is subject to the law and that law makes it ultimately responsible for what it publishes. Why are social media platforms not similarly regulated?

Recent moves such as the UK’s Online Safety Act go some way but fall short of creating a mechanism powerful enough to stop the harm from deepfakes and other malicious online content, which goes far beyond the political sphere.

Of course, it would undermine their business model if they were treated as publishers as they would suddenly have to employ vast numbers of people to scrutinise and check content. Pre-publication verification would cripple them. That does not mean we should not be moving in that direction. Protecting society is far more important than protecting the business interests of the tech and social media giants.

Alongside that, the MSM has to raise its game. Initiatives such as BBC Verify are a big step in the right direction. They bring to the fore what most media outlets have been doing for a very long time, helping reinforce the message that in this age of AI and social media you cannot always believe what you see and hear.

The debate about the place of AI in society has only just started and we must not let its darker applications obscure the good it can do. Neither must we dismiss its potential to be used for nefarious purposes. Take away the outlets and you take away the incentives to use AI to cause harm and undermine our democratic processes.

• The images used in this article were generated using the AI tools in the Adobe Stock media library.
