AFTER MSPs voted for John Swinney to become Scotland’s seventh First Minister on Tuesday, Sky News broadcast the new SNP leader’s speech live.

However, hundreds of thousands of people on social media instead saw a clip of the broadcast manipulated by artificial intelligence (AI).

It looked like Swinney, it sounded like Swinney, and it appeared to have come from a legitimate Sky broadcast. But it was completely fake.

The video – which was widely shared by figures such as GB News contributor Lee Harris and Spectator columnist Gareth Roberts – illustrated the point made by Regius Professor of Computer Science Wendy Hall one week before.

Hall, who co-chaired the UK Government’s 2017 AI review, argued that “the cat is out of the bag” – it is too late to protect this year’s General Election from the risks of AI-generated disinformation.

To learn more, the Sunday National spoke to two experts in the field – Dr Keegan McBride, a lecturer in AI, government, and policy at the University of Oxford’s Oxford Internet Institute, and Dr Nisreen Ameen, a senior lecturer at Royal Holloway, University of London and vice president of the UK Academy of Information Systems.

Here’s what they said about AI disinformation, its potential impact on democracy, and how to spot it for yourself.

Would the fake video of John Swinney have been easy to make?


THE AI-generated video of Swinney seems to have come from a random anonymous account on Twitter/X. It was first posted in the comments of another thread before being amplified in new posts from right-wing figures.

It doesn’t seem to have cost anyone a huge amount of effort, or been a large project. So, would it have been easy to make? The answer from both experts was the same – yes.

“There are many different ways to do it, some more complicated than others,” McBride said.

“But the fact of the matter is, if you want to do this stuff, it's not hard. If you're technically competent, you can figure it out fairly quickly.”

And if someone is aiming to spread an AI deepfake using audio only – as happened with a faked recording of Keir Starmer which was published on the day Labour’s conference began last October – it would be even easier.

Could AI deepfakes have a significant impact on democratic elections?

WHEN it comes to whether AI deepfakes will have a major impact on elections in the UK or around the world, the jury is out.

McBride argued that the ease with which AI deepfakes can be created does not necessarily mean they will easily have an impact.

The Oxford University expert pointed to research from his colleagues which he said had found that “when it comes to misinformation and disinformation, it's not really the supply that matters but the demand”.

He went on: “So, it doesn't matter if you can make 10 million AI-generated videos, you need an audience for them.”

The National: An AI deepfake of Kyiv mayor Vitali Klitschko made headlines in 2022 after duping European leaders

McBride argued that AI videos presented much more of a risk when specific people are being specifically targeted.

“We've seen world leaders tricked with deepfakes,” he said. “Whether they were talking to a fake version of the head of the African Union, like the Prime Minister of Estonia was tricked, or [Mayor of Kyiv Vitali] Klitschko, there was a deepfake of him that was doing calls throughout Europe.”

“I think that's a much more dangerous thing,” McBride added. “I think there's real potential for it to have an impact, but I don't think it's going to erode elections or our democratic processes.”

However, Ameen disagreed with that analysis. She argued that “AI algorithms can analyse user data, identify certain individuals, and then tailor the information or misinformation campaigns based on their biases”.

On a large scale, she warned, this could have a huge impact.


“I do think that it's really important to make sure that we educate the public about the dangers of AI,” Ameen said. “If not managed effectively … I do think it has and it can lead to really dangerous situations.

“We have billions of people eligible to vote in elections [in 2024]. So definitely there is an important message for the public to critically evaluate content before accepting that it's actually true.

“There are all sorts of things that can go wrong in terms of voter manipulation, for example, undermining trust, in terms of spending money as well, and of course we run the danger of also skewing results.”

How can you tell a video is an AI deepfake?

BOTH Ameen and McBride told the Sunday National that spotting an AI fake is getting more and more difficult.

As technology advances at pace, common tips for spotting AI-generated content are becoming less useful. For example, looking at people’s hands used to be a good idea, as AI was notoriously bad at generating fingers accurately.

However, the two experts shared some key tips on how to spot an AI fake:

  • Keep an eye on things in the background of the video. Are the physics slightly off? Are there objects that don’t make sense, or that don’t align as they would in the real world?

  • Look at the borders between things. Do they blend unnaturally?

  • Pay close attention to any text, which could appear on things like name plates, shop signs, or water bottles. Is it made up of actual words, or a kind of gibberish?

  • Listen to voices speaking. Do they sound almost too perfect? Or do they include the umms, errs, and repetitions that are natural in human speech?

  • Watch a person’s lips and mouth as they speak. Do the sounds align with the visuals?

  • Look at facial expressions and movements closely. Does anything seem unusual or unnatural?

  • Keep an eye on lighting in the video. Is it inconsistent in angle and shading? Are there other “visual artefacts” you can spot?

  • Verify sources. Make sure the video is coming from a news outlet or source you can trust before taking anything at face value.