Disinformation and deepfakes played a part in the US election. Australia should expect the same
- Written by Renee Barnes, Associate professor of Journalism, University of the Sunshine Coast
As America takes stock after Donald Trump’s re-election to the presidency, it’s worth highlighting the AI-generated fake photos[1], videos[2] and audio[3] shared during the campaign.
A slew[4] of fake videos and images[5] shared by Trump and his supporters purported to show his opponent, Kamala Harris, saying or doing things that did not happen in real life.
Of particular concern are deepfake videos[6], which are edited or generated using artificial intelligence (AI) and depict events that didn’t happen. They may appear to depict real people, but the scenarios are entirely fictitious.
Microsoft warned[7] in late October that:
Russian actors continue to create AI-enhanced deepfake videos about Vice President Harris. In one video, Harris is depicted as allegedly making derogatory comments about former President Donald Trump. In another […] Harris is accused of illegal poaching in Zambia. Finally, another video spreads disinformation about Democratic vice presidential nominee Tim Walz, gaining more than 5 million views on X in the first 24 hours.
AI has enabled the mass creation of deepfake videos, which poses a threat to democratic processes[8] everywhere.
If left unchallenged, political deepfake videos could have profound impacts on Australian elections.
It’s getting harder to spot a deepfake
Images have stronger persuasive power than text. Unfortunately, Australians are not great at spotting fake videos and images[9].
The prevalence of deepfakes on social media is particularly concerning, given it is becoming harder to tell which videos are real and which are not.
Studies suggest people can accurately identify deepfake facial images only 50% of the time[10] (akin to guessing) and deepfake faces in videos just 24.5% of the time[11].
AI-based detection methods perform only marginally better than humans. They also become less effective when videos are compressed, which is necessary for sharing on social media.
As Australia faces its own election, this technology could profoundly impact perceptions of leaders, policies, and electoral processes.
Without action, Australia could become vulnerable to the same AI-driven political disinformation seen in the US.
Deepfakes and disinformation in Australia
When she was home affairs minister, Clare O'Neil warned[12] that technology is undermining the foundations of Australia’s democratic system.
Senator David Pocock demonstrated the risks by creating deepfake videos[13] of both Prime Minister Anthony Albanese and Opposition Leader Peter Dutton.
The technology’s reach extends beyond federal politics. For example, scammers successfully impersonated[14] Sunshine Coast Mayor Rosanna Natoli in a fake video call.
We’ve already seen deepfakes in Australian political videos, albeit in a humorous context. Think, for example, of the deepfake purporting to show Queensland premier Steven Miles[15], which was released by his political opponents.
While such videos may seem harmless and are clearly fabricated, experts have raised concerns about the potential misuse of deepfake technology in future[16].
As deepfake technology advances, there is growing concern about its ability to distort the truth and manipulate public opinion. Research shows political deepfakes create uncertainty and reduce trust in the news[17].
The risk is amplified by microtargeting[18] – where political actors tailor disinformation to people’s vulnerabilities and political views. This can end up amplifying extreme viewpoints and distort people’s political attitudes[19].
Not everyone can spot a fake
Deepfake content encourages us to make quick judgments[20] based on superficial cues.
Studies suggest some people are less susceptible to deepfakes[21], but older Australians are especially at risk. Research[22] shows deepfake detection accuracy decreases by 0.6% with each year of age.
Younger Australians who spend more time on social media may be better equipped to spot fake imagery or videos[23].
But social media algorithms, which reinforce users’ existing beliefs, can create “echo chambers[24]”.
Research shows people are more likely to share[25] (and less likely to check) political deepfake misinformation when it shows their political enemies in a poor light.
With AI tools struggling to keep pace with video-based disinformation, public awareness may be the most reliable defence.
Deepfakes are more than just a technical issue — they represent a fundamental threat to the principles of free and fair elections.
References
- ^ photos (www.theguardian.com)
- ^ videos (www.bloomberg.com)
- ^ audio (www.nbcnews.com)
- ^ slew (www.washingtonpost.com)
- ^ images (x.com)
- ^ deepfake videos (www.bloomberg.com)
- ^ warned (blogs.microsoft.com)
- ^ threat to democratic processes (www.bendigoadvertiser.com.au)
- ^ videos and images (apo.org.au)
- ^ only 50% of the time (arxiv.org)
- ^ just 24.5% of the time (arxiv.org)
- ^ warned (www.afr.com)
- ^ creating deepfake videos (www.abc.net.au)
- ^ scammers successfully impersonated (www.abc.net.au)
- ^ Queensland premier Steven Miles (www.theguardian.com)
- ^ in future (journals.sagepub.com)
- ^ reduce trust in the news (journals.sagepub.com)
- ^ microtargeting (www.tandfonline.com)
- ^ political attitudes (journals.sagepub.com)
- ^ quick judgments (agipubs.faculty.ucdavis.edu)
- ^ less susceptible to deepfakes (academic.oup.com)
- ^ Research (www.nature.com)
- ^ spot fake imagery or videos (www.sciencedirect.com)
- ^ echo chambers (www.pnas.org)
- ^ more likely to share (www.emerald.com)