Deepfakes – Seeing is believing?

    Does this person exist?

    You would be forgiven for believing this is a real person, but it was generated artificially by the website ‘thispersondoesnotexist.com’. The website uses a ‘Generative Adversarial Network’ (GAN), a machine-learning technique in which one network learns to produce synthetic faces while a second network learns to tell them apart from photographs of real people; trained against each other, the pair eventually produces strikingly realistic pictures of people who do not exist.
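    For readers curious about the mechanics, the sketch below illustrates the adversarial training loop at the heart of a GAN, written in PyTorch. The tiny fully connected networks, the random placeholder data and the chosen dimensions are assumptions made purely for illustration; this is a minimal sketch of the technique, not the model that actually powers the website.

        # Minimal GAN training loop (illustrative sketch; toy networks and random
        # placeholder data stand in for a real face dataset and a real model).
        import torch
        import torch.nn as nn

        latent_dim, image_dim = 64, 28 * 28   # assumed toy sizes

        # Generator: maps random noise to a fake "image" vector.
        generator = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, image_dim), nn.Tanh(),
        )

        # Discriminator: scores how "real" an image vector looks (raw logit).
        discriminator = nn.Sequential(
            nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )

        loss_fn = nn.BCEWithLogitsLoss()
        g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
        d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

        real_batch = torch.rand(32, image_dim)  # placeholder for real training images

        for step in range(100):
            # Train the discriminator to separate real images from generated ones.
            noise = torch.randn(32, latent_dim)
            fake_batch = generator(noise).detach()
            d_loss = (loss_fn(discriminator(real_batch), torch.ones(32, 1))
                      + loss_fn(discriminator(fake_batch), torch.zeros(32, 1)))
            d_opt.zero_grad()
            d_loss.backward()
            d_opt.step()

            # Train the generator to fool the discriminator into scoring fakes as real.
            noise = torch.randn(32, latent_dim)
            g_loss = loss_fn(discriminator(generator(noise)), torch.ones(32, 1))
            g_opt.zero_grad()
            g_loss.backward()
            g_opt.step()

    The important point is the feedback loop: as the discriminator gets better at spotting fakes, the generator is pushed to produce ever more convincing ones, which is why the resulting faces can be so hard to distinguish from photographs.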

    Since GANs first emerged in 2014, the technique, and websites like this one, have sparked intrigue online and contributed to growing concerns surrounding ‘Deepfakes’.

    Deepfakes are a technological phenomenon in which an individual’s voice and facial imagery are manipulated, typically using a GAN, so that the content appears authentic even though the individual never actually said or did what they appear to be saying or doing.

    The rise of Deepfakes has contributed to the ‘post-truth’ culture of fake news and fraudulently constructed messages on behalf of the supposed speaker. The dissemination of disinformation has polluted our political narrative and risks disrupting the business community.

    There have been numerous high-profile examples of Deepfakes and similar technology influencing contemporary political narratives.

    According to the Gabonese constitution, if the President is medically unfit to conduct the duties of the office, they must relinquish the post. The current head of state, President Ali Bongo, has been absent from the country for several months, prompting speculation and political machinations.

    In January this year, a video of President Bongo emerged online that bore the hallmarks of a Deepfake: the leader blinked inconsistently and appeared to have a rigid, unnatural posture. The Gabonese opposition seized the narrative, claiming that the video had been produced by the President’s team and was proof that he was not medically fit to appear in public or to hold office.

    The fallout from the video sparked an ultimately unsuccessful military coup against President Bongo, demonstrating the potential threat to political discourse posed by this technology. As with so many public crises, the major flaw in the President’s response was to leave a communications vacuum that the opposition gladly filled. This vacuum was the result of the government’s apparent lack of answers to the allegations, fuelled further by expert analysis that could not determine whether the video was, in fact, a Deepfake.

    Until recently, politicians and businesses could take solace in the knowledge that the technology required to produce a convincing Deepfake was not mainstream and remained in the hands of a minority of developers and specialist companies. Yet a recent video of Nancy Pelosi, the Speaker of the US House of Representatives, though not a Deepfake, goes some way to demonstrating that limited technological capability may not hold back the tide of disinformation.

    The video, allegedly produced by a far-right fringe group, was designed to discredit Mrs Pelosi by creating the false impression that she was drunk during a press conference. The group simply slowed the footage by 25 per cent using publicly available technology, making her speech sound slurred. The clip received over 2.5 million views on Facebook and illustrates that convincing disinformation can be created with freely available tools.
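    To underline how low the technical barrier is, the sketch below shows the kind of speed change involved, using the open-source moviepy library (the 1.x API is assumed, and the file names are placeholders); it is an illustration of the general technique, not a reconstruction of the actual clip.

        # Slowing a clip to 75 per cent of its original speed with moviepy 1.x.
        # Both picture and audio are slowed, which lowers the pitch of the voice
        # and is what makes speech sound sluggish in the manipulated video.
        from moviepy.editor import VideoFileClip
        import moviepy.video.fx.all as vfx

        clip = VideoFileClip("press_conference.mp4")      # placeholder file name
        slowed = clip.fx(vfx.speedx, 0.75)                # play at 75% speed
        slowed.write_videofile("press_conference_slowed.mp4")

    A few lines of freely available code, rather than any specialist Deepfake tooling, is all that separates genuine footage from a widely shared piece of disinformation.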

    The question remains: what can prominent individuals and businesses do to mitigate the risk posed by Deepfakes? While a particularly visual and somewhat frightening development, and despite generating doomsday headlines such as ‘Deep fake videos threaten the world order’, the technology does not necessarily represent a break from traditional fake news.

    There are a variety of means that businesses can adopt to prevent and mitigate the impact of Deepfakes. The key lesson from the Pelosi and Gabonese examples is that the best defence is to ensure that there is no media vacuum where disinformation can effectively dominate the narrative.

    There are several means to achieve this:

    • Reputational capital. The best defence against any malicious online content is a robust and clear reputation. Businesses should establish a strong reputational base that the target audience trust. This trust will help the target audience identify when a Deepfake portrays a message that goes beyond the company’s usual narrative.
    • Effective monitoring. Identifying Deepfakes early provides you with the best chance of minimising the potential reputational fallout. Once a Deepfake is identified, partner with an agency such as the AI Foundation, Zero Trust or Deeptrace to explore options to have the content removed.
    • Rapid rebuttal. Do not leave a vacuum to be exploited by a Deepfake narrative. Clarify the company’s position as soon as possible by drafting clear reactive statements to issue as and when any media coverage of the Deepfake is published.