The topic of fake news has been a mainstay in the media in recent years but, despite the term being coined at around the same time, ‘deepfakes’ are only now beginning to gather interest. The phrase went viral back in May, when a doctored video appeared to show US politician Nancy Pelosi drunk in a TV interview. Similar videos followed quickly, each more impressive than the last. In one, comedian Bill Hader impersonated Arnold Schwarzenegger on a talk show, while another used an unknown actor to make Mark Zuckerberg appear to defame himself.
For the uninitiated, the term deepfake refers to artificial intelligence (AI)-powered technology that synthesises imagery and voices to present something that never happened, usually in video form – imagine a convincing video of yourself, edited so that a voiceover puts words in your mouth you never said. It’s not dramatic to say the response from the media and the public since the first Pelosi video was published has been a mix of surprise at how advanced the technology is, shock at how convincing the videos are, and fear and outrage at how deepfakes could be weaponised.
Ordinarily, as with most new tech, there’d be a range of media coverage outlining the pros and cons of the technology, the risks and the opportunities, and the short- and long-term applications. Not this time. At a point in history where misinformation is a major threat to democracy, businesses and individuals, the media is rightly bringing the concerns around deepfake technology to the public’s attention.
Seeing is believing
The threats of deepfakes are wide-ranging. A competitor could destroy a business’s share price in a day. A foreign power could unsettle a government. Then there’s the crushing stress and pressure faced by individuals in the media spotlight. Imagine waking up to find a doctored video of you has gone viral, you’re being showered with vile abuse and you’re almost powerless to prove it’s fake. It’s a frightening thought, but in today’s world it’s a very real possibility, especially for those in the public eye. While you may be able to handle it, those more vulnerable may not – it’s only a matter of time before a deepfake incites violence somewhere in the world.
It’s one thing for the media to educate the public on deepfakes, but it’s another altogether when journalists and public relations professionals themselves are duped. This is something all of us in communications should be wary of.
Because of the inherent risks outlined above, the impact of deepfakes on how the media works could be significant. For journalists, publishing a breaking story based on a video that is later proved false can damage their reputation, erode readers’ trust in the newspaper or magazine they write for, and even lead to libel lawsuits. As such, we could see a change in the way journalists receive, write and publish stories inspired by video content, as they should first establish its source and legitimacy. But, of course, this assumes best practice is followed – and in the fast-paced environment of a newsroom, it can sometimes be overlooked. The hope is that if journalists don’t have time to verify a video’s authenticity, they’ll treat it as fake and spike the story in favour of pursuing something else.
Public relations professionals face a challenge too. A core pillar of our day-to-day activity is news hijacking, whereby we jump on breaking news by pitching responses and additional information on behalf of the companies we represent. Regularly doing this with video content that turns out to be deepfaked means investing resources in opportunities that won’t lead to coverage, especially if journalists do follow best practice and spike the story. It will also likely damage our relationships with journalists, who’ll become frustrated at being pitched comments for stories they can’t run. To protect our companies, and to maintain good relationships with the media, PR teams would be wise to help journalists establish whether content is legitimate.
Then, of course, there’s the crisis communications element. If a deepfake is released that shows a CEO, for example, acting in a way that damages his or her company’s reputation, public relations becomes critical. Take a hypothetical doctored video depicting racist abuse by a customer service representative in a big brand’s shop: it will fall to the PR team to communicate that the incident never occurred and that the footage is a deepfake. Of course, by the time staff have established internally that the video is fake, it will already have gone viral, making the situation even more challenging.
An even more concerning consideration is the labelling of a legitimate video as a deepfake. The President of the United States has proven in recent years that truth can easily be undermined with ‘fake news’ rhetoric. This tactic, whereby genuine wrongdoing is dismissed as fake, has consequences not just for businesses but for social justice and prosecutors. Unless a solution to deepfakes is found, perpetrators of crimes could walk free because video evidence is no longer trusted.
AI: The poison and the vaccine
Humans have been editing images and videos for years – why is it only now becoming a problem? The rapid development of AI-powered tools has moved video doctoring from a primitive exercise often undertaken for fun to a calculated misinformation tactic for bad actors. Where convincing manipulation once demanded expert skill and painstaking frame-by-frame work, machine-learning models trained on footage of a target can now do much of it automatically.
There is hope, however. While AI may well be a key ingredient in the poison that is deepfakes, it is realistically the only vaccine. We’ll soon reach a point where the human eye can’t detect what is and isn’t real in a video. At that point, we’ll rely on AI-powered tools to counter deepfakes. Fortunately, they’re already in development; so serious is the threat of misinformation that the US Defence Department is leading the way on anti-deepfake programmes.
In the meantime, given there is little legal precedent for criminally charging the creators of deepfakes with distributing them, the responsibility falls on publishers and viewers. The former, in the case of newspapers, can use their platforms to undermine these videos by publishing stories exposing them as fakes. Social media networks must be vigilant in removing them from their sites as quickly as possible or, at the very least, flagging videos that may contain doctored elements. Unfortunately, as Facebook and Instagram recently proved, social media companies are opting to turn a blind eye. Meanwhile, viewers must be cautious about what they share with their networks.
For PR teams, all we can do is undertake proper due diligence and act ethically. If there’s even a hint that a video has been doctored, we must not base news hijacking activities on it. After all, a company that adds to the misinformation will find itself in a worse position, wishing it had steered clear in the first place.