Almost exactly five years ago, we wrote a piece looking at how PRs could be replaced by robots in the future. With the recent news that Microsoft sacked twenty-seven writing staff to replace them with AI algorithms, it seems appropriate to look at this prediction again:

…

There’s a growing threat to journalism: robot writers.

A company called Automated Insights has developed a piece of software called WordSmith that generates news stories on topics such as finance and sports, which are published on the likes of Yahoo!, Associated Press and other outlets.

I know what you’re thinking. Surely a machine can’t write as well as a human?

NPR Planet Money (one of my current fave podcasts) recently did an experiment, where it pitted its fastest journalist, Scott Horsley, against WordSmith.

Scott knocked his piece out in an impressive seven minutes. WordSmith took a blistering two minutes.

You might argue that Scott’s piece was superior – it was certainly more colourful – but it raises the question of whether humans are always needed, especially in today’s data and information-hungry media landscape.

The other question is whether the PR industry needs to be worried about software like WordSmith.

Think how ‘PRSmith’ could work.

>PRSmith would scan the web for mentions of a particular brand according to sentiment (these things will get better in the future) and automatically reply.

>PRSmith would recommend responses to emerging threats, price changes, negative reviews and competitor activity and distribute these across digital media channels. The software would learn which responses performed best over time, based on sentiment analysis and impact on sales.

>PRSmith would distribute news to the right journalists (WordSmith or human), including the right information in the right format. PRSmith would never call a journalist up to ask if he/she/it had received the press release.

>PRSmith could respond to journalists’ requests in nanoseconds – without lying, making errors or trying to evade the question.
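To make the thought experiment concrete, the monitoring-and-reply behaviour above can be sketched in a few lines of Python. Everything here is illustrative: the keyword lists, the scoring and the function names are assumptions, and a real system would use a trained sentiment model rather than word matching.

```python
# Hypothetical sketch of a 'PRSmith'-style triage loop.
# Keyword lists and thresholds are illustrative assumptions, not a real product.

NEGATIVE_WORDS = {"broken", "terrible", "refund", "scam"}
POSITIVE_WORDS = {"love", "great", "recommend"}

def sentiment_score(text: str) -> int:
    """Crude keyword sentiment: positive hits minus negative hits."""
    words = set(text.lower().split())
    return len(words & POSITIVE_WORDS) - len(words & NEGATIVE_WORDS)

def triage(mention: str) -> str:
    """Decide how a hypothetical PRSmith might handle a brand mention."""
    score = sentiment_score(mention)
    if score < 0:
        return "escalate"   # negative mention: flag for a tailored response
    if score > 0:
        return "thank"      # positive mention: send an automated thank-you
    return "ignore"         # neutral: log and move on

print(triage("This product is terrible, I want a refund"))  # escalate
```

The interesting (and slightly unnerving) part is the feedback loop the bullets describe: over time the system would adjust which responses it sends based on measured sentiment and sales impact, with no human in between.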

Of course this is all slightly tongue in cheek. PRSmith doesn’t yet exist and even WordSmith focuses on areas that are more easily automated, like stats-heavy sports and financial news. But the rise of automation in the workplace will affect every industry – I don’t see why PR and journalism should be any different.

At present, we don’t believe that many more PR or journalism staff are in danger of losing out to robots immediately – there are many ‘human-centric’ jobs that AIs just can’t do. Similarly, most of the ‘AI PR’ tools that we’ve seen have either been analytics support (and therefore embraced by thousands of relieved PRs!) or terrible, clunky things. But we’d never say never…

The alleged threat of robots taking away human jobs is a topic that has been covered many, many times, by countless PR people, media outlets and academics. Nobody is safe from being replaced, according to the critics of artificial intelligence (AI) who are concerned it will lead to job losses. But what about artists? Surely, the creative genius of the next Banksy, Dali or Hockney must be safe?

Not anymore, apparently. An algorithm dubbed PaintBot, which learns to mimic the unique styles and brushstrokes of any artist, has now been developed. To make matters worse, it takes only six hours to learn an artist’s style and five minutes to create a piece of artwork. And that’s just the start – eventually the AI will exceed the capability of a human.

Time will tell if AI is accepted as an artist. We’ll likely see initial, first-to-market artwork created by AI selling at high prices, before plateauing as the marketplace becomes saturated. I suspect we’ll also see certain forward-thinking artists embrace PaintBot technology, fusing their own style with AI to create something never seen before.

Whether you’re excited by AI or fear it, its impact on the artworld will be fascinating to observe. Frankly, that’ll be the case for every sector.

Tech whizzes from the University of Tokyo and Keio University have created a robotic system called ‘Fusion’ that gives the wearer an extra head and two arms. It sounds completely wacky and does look a bit peculiar when you see it on a human. Worn like a backpack, it has been described by its makers as ‘full body surrogacy for collaborative communication.’

The system is operated remotely using a VR headset and controls, with the idea that the operator has the same perspective as the user. The system uses stereo cameras, anthropomorphic arms and hands with motion sensors.

It’s worth watching the video to understand it better.

Weird as it seems, you can instantly see the types of applications for this technology. In training and corporate learning, for example, a worker can easily and quickly be shown how to assemble an item on a factory floor. The system can also be operated through ‘enforced body guidance’, guiding the user’s hand movements. The developers say that in this mode the system can help people complete tasks they may have difficulty with.

Fusion could be launched in the next three years if it gets enough funding – one to watch.

As a comms person it got me wondering how this could change my working day. It would be mighty handy to be able to simultaneously type up an article, sip my coffee and check my phone for client-related tweets. But then again, I’m not sure my brain could keep up!

Until mid-September 2017, Facebook allowed ad buyers to target users who were interested in, or who had interacted with anti-Semitic content[1]. The categories were created by a self-selecting algorithm, which aggregated data based on user activity on pages and feeds, rather than individuals’ profiles, from each of its two billion active monthly users.

In March 2016, Microsoft launched an experimental Twitter chatbot to learn from other users, get smarter and eventually have conversations[2]. Just a few hours later, @TayandYou was spouting white supremacist, pro-genocidal content and was taken down in short order.

This raises two questions in my mind. First, can machines be accountable for their actions? And second, what should you do as a communications professional if they do misbehave?

Why should we hold machines accountable?

Neural network-style systems are programmed and trained, within certain parameters, to reach outcomes such as not letting high-risk people buy insurance or creating a category for advertisers to target once a topic reaches a certain threshold of interest amongst users. These algorithms are usually very complex, as they have to process a significant amount of information about specific users, users as groups and external conditions, and do a lot of calculations as a result. But once they’re trained to a reasonable degree of success, many organisations simply let them run.
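The threshold rule described above is worth seeing in miniature, because it shows where the oversight gap appears. The sketch below is a deliberately simplified assumption of how such a rule might work – the threshold value, names and data are all illustrative, not anyone’s real system.

```python
from collections import Counter

# Illustrative threshold rule: once enough users show interest in a topic,
# a targeting category is created automatically. The threshold is an
# assumption; real systems would use far larger numbers and more signals.
CATEGORY_THRESHOLD = 3

def update_categories(user_interests, categories):
    """Count interest per topic and auto-create categories past the threshold."""
    counts = Counter(topic for interests in user_interests for topic in interests)
    for topic, n in counts.items():
        if n >= CATEGORY_THRESHOLD and topic not in categories:
            categories.add(topic)  # note: no human review step here
    return categories

cats = update_categories(
    [["cycling", "vegan"], ["cycling"], ["cycling", "chess"]],
    set(),
)
print(cats)  # {'cycling'}
```

Notice there is no line that asks whether a newly created category is acceptable – the logic only cares about the count. That, in essence, is how an unsupervised algorithm can surface something like the anti-Semitic ad categories mentioned earlier.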

The problem, as academics and journalists tell us, is that this learning can sometimes be a ‘black box’ that you can’t see inside.

It’s cheap labour; all you have to pay is the operating bill for the server.

Of course, not all algorithms are commercial – one of our clients, SafeToNet, is in the middle of creating algorithms that can detect harmful content online and take appropriate action to prevent children seeing it. The algorithms can also learn ‘backwards’ – for example, once it sees that an exchange between two young adults ends in one sending the other a sexually explicit message, it looks back at the cadence of communication to learn the pattern that led to the explicit content and help prevent this in future, removing the harm before it occurs.

The problem is the lack of transparency – according to the Huffington Post and Oxford University, putting this in place can often make a system less efficient, because it has to be slowed down enough to be overseen[3]. But I’m in complete agreement with Wired Magazine when it said, ‘it would be irresponsible not to try to understand it’[4] – after all, some of these systems are hugely powerful, have no moral compass and reflect the best and worst parts of the human condition without any concept of which is which.

My opinion is really very simple: no machine, no application, no algorithm should go untested or unsupervised, particularly in the period immediately after release or upgrade. You wouldn’t give a few days’ training to a junior member of staff and expect them to perform well without a manager, and algorithms don’t have the common sense or moral compass that new employees have.

Handling a crisis in AI

But if things do go wrong, how do you handle an AI crisis? Well, in many ways it’s no different to handling other crisis situations – just don’t be afraid of the complexities of AI. The first stage of any crisis, robot-fuelled or not, is understanding the situation clearly. Talk to the experts in the company where the problems originated and don’t take no for an answer. After that, we’d recommend following traditional crisis communications steps.

After the initial surge of adrenalin fades, it’s vital to keep monitoring the situation, assessing the impact, taking action and keeping an eye on the response across stakeholder groups, and across traditional and social media channels.

Above all, when you’re dealing with a machine crisis, the most important thing is to think like a human.

 

[1] https://www.propublica.org/article/facebook-enabled-advertisers-to-reach-jew-haters

[2] http://uk.businessinsider.com/ai-expert-explains-why-microsofts-tay-chatbot-is-so-racist-2016-3

[3] http://www.huffingtonpost.com/entry/how-ai-can-remain-fair-and-accountable_us_5934ec81e4b062a6ac0ad154

[4] https://www.wired.com/2016/10/understanding-artificial-intelligence-decisions/

“Robots are taking all our jobs!” “Robots are going to take over the world!” These are the kinds of daily declarations that have become the norm, with the likes of Stephen Hawking and Elon Musk even predicting that robots will be responsible for the downfall of the human race. (A possibility made all the more terrifying by a pair of robots recently joking about it.)

Though we hope it will never quite come to that, if you haven’t already, it is certainly high time to sit up and take notice. AI is totally disrupting our society, and the PR and marketing industry is certainly not exempt from its effects.

Robots are already making huge inroads into the industry and it is clear why. If it all came down to a question of speed, robots would certainly have us humans beat, as many journalists have personally experienced when putting themselves to the test against their machine counterparts. AI offers the potential to deliver all news in real time and, with the likes of Wordsmith from Automated Insights, robots have been producing data-generated news, such as quarterly earnings reports and sports scores, for a few years now. And their capabilities are only getting better. AI can do far more than just collate the facts into good copy: it can also analyse and even contextualise them. A future where press releases and basic news are automatically created looks pretty certain.

Beyond data and analysis

Beyond creating content from data, AI’s ability to monitor and track your brand and its presence across social and online media far outweighs any human efforts – however big your team or PR agency may be. Moreover, AI can continuously analyse this data, looking for correlations and trends to offer critical marketing insights and better measurement. All of which is helping marketing teams to make better, data-driven decisions that reflect what buyers really want.

But, if reading this makes you think you should quickly start making plans to replace your agency and team with an army of robots, perhaps don’t be quite so hasty.

AI will take over, and already is taking over, routine (and rather tedious) tasks – which is really no bad thing – but there are still areas where robots fall a little short. The use of AI is simply opening up the possibilities for us to concentrate on our “human” strengths. Robots offer us useful insights, noticing tiny changes and details that are on a scale and level beyond our scope, but it is humans who can actually transform this information into something meaningful.

Old-fashioned human thinking

Robots can identify the type of content and channels preferred by your audience, but there is also a vital difference between the right content for potential customers and for those customers who have already purchased your product. We know that “Do you need this piece of software?” is markedly different to “How to install it”, but robots are still unable to make this distinction. To effectively produce and properly utilise varied pieces of content, it is essential to have a real understanding of the purpose they serve.

And this is an understanding that robots still lack.

In a similar vein, robots may be able to generate a standard press release but this is not the same as producing an in-depth advisory or opinion piece discussing the future of your business and industry. Almost ironically, to position your business and brand as a forward-looking thought leader, you need old-fashioned human input.

Robot and human relationships

Robots are not just limited in terms of content production – their creative muscles fall short too. When searching for a new PR agency, you are looking for the one that bowls you over with its team’s creative ideas for a brilliant campaign – the one that you know is going to propel your brand towards success. If robots fail even when trying to create some motivational posters, I think there is little doubt that they will not be able to live up to this.

Most importantly, PR and marketing still revolve around people and relationship management. As humans, we are far better placed to read human behaviour than a robot. Robots can rationalise decisions and analyse consumers, but the problem is that often we are not rational. Humans are irrational beings. How many times have you done or bought something because, well, just because? We shouldn’t forget that we are marketing to humans, not machines. Robots can track patterns to an extent – helping us better understand why consumers tend to drop out of a sales funnel, for example – but on an emotional level, humans most definitely still have the upper hand. There is a reason why charities share the personal stories of those they help, showing us harrowing images to encourage donations – they are playing to our emotions. Machines aren’t capable of tugging on our heart strings in quite the same way.

Robots may not be running businesses’ PR and marketing efforts any time soon, but that doesn’t mean we should overlook the opportunities they offer – rather, we should embrace them, working together with AI. We do not need to go quite so far as becoming some sort of human cyborg, as Elon Musk suggests, but we should apply machine learning to our own intelligence. The robots aren’t coming – they are most definitely here, and they are staying. They won’t be taking over (just yet), but only if we use them to improve and better our own skills and capabilities.
