The robots have arrived in ethics

Christian Sharp


Until mid-September 2017, Facebook allowed ad buyers to target users who were interested in, or had interacted with, anti-Semitic content[1]. The categories were generated automatically by an algorithm that aggregated data based on user activity on pages and feeds, rather than on individuals' profiles, across each of the platform's two billion monthly active users.

In March 2016, Microsoft launched an experimental Twitter chatbot designed to learn from other users, get smarter and eventually hold conversations[2]. Just a few hours later, @TayandYou was spouting white supremacist, pro-genocide content, and it was taken down in short order.

This raises two questions in my mind. First, can machines be held accountable for their actions? And second, what should you do as a communications professional when they misbehave?

Why should we hold machines accountable?

Neural network-style systems are programmed and trained to reach outcomes within certain parameters – for example, preventing high-risk customers from buying insurance, or creating a category for advertisers to target once a topic reaches a certain threshold of interest amongst users. These algorithms are usually very complex: they have to process a significant amount of information about individual users, groups of users and external conditions, and perform a great many calculations as a result. But once they're trained to a reasonable degree of success, many organisations simply let them run.
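The second example above – open a topic for ad targeting once enough users show interest – can be sketched in a few lines. This is a toy illustration only, not Facebook's actual system: the function name, threshold and data shape are all invented for the example.

```python
from collections import defaultdict

def build_categories(interactions, threshold=3):
    """Toy sketch: create a targeting category for any topic whose
    unique-user audience meets a minimum size threshold.

    interactions: iterable of (user_id, topic) pairs drawn from
    page/feed activity. All names here are hypothetical.
    """
    audiences = defaultdict(set)
    for user_id, topic in interactions:
        audiences[topic].add(user_id)  # sets de-duplicate repeat visits
    return {topic for topic, users in audiences.items()
            if len(users) >= threshold}

events = [(1, "gardening"), (2, "gardening"), (3, "gardening"),
          (1, "chess"), (1, "chess")]  # repeat views don't add new users
print(build_categories(events))  # → {'gardening'}
```

The point of the sketch is how little judgement is involved: nothing in the logic asks whether a topic *should* become a targeting category – it only asks whether enough people engaged with it.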

The problem, as academics and journalists tell us, is that this learning can sometimes be a ‘black box’ that you can’t see inside.

It’s cheap labour; all you have to pay is the operating bill for the server.

Of course, not all algorithms are commercial – one of our clients, SafeToNet, is in the middle of creating algorithms that can detect harmful content online and take appropriate action to prevent children from seeing it. The algorithms can also learn 'backwards' – for example, once the system sees that an exchange between two young adults ends in one sending the other a sexually explicit message, it looks back at the cadence of communication to learn the pattern that led to the explicit content, helping to prevent it in future and removing the harm before it occurs.
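That 'backwards' learning step can be sketched as a labelling exercise: when a conversation ends in a flagged message, the messages leading up to it are labelled as precursors, so a classifier can later learn the warning pattern. This is purely illustrative and not SafeToNet's implementation – the function, labels and window size are all invented for the example.

```python
def label_precursors(conversation, is_explicit, window=5):
    """Toy sketch of 'backwards' labelling (hypothetical, for illustration).

    conversation: list of message strings in order.
    is_explicit: predicate flagging harmful content.
    Returns (message, label) pairs:
      0 = benign, 1 = precursor to explicit content, 2 = explicit itself.
    """
    labels = [0] * len(conversation)
    for i, message in enumerate(conversation):
        if is_explicit(message):
            labels[i] = 2
            # look back over the lead-up and mark it as a precursor pattern
            for j in range(max(0, i - window), i):
                labels[j] = max(labels[j], 1)
    return list(zip(conversation, labels))

convo = ["hi", "what's up", "send me a pic", "EXPLICIT"]
pairs = label_precursors(convo, lambda m: m == "EXPLICIT", window=2)
```

In a real system the labelled pairs would then feed a model trained to spot the precursor pattern early – which is exactly what lets it intervene before the harm occurs.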

The problem is the lack of transparency – according to the Huffington Post and Oxford University, putting it in place can often make a system less efficient, because it has to be slowed down enough to be overseen[3]. But I'm in complete agreement with Wired magazine when it said 'it would be irresponsible not to try to understand it'[4] – after all, some of these systems are hugely powerful, have no moral compass, and reflect the best and worst of the human condition without any concept of which is which.

My opinion is really very simple: no machine, no application, no algorithm should go untested or unsupervised, particularly in the period immediately after release or upgrade. You wouldn't give a junior member of staff a few days' training and expect them to perform well without a manager – and algorithms don't even have the common sense or moral compass that new employees bring.

Handling a crisis in AI

But if things do go wrong, how do you handle an AI crisis? Well, in many ways it’s no different to handling other crisis situations – just don’t be afraid of the complexities of AI. The first stage of any crisis, robot-fuelled or not, is understanding the situation clearly. Talk to the experts in the company where problems originated and don’t take no for an answer. After that, we’d recommend traditional crisis communications steps, including:

  • Communicate clearly with stakeholders: let all relevant groups know what has happened in clear, comprehensible language
  • Show an appreciation of impact: demonstrate that you understand who this affects and how much
  • Let people know what you’re doing specifically to remedy this situation and how this affects your machine learning / AI strategy in general
  • Provide technical context: how long has this specific AI been in use, how new is it – essentially, is this a surprising development?
  • Provide positive context: what’s your vision for how well this piece of technology could work in future and how it could improve people’s lives when it works correctly
  • Don’t blame the machine!

After the initial surge of adrenalin fades, it’s vital to keep monitoring the situation, assessing the impact, taking action and keeping an eye on the response across stakeholder groups, and across traditional and social media channels.

Above all, when you’re dealing with a machine crisis, the most important thing is to think like a human.





