
Until mid-September 2017, Facebook allowed ad buyers to target users who were interested in, or had interacted with, anti-Semitic content[1]. The categories were created by a self-selecting algorithm, which aggregated data from each of its two billion monthly active users based on their activity on pages and feeds, rather than on individuals’ profiles.
In March 2016, Microsoft launched an experimental Twitter chatbot to learn from other users, get smarter and eventually have conversations[2]. Just a few hours later, @TayandYou was spouting white supremacist, pro-genocidal content and was taken down in short order.
This raises two questions in my mind. First, can machines be held accountable for their actions? And second, what should you do as a communications professional if they do misbehave?
Why should we hold machines accountable?
Neural network-style systems are programmed and trained to reach outcomes within certain parameters, such as not letting high-risk people buy insurance, or creating a category for advertisers to target once a topic reaches a certain threshold of interest amongst users. These algorithms are usually very complex, as they have to process a significant amount of information about individual users, groups of users and external conditions, and perform a great many calculations as a result. But once they’re trained to a reasonable degree of success, many organisations simply let them run.
It’s cheap labour; all you have to pay is the operating bill for the server.
The problem, as academics and journalists tell us, is that this learning can sometimes be a ‘black box’ that you can’t see inside.
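To make the threshold mechanism described above concrete, here is a minimal sketch in Python. The function name, data format and threshold value are all invented for illustration; this is not how Facebook’s actual system worked, only the general shape of the logic.

```python
# Rough sketch of threshold-based ad-category creation.
# All names and numbers are illustrative assumptions, not any platform's real system.
from collections import defaultdict

INTEREST_THRESHOLD = 1000  # hypothetical minimum number of interested users


def build_ad_categories(user_activity):
    """user_activity: iterable of (user_id, topic) pairs aggregated from pages and feeds."""
    users_per_topic = defaultdict(set)
    for user_id, topic in user_activity:
        users_per_topic[topic].add(user_id)
    # Any topic that enough users have engaged with becomes a targetable category.
    # Note that nothing in this step asks whether the topic itself is acceptable.
    return {topic for topic, users in users_per_topic.items()
            if len(users) >= INTEREST_THRESHOLD}
```

The uncomfortable part is the final step: once the threshold is crossed, the category exists, and nothing has asked what it actually means.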
Of course, not all algorithms are commercial – one of our clients, SafeToNet, is in the middle of creating algorithms that can detect harmful content online and take appropriate action to prevent children from seeing it. The algorithms can also learn ‘backwards’ – for example, once an algorithm sees that an exchange between two young adults ends with one sending the other a sexually explicit message, it looks back at the cadence of the communication to learn the pattern that led to the explicit content, helping to prevent this in future and removing the harm before it occurs.
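To illustrate the shape of that ‘backwards’ step, here is a short sketch in Python. SafeToNet’s actual algorithms aren’t public, so the message type, features and labels below are invented assumptions; the point is only that a harmful outcome lets you label the messages that preceded it as a risky pattern.

```python
# Illustrative sketch of learning 'backwards' from a harmful outcome.
# The Message type, features and labels are assumptions for illustration only.
from dataclasses import dataclass


@dataclass
class Message:
    timestamp: float  # seconds since epoch
    text: str


def cadence_features(messages):
    """Crude features describing the rhythm of a conversation."""
    gaps = [b.timestamp - a.timestamp for a, b in zip(messages, messages[1:])]
    return {
        "message_count": len(messages),
        "avg_gap_seconds": sum(gaps) / len(gaps) if gaps else 0.0,
        "avg_length": (sum(len(m.text) for m in messages) / len(messages)) if messages else 0.0,
    }


def learn_backwards(conversations, is_harmful):
    """Label the run-up to each harmful message as a risky pattern, so that
    similar exchanges can be flagged before the harm occurs next time."""
    training_set = []
    for messages in conversations:
        if not messages:
            continue
        if is_harmful(messages[-1]):
            run_up = messages[:-1]  # look back at what led to the harmful message
            training_set.append((cadence_features(run_up), 1))
        else:
            training_set.append((cadence_features(messages), 0))
    return training_set
```

A classifier trained on those labelled examples could then score a live conversation as it unfolds, which is what would let a system step in before the explicit content is ever sent.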
The problem is the lack of transparency – according to the Huffington Post and Oxford University, building that transparency in can often make a system less efficient, because it has to be slowed down enough to be overseen[3]. But I’m in complete agreement with Wired Magazine when it said, ‘it would be irresponsible not to try to understand it’[4] – after all, some of these systems are hugely powerful, have no moral compass and reflect the best and worst parts of the human condition without any concept of which is which.
My opinion is really very simple: no machine, no application, no algorithm should go untested or unsupervised, particularly in the period immediately after release or upgrade. You wouldn’t give a few days’ training to a junior member of staff and expect them to perform well without a manager, and algorithms don’t have the common sense or moral compass that new employees have.
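In practice, that supervision can be as simple as routing a new model’s decisions through a person for its first few weeks. Here is a minimal sketch; the review window and function names are assumptions for illustration, not a prescribed implementation.

```python
# Sketch of keeping a human in the loop immediately after a release or upgrade.
# The review window and the function names are illustrative assumptions.
from datetime import datetime, timedelta

REVIEW_WINDOW = timedelta(days=14)  # hypothetical supervision period


def handle(decision, released_at, review_queue, act):
    """While the model is newly released, send its decisions to a human reviewer;
    only act without oversight once the review window has passed."""
    if datetime.now() - released_at < REVIEW_WINDOW:
        review_queue.append(decision)  # a person signs this off, like a manager would
    else:
        act(decision)
```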
Handling a crisis in AI
But if things do go wrong, how do you handle an AI crisis? Well, in many ways it’s no different to handling other crisis situations – just don’t be afraid of the complexities of AI. The first stage of any crisis, robot-fuelled or not, is understanding the situation clearly. Talk to the experts in the company where the problems originated and don’t take no for an answer. After that, we’d recommend following the traditional crisis communications steps.
After the initial surge of adrenalin fades, it’s vital to keep monitoring the situation, assessing the impact and taking action, and to keep an eye on the response across stakeholder groups and across traditional and social media channels.
Above all, when you’re dealing with a machine crisis, the most important thing is to think like a human.
[1] https://www.propublica.org/article/facebook-enabled-advertisers-to-reach-jew-haters
[2] http://uk.businessinsider.com/ai-expert-explains-why-microsofts-tay-chatbot-is-so-racist-2016-3
[3] http://www.huffingtonpost.com/entry/how-ai-can-remain-fair-and-accountable_us_5934ec81e4b062a6ac0ad154
[4] https://www.wired.com/2016/10/understanding-artificial-intelligence-decisions/