Can machines stop suicide?

September 27, 2017
ptrainor

Original article posted by Digital Agenda, September 12, 2017

In a week when attention has turned to the often-overlooked issue of suicide, we’re reminded that it remains the biggest single killer of UK men under 45. Experts have been testing ways to use artificial intelligence and machine learning to identify and reach the men most in need of help. So could tech intervene to save lives? Julian Blake reports.

Whichever way you look at it, it’s a shocking statistic. Suicide is the biggest single killer of men in the UK under the age of 45. Last year, according to official statistics, 4,287 male deaths in this country were registered as suicide.

That means that a man in the UK takes his own life roughly every two hours. Men are three times more likely than women to die by suicide. When you add women to the total, it is little wonder that so many of us have been touched by this modern tragedy, with all the dreadful effects on those left behind.

At both a personal and societal level, the big question is: why? To address the issue, practitioners and policymakers have analysed the reasons – not least to support the near-million people who contact support services each year with suicidal feelings.

Every personal story is different, of course. But official statistics and analysis from specialists like Samaritans tell us that divorce, isolation and poverty are important factors in driving people to such desperation.

There are strong arguments that men find it harder to express their feelings than women, are more likely to act on impulse and drink alcohol when depressed, and are more likely to carry around feelings of extreme guilt.

To respond to the challenge, specialist charities have emerged. These include CALM – the Campaign Against Living Miserably – which was set up to offer support, challenge the culture around male suicide, support those bereaved by it, and help bring the male suicide rate down.

“We believe that there is a cultural barrier preventing men from seeking help as they are expected to be in control at all times, and failure to be seen as such equates to weakness and a loss of masculinity,” says CALM. “We believe that if men felt able to ask for and find help when they need it then hundreds of male suicides could be prevented.”

CALM says that, as a result of its work, 456 suicides were prevented in 2016. And the encouraging bigger picture is that there has been a decline in the UK suicide rate, following an increase between 2007 and 2013. That decline has been more significant for men than for women.

CALM runs a free and confidential phone helpline, open 5pm to midnight every day. It also offers a parallel webchat service. Both are direct, human-driven services.

But could technology – in particular the opportunities offered by artificial intelligence and machine learning – also help to address the problem? Could it help break down some of the communication barriers that men seem to face when it comes to discussing their feelings? And could it alert us to evidence of this most human of problems?

These are questions that London-based Ai product creator and thinker Pete Trainor, the co-founder of US Ai, has been trying to address, by bringing his firm’s expertise into the field of men’s mental health.

“In the technology world we recently hit an inflection point that’s going to give us a huge opportunity to do what I always dreamed we could do – help vulnerable people, even before they know they need help,” Trainor blogged last year. “It seems such an incredible waste not to use these wonderful, powerful technological advancements on tackling the bigger issues, rather than trying to sell more shiny things to people.”

Driven by this belief in the potential of Ai to help, last year Trainor’s team worked up an experimental product called Su, offering “a life support system to help men question, explore and find answers to the things that keep them awake at night.” Su (named as an inflection of US) provided a simple chat-based system that would be trained to ask lots of questions “to help you solve the reasons you feel the way you feel.”

Trainor’s team trod very carefully with Su, testing the product only in closed beta with 60 handpicked and vetted men to assess its potential to help in live cases. One of the key reasons for the caution was to do with unknown regulatory territory in using Ai to support real people with real challenges.

US Ai is looking at the potential for larger-scale deployment of Ai with men’s mental health organisations, but before it can, it would need to overcome the legal hurdles around producing and then codifying Ai-driven counselling advice. In the meantime, the firm is working with experts in the field of men’s mental health to understand the potential.

“It’s working, but these things are slow,” concedes Trainor. “It takes years and years to train a system to understand the things that make us fragile and human. But what we are doing is starting to look at more innovative ways and methods of collecting enough human generated data to make the training quicker.”

Trainor believes that the rate of success for Ai relies on its ability to access more and more data about people, so it can learn as much as possible about us, as fast as possible. The real potential for Ai to help address an issue like suicide, he says, lies in its predictive capabilities – its capacity to alert us, and the people who can help us, before we even know we need that help.

Campaigners for ‘Ai for good’ argue that our digital footprints offer too many invaluable clues about our behaviour, including our mental health and risk of harm, to be left unexplored. Artificial intelligence, they say, is the only technology capable of processing this data.

If Ai can access the right kinds of data, says Trainor, it could potentially detect and then intervene to help men in crisis. When his team ran sentiment analysis on notes written by men contemplating suicide, they found that one sentiment almost always marked a genuine intent to go through with taking their own life – a ‘sense of burden’.

A bot like Su, then, could ask lots of questions and, if it detects a sense of burden in its analysis, flag the issue and alert a human to intervene.
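To make that flagging step concrete, here is a minimal sketch of how a bot might screen a message and escalate to a human. It is not Su’s or US Ai’s actual logic: the phrase list, the scoring and the threshold below are illustrative assumptions standing in for a trained classifier, and `notify_human_counsellor` is a hypothetical placeholder.

```python
# Illustrative sketch only: a stand-in for a trained "sense of burden"
# classifier, using a hand-picked phrase list and a simple threshold.
from dataclasses import dataclass

# Hypothetical phrases associated with a 'sense of burden'
# (assumed for illustration, not taken from any real training data).
BURDEN_PHRASES = [
    "burden on", "better off without me", "weighing everyone down",
    "holding everyone back", "drag on my family",
]


@dataclass
class ScreenResult:
    score: float   # fraction of burden phrases found in the text
    flagged: bool  # True if the text should be escalated to a human


def screen_message(text: str, threshold: float = 0.2) -> ScreenResult:
    """Score a chat message for 'sense of burden' language."""
    lowered = text.lower()
    hits = sum(1 for phrase in BURDEN_PHRASES if phrase in lowered)
    score = hits / len(BURDEN_PHRASES)
    return ScreenResult(score=score, flagged=score >= threshold)


def notify_human_counsellor(text: str, score: float) -> None:
    # Placeholder: route the conversation to a person who can intervene.
    print(f"Escalating conversation (score={score:.2f}) to a human.")


def handle_message(text: str) -> None:
    result = screen_message(text)
    if result.flagged:
        # The Ai never acts alone; a flagged message goes to a human.
        notify_human_counsellor(text, result.score)


if __name__ == "__main__":
    handle_message("I just feel like a burden on everyone around me.")
```

In practice the score would come from a model trained on real data rather than a phrase list, but the overall pattern (score the text, compare against a threshold, hand off to a human) is the escalation step described above.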

The ethical question that Trainor and his team are wrestling with is whether monitoring people for signals of feelings like a sense of burden is a violation of their privacy, especially if you don’t tell them up front that you are doing that. If a person using a bot knows that the Ai is looking for clues about their vulnerability, it could undermine the whole exercise.

“You’d need to tell them up front what cues and tics the Ai is monitoring for and that would totally defeat the purpose of using it,” says Trainor. “Men especially would change their behaviour – it’s called the Hawthorne Effect – when they feel like they’re being monitored – and they just wouldn’t use a service to support them.”

Trainor believes there is also potential for training Ai to understand human behaviour – say, by monitoring a wifi network to look at data from the phones connected to it. “An Ai could absolutely look through chats, emails, notes, social media posts and all the other passive data footprints people have on their phones and look for that sense of burden, for early warning detection,” he suggests. Or it could look for people whose digital footprints suggest a certain behaviour, then map them against known suicide hotspots on train lines.

Big practical and ethical questions remain, of course, not least around privacy and the onus of responsibility to respond.

“If you allow Ai to access networks like that,” Trainor asks, “whose responsibility is it to intervene? Or is it nobody’s role?” If all of that sounds creepy or invasive, Trainor suggests, perhaps we should think about the data already being collected about us, which few of us challenge.

“They monitor people’s browsing behaviour now, and for what? Sales and marketing opportunities. That’s arguably a far less moral use of our private data than giving it to Ai for our own good.”

However the tech starts to be deployed in the years ahead, it’s clear from the shocking headline stats that the human need exists. It’s also clear that the desire is there among many in industry to deploy Ai for good purpose. It’s an area of research that’s certain to continue.

It’s also clear that we have a big opportunity in front of us. Ai assistants like Amazon Echo and Google Home are becoming ever more ubiquitous. Once they have life support baked into them, as they surely will, the opportunities for them to provide monitoring, counselling and support are profound. The Amazons of the world just have to step beyond their consumer-driven business models, and take responsibility.