Facebook 'robot' moderators deployed on suicide watch
Facebook has, in some parts of the world, begun adding 'robot moderators' to the human moderating teams that review posts alluding to suicide and can notify the police. Do we feel comforted or threatened by this?
Being a bystander to the ultimate despair of another is painful, whether in person or otherwise. Social media makes us privy to the intimate thoughts of strangers, close confidants or distant acquaintances in real time. I remember one evening, travelling as I often am on an intercity train, idly thumbing through Twitter and finding that I had been @ed into a number of conversations about someone who others thought might be on the verge of taking their own life. I didn't know the person and didn't recognise the Twitter name. I searched for them and found that it was a celebrity who was having a terrible time. But what could I do? I didn't know them, and couldn't do anything more than send a supportive tweet. Still, the responsibility lay heavily on me.
"One of the most contentious elements of the new service is the fact that it can, and will, contact first responders such as police or ambulance crews to despatch them to the user's location."
Many of us are connected with hundreds if not thousands of people: a great tangle of golden ribbons spun across the globe knotting us together. Now we can see pain, confusion and hurt unfolding tweet by tweet, blog post by blog post, status update by status update.
The increased visibility of suicide has led to an increased thirst for solutions, all of which betray our fundamental discomfort with this painful subject. Suicide was once a hidden, shameful thing, something for which you could go to prison if you did not succeed. The legal legacy of suicide still haunts us. The Suicide Act 1961 opens with the sentences: “Suicide to cease to be a crime. The rule of law whereby it is a crime for a person to commit suicide is hereby abrogated.” We still persist in using the phrase ‘commit suicide’, a semantic hangover from when suicide was literally ‘self-murder’, with the victim and culprit as one.
Is every suicide preventable?
Because we no longer consider suicide to be a sin worthy of shame and silence, we have more people speaking in public who have survived their own attempts to end their lives, and more people who have lost someone to the tragic ending of a life that could have continued. A coalition of such people, along with NHS bodies and businesses, launched the Zero Suicide Alliance at the Houses of Parliament on November 16th, continuing work based on the idea that every suicide is preventable.
Part of the thinking behind the ZSA approach is that suicide is a tragedy which thrives in silence: in conversations never had, in the space where we are afraid to act and to intervene. The centrepiece of the ZSA launch is a free twenty-minute video training course that aims to equip the wider public with the confidence to recognise someone struggling with suicidal thoughts and to provide on-the-spot help. At the time of writing, just under one thousand people, including your author, had viewed the course.
Suicide is awkward, so it is no wonder that we desire ways of doing the difficult work that do not get our emotional hands dirty. There have been previous attempts to create social media tools that support or intervene when someone appears to be feeling suicidal. The most prominent of these was the Samaritans Radar app, briefly launched and then retired in late 2014 after a significant backlash. The app, which monitored individuals' Twitter streams without their knowledge or permission, sent alerts to an individual's followers if it deemed their tweets to indicate suicidal intent, prompting them to intervene. To many this felt like surveillance; to people with mental health difficulties who used Twitter to discuss difficult experiences, like suicidal thoughts, it felt like an opportunity for well-meaning, or not-so-well-meaning, strangers to jump into conversations unannounced offering uninformed advice. That we are so keen to automate the response to suicide also speaks to our discomfort with the whole business.
Facebook wants to know - and say - how you're feeling
Last week, TechCrunch and other technology outlets reported that social media giant Facebook is beginning a global rollout of artificial intelligence-based tools that will detect suicidal ideation [thoughts or ideas] in a user's posts and automatically trigger moderator intervention. The rollout will not extend to the EU, mainly because the EU is not keen on invasive services that collect data without explicit opt-in.
Facebook already has a number of ways in which users can flag content for action. The AI, which users cannot opt out of, looks at both written posts and video, flagging content posted to Facebook that indicates suicidal intention. A number of options are then available to the human moderators, depending on the severity of the situation. One of the most contentious elements of the new service is the fact that it can, and will, contact first responders such as police or ambulance crews to dispatch them to the user's location. Since Facebook added its live video streaming service Facebook Live in 2016, there have been a number of cases where people have used the service in conjunction with their own deaths: one of the most extreme examples of seeing tragedy play out in real time.
According to TechCrunch, “the moderator can then contact the responders and try to send them to the at-risk user’s location, surface the mental health resources to the at-risk user themselves or send them to friends who can talk to the user.” So, everywhere but the EU, a sufficiently worried Facebook could dispatch the police to your house. [Britain will transition out of the EU from spring 2019.]
This is the flipside of the argument for ending the silence. If even speaking of feeling like you wish to kill yourself triggers an intervention with potential legal ramifications - such as the police, called by a well-meaning friend of a friend, turning up at your door and detaining you under section - do we risk our benign motives driving talk of suicide back underground?
Shifting responsibilities and rights
The conversation about suicide has changed in the last fifty years. It has changed in recent memory and is continuing to change as part of our broader trajectory toward the acceptance and alleviation of mental distress. We are moving from seeing suicide as a personal tragedy to seeing it as a preventable negative health outcome. Our attitudes, and the kinds of solutions we arrive at, still betray a deep discomfort and ambivalence around these tragic deaths. This shift to real-time visibility changes our sense of responsibility without necessarily improving our ability or bravery to intervene. We are presented, in starker terms than ever, with the challenge: do I look away or do I intervene? And if I don’t, will a robot do the job for me?
The Samaritans can be reached 24 hours a day from the UK and Ireland on 116 123.
Mark Brown is former editor of 1 in 4 magazine and writes extensively about mental health and technology. @markbrown