We can find 'our people' on social media - but not if mental health content is over-regulated
10 June 2019

Instagram is hiding posts for #depression to "protect its community from content that may encourage behaviour that can cause harm or even lead to death". Mark Brown assesses the unintended consequences of emerging social media policies.
Over the last decade and a half social media sites such as Twitter and Facebook have been ways in which people with mental health difficulties have extended their social circles, met other people with similar experiences and found a new collective voice. New proposals for the regulation of social media in the UK run the risk of destroying this space where people speak frankly, candidly and with grace, humour and kindness without the need to hide the bleaker, more disturbing aspects of life with mental health difficulty.
"What makes social media content so valuable to people who actually live with mental health difficulties can be what makes it so disturbing to others who do not live with the same challenges. If you can’t search for ‘depression’, how would you find others who have similar experiences?"
This year began with Secretary of State for Health and Social Care Matt Hancock writing to social media companies, warning them to remove suicide and self-harm related material. "It is appalling how easy it still is to access this content online," he wrote, "and I am in no doubt about the harm this material can cause, especially for young people. It is time for internet and social media providers to step up and purge this content once and for all."
In April the UK government published the Online Harms White Paper, a policy document setting out plans to curb the potential for social media platforms to cause wider social harms. It proposes many sensible changes that would prevent the use of social media for the promotion of disinformation, criminal activity and harassment. It defines ‘encouraging or assisting suicide’ as a harm with a clear definition and ‘advocacy of self-harm’ as a harm with a “less clear definition”. The white paper proposes a new regulator; a duty of care to users; “civil fines for proven failures in clearly defined circumstances”; and the possibility of senior management liability, which “could involve personal liability for civil fines, or could even extend to criminal liability.” What is important to note is that the proposals in the white paper are about limiting the spread of content that promotes self-harm and suicide, not about helping the individual who creates or shares it.
Seeking solidarity or support
One of the transformative elements of social media for people who experience mental health difficulties has been the potential to find others who have had similar experiences. For this to happen, content must be shared in public. What makes social media content so valuable to people who actually live with mental health difficulties can be what makes it so disturbing to others who do not live with the same challenges. What we discuss with our peers on social media and how we discuss it is heavily dependent on context for its meaning. One person’s appeal for support or sharing of current feelings can be another’s promotion of unhealthy ideas or even encouragement to replicate them.
It’s tempting to picture a room full of people somewhere diligently reading, watching or listening to everything that appears on social media. In reality, each day social media platforms such as Twitter, Instagram, YouTube and Facebook publish more content than a person could review in a lifetime, or in hundreds of lifetimes. As individuals, our response to any online material is guided by our knowledge of the poster’s intentions and our ability to read nuance. To social media platforms wishing to maximise users and profit while avoiding prosecution and fines, it’s all just content.
The sheer volume of posted material means that social media platforms have a limited number of options for moderating content. Nuance and scale do not mix. Cracking down on proscribed content will most likely mean combining beefed-up user-reporting measures with structural or technical changes.
The largest and quickest structural change a platform can make is to change its terms of service, for example by including a clause that says ‘users may not promote or be seen to promote suicide or self-harm’, justifying any subsequent action the platform takes in response. Structural changes might also involve altering the way a platform itself functions or the ways it responds to certain forms of content. Platforms for young children sometimes make it impossible to type certain words in messages. Other platforms have experimented with blocking outward links to certain domains.
Facebook has undertaken many experiments with altering what content its news feed displays to users, most notably in 2014 when it revealed it had been experimenting to see whether what it showed users changed their emotions. YouTube is currently facing criticism for the way its recommendations suggest videos of young children to users who have been watching sexually themed content. Elsewhere, Instagram is ‘hiding’ posts tagged #depression. A search for this hashtag delivers the message “We’ve hidden posts for #depression to protect our community from content that may encourage behaviour that can cause harm or even lead to death”, followed by a ‘Get Support’ link.
Other solutions available to social media platforms rely upon mechanisms to ‘detect’ material considered problematic and then take action to remove it, or the accounts that have posted or shared it. Detection based on words alone is notoriously difficult to get right. In 2014, the Samaritans Radar app searched for particular words in the tweets of accounts a user followed on Twitter, then notified the user so they could intervene and offer support. It was retired after one month, following protests about privacy and surveillance, many of them from people with mental health difficulties themselves.
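The lack of nuance is easy to illustrate. The sketch below is not the Samaritans Radar code, and the watch list and example posts are invented; it simply shows how a naive keyword match treats a news item, an idiom and a genuine expression of distress identically.

```python
# Illustrative only: a naive keyword matcher of the kind that word-based
# detection relies on. It cannot tell reporting, idiom and genuine
# distress apart -- every match looks the same.

KEYWORDS = {"suicide", "self-harm", "want to die"}  # hypothetical watch list

def flag(post: str) -> bool:
    """Return True if any watched phrase appears anywhere in the post."""
    text = post.lower()
    return any(keyword in text for keyword in KEYWORDS)

posts = [
    "New report on suicide prevention funding published today",   # news item
    "That meeting made me want to die of boredom",                 # idiom
    "I keep thinking about suicide and I don't know who to tell",  # distress
]

for post in posts:
    print(flag(post), "-", post)

# All three posts are flagged; only one is from a person who may need support.
```

Everything that separates these three posts lives in context the keyword list cannot see, which is precisely the nuance that disappears when detection has to operate at the scale of a whole platform.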
At present, Twitter offers the option to report an individual tweet as ‘abusive or harmful’. A subsequent page allows users to select the option "This person is encouraging or contemplating suicide or self-harm" and to specify that the person who sent the tweet is ‘potentially in danger’. That person is then sent an auto-generated email, to which they cannot reply, directing them to the Samaritans. An individual considered at risk in the eyes of others might receive many such emails. In 2017, Facebook began the international roll-out of AI-assisted content moderation to detect suicidal ideation. This flags suspected content for action by human moderators and, outside the EU, can be escalated all the way up to calling authorities such as the police.
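In outline, that kind of system pairs an automated risk score with human review and, in some jurisdictions, escalation. The thresholds, names and rules in the sketch below are assumptions made for illustration, not a description of Facebook's actual system.

```python
# Illustrative sketch of a flag-and-escalate moderation pipeline: a model
# scores each post, anything above a review threshold goes to a human
# moderator, and only the highest-risk cases are escalated further.
# The thresholds and actions here are invented, not any platform's real rules.

from dataclasses import dataclass

REVIEW_THRESHOLD = 0.6    # assumed: send to a human moderator
ESCALATE_THRESHOLD = 0.9  # assumed: contact authorities (non-EU only)

@dataclass
class Decision:
    action: str   # "ignore", "human_review" or "escalate"
    score: float

def triage(score: float, in_eu: bool) -> Decision:
    """Map a model's risk score for a post to a moderation action."""
    if score >= ESCALATE_THRESHOLD and not in_eu:
        return Decision("escalate", score)
    if score >= REVIEW_THRESHOLD:
        return Decision("human_review", score)
    return Decision("ignore", score)

print(triage(0.95, in_eu=False))  # Decision(action='escalate', score=0.95)
print(triage(0.95, in_eu=True))   # Decision(action='human_review', score=0.95)
print(triage(0.4,  in_eu=False))  # Decision(action='ignore', score=0.4)
```

Where the thresholds sit, who reviews what, and when the authorities are called are exactly the kinds of judgement the white paper would require platforms to make millions of times a day.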
Social media platforms contain public material that dates back years. The best guide we have to what a large-scale response to a huge amount of historic content on a social media platform might look like is Tumblr’s response to a change in US law. In March 2018 the US Congress passed the Fight Online Sex Trafficking Act and the Stop Enabling Sex Traffickers Act (FOSTA/SESTA), legislation that gives prosecutors more power to tackle sex trafficking and makes social media platforms legally responsible for the actions of their users. According to WIRED magazine, Tumblr, a home for much sex-related content, responded by using AI to flag and remove “photos, videos, or GIFs that show real-life human genitals or female-presenting nipples, and any content—including photos, videos, GIFs and illustrations—that depicts sex acts.” This led to many incorrect decisions: “Classical paintings of Jesus Christ were flagged, as were photos and GIFs of fully clothed people, patents for footwear, line drawings of landscape scenes, discussions about LGBTQ+ issues and more.”
The largest risk for people with mental health difficulties is being forced back into the shadows for fear of falling foul of rules about what we are allowed to say in public. While social media platforms feel like public spaces, they are private entities that rely on the public’s usage for their revenues. At present the big platforms seem reluctant to limit further the freedom of people with mental health difficulties to discuss and share, but that reluctance is part of the same stance that resists implementing any further controls on what anyone posts and shares.
The danger of the proposals in the Online Harms White Paper is that companies may take decisions that erode or remove the confidence of people with mental health difficulties to share freely, and their ability to search for and find others with similar experiences. If you can’t search for ‘depression’, how would you find others who have similar experiences? People with mental health difficulties are already in a position where their posts might be hidden, reported or flagged by others discomforted by what they share or reveal. We can understand nuance when we see individual posts; any solution that tries to deal with millions of posts will either lack nuance or be expensive. The easiest response to such material is to get it out of sight as quickly and efficiently as possible, either by banning the users who post content deemed unacceptable or by removing the content itself. Even a very clever at-scale solution has the potential to be very stupid in practice.
Given a sufficient legal threat of liability, it is possible that ‘a purge’ as suggested by Matt Hancock might be implemented without nuance. People who live day to day with mental health difficulties are not a powerful lobby and, ironically, it has often been social media where our voices have been strongest and most authentic. Where proposals for a crackdown might see risk, we have found opportunity.
The consultation on the Online Harms White Paper closes at 11:59pm on 1 July 2019.