By Ezra Elias Vivas
Image by Goh Rhy Yan via Unsplash
People today are far more open about mental health than they once were. That openness has reduced stigma, encouraged discussion, and made it easier for people with mental illness to connect and support one another.
One of the most widely used, yet controversial, aids for trauma survivors is the ‘content warning’. ‘Trigger warning’ and ‘content warning’ differ somewhat in connotation and exact definition, but this article will use the terms interchangeably. I’d like to focus on trigger warnings on social media, where many people spend their time connecting with others and sharing information on an unprecedented scale.
Some social media sites, such as Tumblr or Twitter, let users filter or mute certain words or tags, whereas others, such as Instagram or TikTok, do not. Instagram does place a ‘sensitive content’ screen over some posts (TikTok does not), but neither platform gives individual users a way to filter out content that could be triggering. On those platforms, the responsibility falls to users to look out for one another and provide content warnings.
Part of this is simply common courtesy. Very few people want to see images of graphic injury as they flip idly through Instagram stories. It’s important to spread awareness of current events and to talk about sensitive or difficult issues, but it’s just as important to consider how you’re talking about them.
There have been several trends on TikTok in which survivors of trauma talk about their experiences, often in detail. Taking part can be very empowering, and it can raise awareness and reduce stigma for fellow survivors.
However, hearing the details of traumatic experiences similar to their own can also trigger and re-traumatise other survivors. This is where user-provided content warnings come into play. Angus Johnston, a professor at Hostos Community College, CUNY, explains that “these warnings prepare the reader for what’s coming so their attention isn’t hijacked when it arrives.”
Many people argue that it’s not their responsibility to ‘coddle’ others, or that content warnings impinge on their freedom of speech. Both arguments fall flat. Firstly, refusing to accommodate people with trauma shows clear disdain for them: it declares that their ability to choose whether to engage with reminders of their trauma doesn’t matter to you. Secondly, freedom of speech has nothing to do with what private individuals ask of you. It protects you from the government, not from people on social media asking you to use content warnings.
Others might cite exposure therapy among their reasons not to bother with content warnings. Whilst exposure therapy can be an effective way to help people recover from trauma, no one but the person with trauma and their therapist gets to decide when that exposure happens. According to the Maryland Coalition Against Sexual Assault, a trigger warning “does not prevent survivors from being able to engage with content that may be challenging for them. Instead, it lets them knowingly gain exposure at their own pace and on their own terms.”
If you’re wondering how to give content warnings properly, you’ll be happy to know it’s pretty simple.

Firstly, identify whether anything in what you’re posting or sharing might need a warning. Use your best judgement; a good rule of thumb is that if the content would be traumatic to experience firsthand, it probably warrants a content warning.

Secondly, place the warning where it will be seen. On Instagram posts, the first slide (or first two, since Instagram sometimes auto-advances through a post as you scroll) can list the kinds of potentially triggering content, leaving the rest of the post for the issue at hand. On Instagram stories, an easy-to-read, eye-catching text box works well, and if the post you’re sharing is triggering at first glance (such as a standalone image or text without other slides), flip it upside down. On TikTok, a text box that covers the screen for a brief moment gives viewers the chance to pause the video and read the warning.

Thirdly, be specific. Someone with trauma associated with X but not Y isn’t helped by a vague “This post might be triggering.” Instead, try “This post includes discussion of X” or “TW/CW: graphic depiction of Y.” Specific warnings let people make informed decisions about the information they take in.
Additionally, asking social media platforms to add specific, user-controlled ways to mute, blacklist, or filter content or tags would give choice and power back to the people who use these apps. It would also have the curb-cut effect of letting people filter posts entirely unrelated to trauma: if, for instance, Instagram allowed users to block specific tags, such as “#TheUmbrellaAcademy,” you’d be less likely to see spoilers for the latest season. Everyone wins here!
The discussion surrounding mental health is far more open and less stigmatised today than it was in the past, due in part to the unprecedented levels of communication that social media has created. But there is still a long way to go before those who struggle are fully supported, especially in our apps and media. If you want to offer tangible help to trauma survivors, this is one easy way to do it.